name: train_I-70
title: A Multi-Agent System for Building Dynamic Ontologies

Abstract. Building ontologies from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and outline, as perspectives, the modifications required to reach a complete ontology building solution.

1. INTRODUCTION
Nowadays, it is well established that ontologies are needed for the semantic web, knowledge management, B2B... For knowledge management, ontologies are used to annotate documents and to enhance information retrieval. But building an ontology manually is a slow, tedious, costly, complex and time-consuming process. Currently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date. It would mean creating dynamic ontologies [10], and it justifies the emergence of ontology learning techniques [14] [13].
Our research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts. Our aim is not to build an exhaustive, general hierarchical ontology but a domain-specific one. We propose a semi-automated tool since an external resource is required: the "ontologist". An ontologist is a kind of cognitive engineer, or analyst, who uses information from texts and expert interviews to design ontologies.
In the multi-agent field, ontologies generally enable agents to understand each other [12]. They're sometimes used to ease the ontology building process, in particular in collaborative contexts [3], but they rarely represent the ontology itself [16]. Most works interested in the construction of ontologies [7] propose the refinement of ontologies. This process consists in using an existing ontology and building a new one from it. This approach differs from ours because Dynamo starts from scratch. Researchers working on the construction of ontologies from texts claim that the work to be automated requires external resources such as a dictionary [14] or web access [5]. In our work, we propose an interaction between the ontologist and the system; our external resource lies both in the texts and the ontologist.
This paper first presents, in section 2, the big picture of the Dynamo system, in particular the motives that led to its creation and its general architecture. Then, in section 3, we discuss the distributed clustering algorithm used in Dynamo and compare it to a more classic centralized approach. Section 4 is dedicated to some enhancements of the agents' behavior designed by taking into account criteria ignored by clustering. Finally, in section 5, we discuss the limitations of our approach and explain how they will be addressed in further work.
2. DYNAMO OVERVIEW
2.1 Ontology as a Multi-Agent System
Dynamo aims at reducing the need for manual actions in
processing the text analysis results and at suggesting a concept
network kick-off in order to build ontologies more efficiently. The
chosen approach is completely original to our knowledge and uses an adaptive multi-agent system. This choice comes from the qualities offered by multi-agent systems: they can ease the interactive design of a system [8] (in our case, a conceptual network), they allow its incremental building by progressively taking into account new data (coming from text analysis and user interaction), and, last but not least, they can be easily distributed across a computer network.
Dynamo takes a syntactical and terminological analysis of texts
as input. It uses several criteria based on statistics computed from
the linguistic contexts of terms to create and position the concepts.
As output, Dynamo provides the analyst with a hierarchical organization of concepts (the multi-agent system itself) that can be validated, refined or modified until he/she obtains a satisfying state of the semantic network.
An ontology can be seen as a stable map constituted of
conceptual entities, represented here by agents, linked by labelled
relations. Thus, our approach considers an ontology as a type of
equilibrium between its concept-agents where their forces are
defined by their potential relationships. The ontology modification
is a perturbation of the previous equilibrium by the appearance or
disappearance of agents or relationships. In this way, a dynamic
ontology is a self-organizing process occurring when new texts are
included into the corpus, or when the ontologist interacts with it.
To support the needed flexibility of such a system we use a self-organizing multi-agent system based on a cooperative approach [9]. We followed the ADELFE method [4], proposed to drive the design of this kind of multi-agent system. It justifies how we designed some of the rules used by our agents in order to maximize the cooperation degree within Dynamo's multi-agent system.
2.2 Proposed Architecture
In this section, we present our system architecture. It addresses
the needs of Knowledge Engineering in the context of dynamic
ontology management and maintenance when the ontology is linked
to a document collection.
The Dynamo system consists of three parts (cf. figure 1):
• a term network, obtained thanks to a term extraction tool
used to preprocess the textual corpus,
• a multi-agent system which uses the term network to make a
hierarchical clustering in order to obtain a taxonomy of
concepts,
• an interface allowing the ontologist to visualize and control
the clustering process.
Figure 1: System architecture — the ontologist interacts through an interface with the multi-agent system (concept agents), which is built on the term network produced by the term extraction tool.
The term extractor we use is Syntex, a tool that has been used efficiently for ontology building tasks [11]. We selected it mainly because of its robustness and the great amount of information extracted. In particular, it creates a "Head-Expansion" network which has already proven to be interesting for a clustering system
[1]. In such a network, each term is linked to its head term (i.e., the maximum sub-phrase located at the head of the term) and its expansion term (i.e., the maximum sub-phrase located at the tail of the term), and also to all the terms for which it is a head or an expansion term. For example, "knowledge engineering from text" has "knowledge engineering" as head term and "text" as expansion term. Moreover, "knowledge engineering" is composed of "knowledge" as head term and "engineering" as expansion term.
With Dynamo, the term network obtained as the output of the extractor is stored in a database. For each term pair, we assume that it is possible to compute a similarity value in order to make a clustering [6] [1]. Because of the nature of the data, we focus only on similarity computation between objects described by binary variables, that is, each item is described by the presence or absence of a set of characteristics [15]. In the case of terms, we generally deal with their usage contexts. With Syntex, those contexts are identified by terms and characterized by some syntactic relations.
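As an illustration, the fragment of the network built from the example above could be rendered as follows (the dictionary layout is our own sketch, not Syntex's actual output format):

```python
# Hypothetical in-memory rendering of a fragment of the "Head-Expansion"
# network; each term maps to its head and expansion terms (None for words).
head_expansion = {
    "knowledge engineering from text": {"head": "knowledge engineering", "expansion": "text"},
    "knowledge engineering": {"head": "knowledge", "expansion": "engineering"},
    "knowledge": {"head": None, "expansion": None},
    "engineering": {"head": None, "expansion": None},
    "text": {"head": None, "expansion": None},
}

def terms_headed_by(term, network=head_expansion):
    """All terms for which `term` plays the role of head."""
    return [t for t, link in network.items() if link["head"] == term]
```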
The Dynamo multi-agent system implements the distributed clustering algorithm described in detail in section 3 and the rules described in section 4. It is designed to be both the system producing the resulting structure and the structure itself. This means that each agent represents a class in the taxonomy. The system output is then the organization obtained from the interaction between agents, while taking into account feedback coming from the ontologist when he/she modifies the taxonomy given his/her needs or expertise.
3. DISTRIBUTED CLUSTERING
This section presents the distributed clustering algorithm used in Dynamo. For the sake of understanding, and because of its evaluation in section 3.1, we recall the basic centralized algorithm used for hierarchical ascending clustering in a non-metric space, when a symmetrical similarity measure is available [15] (which is the case of the measures used in our system).
Algorithm 1: Centralized hierarchical ascending clustering algorithm

Data: List L of items to organize as a hierarchy
Result: Root R of the hierarchy

while length(L) > 1 do
    max ← 0;
    A ← nil;
    B ← nil;
    for i ← 1 to length(L) do
        I ← L[i];
        for j ← i + 1 to length(L) do
            J ← L[j];
            sim ← similarity(I, J);
            if sim > max then
                max ← sim;
                A ← I;
                B ← J;
            end
        end
    end
    remove(A, L);
    remove(B, L);
    append((A, B), L);
end
R ← L[1];
In algorithm 1, for each clustering step, the pair of the most
similar elements is determined. Those two elements are grouped in a
cluster, and the resulting class is appended to the list of remaining
elements. This algorithm stops when the list has only one element
left.
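For concreteness, a direct Python transcription of algorithm 1 might look as follows (a sketch: the `similarity` argument stands for whatever symmetrical measure is plugged in, and how it scores the nested-tuple clusters, e.g. by average link, is left to the caller):

```python
def hierarchical_ascending_clustering(items, similarity):
    """Centralized hierarchical ascending clustering (algorithm 1).

    `items` is the list L of items to organize; `similarity` is a
    symmetrical measure over items/clusters. Returns the root of the
    resulting binary hierarchy as nested tuples.
    """
    L = list(items)
    while len(L) > 1:
        best, A, B = 0, None, None
        # Find the pair (A, B) of most similar elements
        # (like the pseudocode, this assumes some pair scores above 0).
        for i in range(len(L)):
            for j in range(i + 1, len(L)):
                sim = similarity(L[i], L[j])
                if sim > best:
                    best, A, B = sim, L[i], L[j]
        L.remove(A)
        L.remove(B)
        L.append((A, B))  # the new cluster rejoins the list
    return L[0]
```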
The hierarchy resulting from algorithm 1 is always a binary tree because of the way grouping is done. Moreover, grouping the most similar elements is equivalent to moving them away from the least similar ones. Our distributed algorithm is designed by relying on those two facts. It is executed concurrently in each of the agents of the system.
Note that, in the remainder of this paper, we used for both algorithms an Anderberg similarity (with α = 0.75) and an average-link clustering strategy [15]. Those choices have an impact on the resulting tree, but they affect neither the global execution of the algorithm nor its complexity.
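The paper names the Anderberg measure without spelling out its formula; one common parametrised family over binary descriptions — which reduces to Jaccard for α = 1 and to Dice for α = 0.5 — is sketched below under that assumption:

```python
def anderberg_similarity(x, y, alpha=0.75):
    """Parametrised similarity between two items described by binary
    features, here given as Python sets of present characteristics.

    a = shared features, b/c = features unique to x resp. y.
    NOTE: this particular formula is an assumption on our part; the
    paper only cites "Anderberg similarity with alpha = 0.75".
    """
    a = len(x & y)
    b = len(x - y)
    c = len(y - x)
    if a == 0:
        return 0.0
    return a / (a + alpha * (b + c))
```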
We now present the distributed algorithm used in our system. It is bootstrapped in the following way:

• a TOP agent having no parent is created; it will be the root of the resulting taxonomy,
• an agent is created for each term to be positioned in the taxonomy; they all have TOP as parent.

Once this basic structure is set, the algorithm runs until it reaches an equilibrium and then provides the resulting taxonomy.
Figure 2: Distributed classification: Step 1 — the children A1, ..., An of parent P; each child (here Ak) reports to P its most dissimilar brother (here A1).
The first step of the process (figure 2) is triggered when an agent (here Ak) has more than one brother (since we want to obtain a binary tree). It then sends a message to its parent P indicating its most dissimilar brother (here A1). P receives the same kind of message from each of its children. In the following, this kind of message will be called a "vote".
Figure 3: Distributed clustering: Step 2 — P creates a new agent P′ and asks the electing children (Ak to An) to make P′ their new parent.
Next, when P has got messages from all its children, it starts the second step (figure 3). Thanks to the received messages indicating the preferences of its children, P can determine three sub-groups among its children:

• the child which got the most "votes" from its brothers, that is, the child being the most dissimilar from the greatest number of its brothers. In case of a draw, one of the winners is chosen randomly (here A1),
• the children that allowed the "election" of the first group, that is, the agents which chose their brother of the first group as being the most dissimilar one (here Ak to An),
• the remaining children (here A2 to Ak−1).

Then P creates a new agent P′ (having P as parent) and asks agents from the second group (here agents Ak to An) to make it their new parent.
Figure 4: Distributed clustering: Step 3 — the electing children adopt P′ as their new parent, creating a new intermediate level.
Finally, step 3 (figure 4) is trivial. The children concerned (here agents Ak to An) take P's message into account and choose P′ as their new parent. The hierarchy has just gained a new intermediate level.
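Rendered sequentially from the parent's point of view, one such vote-and-split round could be sketched as follows (the function names and the simulation of messaging by plain function calls are ours, not Dynamo's actual implementation):

```python
import random
from collections import Counter

def split_children(children, similarity):
    """One clustering round run by a parent P over its children.

    Step 1: each child votes for its most dissimilar brother.
    Step 2: the most-voted child is identified; its "electors" move
    under a freshly created parent P'. Assumes len(children) >= 3
    (with a single brother, agents stop voting).
    Returns (children kept by P, children moved under P').
    """
    votes = {}
    for c in children:
        brothers = [b for b in children if b is not c]
        votes[c] = min(brothers, key=lambda b: similarity(c, b))

    tally = Counter(votes.values())
    top = max(tally.values())
    # Break draws randomly, as the paper specifies.
    most_voted = random.choice([c for c, n in tally.items() if n == top])

    electors = [c for c in children if votes[c] is most_voted]  # go under P'
    kept = [c for c in children if c not in electors]           # stay with P
    return kept, electors
```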
Note that this algorithm generally converges, since the number of brothers of an agent drops. When an agent has only one remaining brother, its activity stops (although it keeps processing messages coming from its children). However, in a few cases we can reach a "circular conflict" in the voting procedure, when for example A votes against B, B against C, and C against A. With the current system no decision can be taken. The current procedure should be improved to address this, probably using a ranked voting method.
3.1 Quantitative Evaluation
Now, we evaluate the properties of our distributed algorithm. This requires beginning with a quantitative evaluation, based on its complexity, while comparing it with algorithm 1 from the previous section.
Its theoretical complexity is calculated for the worst case, by considering the similarity computation operation as elementary. For the distributed algorithm, the worst case means that in each run, only a two-item group can be created. Under those conditions, for a given dataset of n items, we can determine the number of similarity computations.
For algorithm 1, we note l = length(L); then the innermost "for" loop is run l − i times, and its body holds the only similarity computation, so its cost is l − i. The second "for" loop is run for i ranging from 1 to l, so its cost is $\sum_{i=1}^{l}(l-i)$, which can be simplified to $\frac{l \times (l-1)}{2}$. Finally, for each run of the "while" loop, l is decreased from n to 1, which gives us t1(n) as the number of similarity computations for algorithm 1:

$$t_1(n) = \sum_{l=1}^{n} \frac{l \times (l-1)}{2} \qquad (1)$$
For the distributed algorithm, at a given step, each one of the l agents evaluates the similarity with its l − 1 brothers. So each step has an l × (l − 1) cost. Then, groups are created and another vote occurs with l decreased by one (since we assume the worst case, only groups of size 2 or l − 1 are built). Since l is equal to n on the first run, we obtain tdist(n) as the number of similarity computations for the distributed algorithm:

$$t_{dist}(n) = \sum_{l=1}^{n} l \times (l-1) \qquad (2)$$
Both algorithms then have an O(n³) complexity. But in the worst case, the distributed algorithm does twice the number of elementary operations done by the centralized algorithm. This gap comes from the local decision making in each agent. Because of this, the similarity computations are done twice for each agent pair. We could conceive that an agent sends its computation result to its peer. But it would simply move the problem by generating more communication in the system.
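Expanding both sums in closed form makes the factor of two explicit (a routine check, not spelled out in the paper):

$$t_1(n) = \sum_{l=1}^{n} \frac{l(l-1)}{2} = \frac{n^3 - n}{6}, \qquad t_{dist}(n) = \sum_{l=1}^{n} l(l-1) = \frac{n^3 - n}{3} = 2\,t_1(n)$$

so both are cubic and the worst-case ratio is exactly two.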
Figure 5: Experimental results — number of comparisons versus number of input terms (from 10 to 100), for (1) the distributed algorithm (on average, with min and max), (2) the logarithmic polynomial fit, and (3) the centralized algorithm.
In a second step, the average complexity of the algorithm has been determined by experiments. The multi-agent system has been executed with randomly generated input data sets ranging from ten to one hundred terms. The given value is the average number of comparisons made over one hundred runs without any user interaction. This results in the plots of figure 5. The algorithm is thus more efficient on average than the centralized algorithm, and its average complexity is below the worst case. This can be explained by the low probability that a data set forces the system to create only minimal groups (two items) or maximal ones (n − 1 elements) at each step of reasoning. Curve number 2 represents the logarithmic polynomial minimizing the error with curve number 1. The highest-degree term of this polynomial is in n²log(n), so our distributed algorithm has an O(n²log(n)) complexity on average. Finally, let's note the reduced variation of the average performances with respect to the maximum and the minimum. In the worst case, for 100 terms, the variation is 1,960.75 for an average of 40,550.10 (around 5%), which shows the good stability of the system.
3.2 Qualitative Evaluation
Although the quantitative results are interesting, the real advantage of this approach comes from more qualitative characteristics that we present in this section. All are advantages obtained thanks to the use of an adaptive multi-agent system.

The main advantage of using a multi-agent system for a clustering task is to introduce dynamics into such a system. The ontologist can make modifications and the hierarchy adapts depending on the request. This is particularly interesting in a knowledge engineering context. Indeed, the hierarchy created by the system is meant to be modified by the ontologist since it is the result of a statistical computation. During the necessary examination of the texts to study the usage contexts of terms [2], the ontologist will be able to interpret the real content and to revise the system's proposal. It is extremely difficult to achieve this with a centralized "black-box" approach. In most cases, one has to find which reasoning step generated the error and manually modify the resulting class. Unfortunately, in this case, all the reasoning steps that occurred after the creation of the modified class are lost and must be recalculated by taking the modification into account. That is why a system like ASIUM [6] tries to soften the problem with system-user collaboration, by showing the ontologist the created classes after each step of reasoning. But the ontologist can make a mistake and become aware of it too late.
Figure 6: Concept agent tree after autonomous stabilization of
the system
In order to illustrate our claims, we present an example through a few screenshots from the working prototype tested on a medical-related corpus. By using test data and letting the system work by itself, we obtain the hierarchy of figure 6 after stabilization. It is clear that the concept described by the term "lésion" (lesion) is misplaced. It happens that the similarity computations place it closer to "femme" (woman) and "chirurgien" (surgeon) than to "infection", "gastro-entérite" (gastro-enteritis) and "hépatite" (hepatitis). This wrong position for "lésion" is explained by the fact that, without ontologist input, the reasoning is done on statistical criteria only.
Figure 7: Concept agent tree after ontologist modification
Then, the ontologist replaces the concept in the right branch by assigning "ConceptAgent:8" as its new parent. The name "ConceptAgent:X" is automatically given to a concept agent that is not described by a term. The system reacts by itself and refines the clustering hierarchy to obtain a binary tree by creating "ConceptAgent:11". The new stable state is the one of figure 7.
This system-user coupling is necessary to build an ontology, but no particular adjustment to the distributed algorithm principle is needed, since each agent does autonomous local processing and communicates with its neighborhood by messages.

Moreover, this algorithm can de facto be distributed on a computer network. The communication between agents is then done by sending messages and each one keeps its decision autonomy. A modification of the system to make it run networked would thus not require adjusting the algorithm. It would only require reworking the communication layer and the agent creation process, since in our current implementation those are not networked.
4. MULTI-CRITERIA HIERARCHY
In the previous sections, we assumed that a similarity can be computed for any term pair. But as soon as one uses real data this property does not hold anymore. Some terms do not have any similarity value with any extracted term. Moreover, for leaf nodes it is sometimes interesting to use other means to position them in the hierarchy. For this low-level structuring, ontologists generally base their choices on simple heuristics. Using this observation, we built a new set of rules, not based on similarity, to support low-level structuring.
4.1 Adding Head Coverage Rules
In this case, agents can act with a very local point of view, simply by looking at the parent/child relation. Each agent can try to determine whether its parent is adequate. This can be guessed because each concept agent is described by a set of terms, thanks to the "Head-Expansion" term network.

In the following, TX will be the set of terms describing concept agent X, and head(TX) the set of all the terms that are head of at least one element of TX. Thanks to those two notations we can define the parent adequacy function a(P, C) between a parent P and a child C:

$$a(P, C) = \frac{|T_P \cap head(T_C)|}{|T_P \cup head(T_C)|} \qquad (3)$$
Then, the best parent for C is the agent P that maximizes a(P, C). An agent unsatisfied with its parent can then try to find a better one by evaluating adequacy with candidates. We designed a complementary algorithm to drive this search: when an agent C is unsatisfied with its parent P, it evaluates a(Bi, C) with all its brothers (noted Bi); the one maximizing a(Bi, C) is then chosen as the new parent.
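A minimal sketch of this rule (the term sets and the head map are assumed to come from the Head-Expansion network above; function names are ours):

```python
def adequacy(parent_terms, child_terms, head_of):
    """Parent adequacy a(P, C) of equation (3). `head_of` maps a term
    to its head term, or None for single-word terms."""
    head_tc = {head_of(t) for t in child_terms if head_of(t) is not None}
    union = parent_terms | head_tc
    return len(parent_terms & head_tc) / len(union) if union else 0.0

def new_parent(child_terms, brothers_terms, head_of):
    """The complementary search rule: an agent C unsatisfied with its
    parent picks the brother B_i maximizing a(B_i, C).
    `brothers_terms` maps each brother to its own term set."""
    return max(brothers_terms,
               key=lambda b: adequacy(brothers_terms[b], child_terms, head_of))
```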
Figure 8: Concept agent tree after autonomous stabilization of
the system without head coverage rule
We now illustrate the behavior of this rule with an example. Figure 8 shows the state of the system after stabilization on test data. We can notice that "hépatite viral" (viral hepatitis) is still linked to the taxonomy root. This is caused by the fact that there is no similarity value between the "viral hepatitis" term and any of the terms of the other concept agents.
Figure 9: Concept agent tree after activation of the head
coverage rule
After activating the head coverage rule and letting the system stabilize again, we obtain figure 9. We can see that "viral hepatitis" slipped down the branch leading to "hepatitis" and chose it as its new parent. It is a sensible default choice since "viral hepatitis" is a more specific term than "hepatitis".

This rule tends to push agents described by a set of terms to become leaves of the concept tree. It addresses our concern to improve the low-level structuring of our taxonomy. But obviously our agents lack a way to backtrack in case of modifications in the taxonomy which would leave them located in the wrong branch. That is one of the points where our system still has to be improved, by adding another set of rules.
4.2 On Using Several Criteria
In the previous sections and examples, we only used one algorithm at a time. The distributed clustering algorithm tends to introduce new layers in the taxonomy, while the head coverage algorithm tends to push some of the agents toward the leaves of the taxonomy. This obviously raises the question of how to deal with multiple criteria in our taxonomy building, and how agents determine their priorities at a given time.
The solution we chose came from the search for minimizing non-cooperation within the system, in accordance with the ADELFE method. Each agent computes three non-cooperation degrees and chooses its current priority depending on which degree is the highest. For a given agent A having a parent P, a set of brothers Bi, and having received a set of messages Mk with priorities pk, the three non-cooperation degrees are:

• μH(A) = 1 − a(P, A), the "head coverage" non-cooperation degree, determined by the head coverage of the parent,
• μB(A) = max_i(1 − similarity(A, Bi)), the "brotherhood" non-cooperation degree, determined by the worst brother of A regarding similarities,
• μM(A) = max_k(pk), the "message" non-cooperation degree, determined by the most urgent message received.

Then, the non-cooperation degree μ(A) of agent A is:

$$\mu(A) = \max(\mu_H(A), \mu_B(A), \mu_M(A)) \qquad (4)$$
Then, we have three cases determining which kind of action A will choose:

• if μ(A) = μH(A), then A will use the head coverage algorithm detailed in the previous subsection,
• if μ(A) = μB(A), then A will use the distributed clustering algorithm (see section 3),
• if μ(A) = μM(A), then A will process Mk immediately in order to help its sender.

Those three cases summarize the current activities of our agents: they have to find the best parent for themselves (μ(A) = μH(A)), improve the structuring through clustering (μ(A) = μB(A)), and process other agents' messages (μ(A) = μM(A)) in order to help them fulfill their own goals.
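Putting the three degrees together, an agent's decision step could be sketched as follows (all names are ours; `adequacy` is assumed to be lifted from term sets to agents, and pending messages are assumed to expose a priority field):

```python
def choose_action(agent, adequacy, similarity):
    """Pick the agent's current activity from its three non-cooperation
    degrees, as in equation (4). `agent` is assumed to expose its
    parent, brothers, and inbox — a hypothetical API, not Dynamo's."""
    mu_h = 1.0 - adequacy(agent.parent, agent)
    mu_b = max((1.0 - similarity(agent, b) for b in agent.brothers), default=0.0)
    mu_m = max((m.priority for m in agent.inbox), default=0.0)

    mu = max(mu_h, mu_b, mu_m)
    if mu == mu_h:
        return "find_better_parent"      # head coverage rule (section 4.1)
    if mu == mu_b:
        return "distributed_clustering"  # vote-and-split round (section 3)
    return "process_most_urgent_message"
```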
4.3 Experimental Complexity Revisited
We evaluated the experimental complexity of the whole multi-agent system when all the rules are activated. In this case, the metric used is the number of messages exchanged in the system. Once again, the system has been executed with input data sets ranging from ten to one hundred terms. The given value is the average number of messages sent in the system as a whole over one hundred runs without user interaction. This results in the plots of figure 10.

Curve number 1 represents the average of the values obtained. Curve number 2 represents the average of the values obtained when only the distributed clustering algorithm is activated, not the full rule set. Curve number 3 represents the polynomial minimizing the error with curve number 1. The highest-degree term of this polynomial is in n³, so our multi-agent system has an O(n³) complexity on average.
Figure 10: Experimental results — number of messages versus number of input terms (from 10 to 100), for (1) Dynamo with all rules (on average, with min and max), (2) the distributed clustering algorithm alone (on average), and (3) the cubic polynomial fit.
Moreover, let's note the very small variation of the average performances with respect to the maximum and the minimum. In the worst case, for 100 terms, the variation is 126.73 for an average of 20,737.03 (around 0.6%), which shows the excellent stability of the system.

Finally, the extra head coverage rules are a real improvement over the distributed algorithm alone. They introduce more constraints, and the stability point is reached with fewer interactions and less decision making by the agents. This means that fewer messages are exchanged in the system while obtaining a tree of higher quality for the ontologist.
5. DISCUSSION & PERSPECTIVES
5.1 Current Limitation of our Approach
The most important limitation of our current algorithm is that the result depends on the order in which the data gets added. When the system works by itself on a fixed data set given during initialization, the final result is equivalent to what we could obtain with a centralized algorithm. On the contrary, adding a new item after a first stabilization has an impact on the final result.
Figure 11: Concept agent tree after autonomous stabilization
of the system
To illustrate our claims, we present another example of the
working system. By using test data and letting the system work by itself,
we obtain the hierarchy of figure 11 after stabilization.
Figure 12: Concept agent tree after taking into account "hepatitis"
Then, the ontologist interacts with the system and adds a new concept described by the term "hepatitis", linked to the root. The system reacts and stabilizes; we then obtain figure 12 as a result. "hepatitis" is located in the right branch, but we have not obtained the same organization as in figure 6 of the previous example. We need to improve our distributed algorithm to allow a concept to move along a branch. We are currently working on the required rules, but the comparison with the centralized algorithm will then become very difficult, in particular since these rules will take into account criteria ignored by the centralized algorithm.
5.2 Pruning for Ontologies Building
In section 3, we presented the distributed clustering algorithm used in the Dynamo system. Since this work was first based on this algorithm, it introduced a clear bias toward binary trees as a result. But we have to keep in mind that we are trying to obtain taxonomies which are more refined and concise. Although the head coverage rule is an improvement because it is based on how ontologists generally work, it only addresses low-level structuring, not the intermediate levels of the tree.
By looking at figure 7, it is clear that some pruning could be done in the taxonomy. In particular, since "lésion" moved, "ConceptAgent:9" could be removed: it is not needed anymore. Moreover, the branch starting with "ConceptAgent:8" clearly respects the constraint of making a binary tree, but it would be more useful to the user in a more compact and meaningful form. In this case, "ConceptAgent:10" and "ConceptAgent:11" could probably be merged.

Currently, our system has the necessary rules to create intermediate levels in the taxonomy, or to have concepts shift toward the leaves. As we pointed out, this is not enough, so new rules are needed to allow removing nodes from the tree, or moving them toward the root. Most of the work needed to develop those rules consists in finding the relevant statistical information that will support the ontologist.
6. CONCLUSION
Although it has been presented as a promising solution ensuring model quality and terminological richness, ontology building from textual corpus analysis is difficult and costly. It requires supervision by the analyst, taking the ontology's aim into account. Using natural language processing tools eases the localization of knowledge in texts through language usage. That said, those tools produce a huge amount of lexical or grammatical data which is not trivial to examine in order to define conceptual elements. Our contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result.

We proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts. Our system makes use of a terminological network resulting from an analysis made by Syntex. The current state of our software allows producing simple structures, proposing them to the ontologist, and making them evolve depending on the modifications he/she makes. The performance of the system is interesting, and some aspects are even comparable to their centralized counterpart. Its strengths are mostly qualitative, since it allows more subtle user interactions and a progressive adaptation to new linguistic-based information.
From the point of view of ontology building, this work is a first step showing the relevance of our approach. It must continue, both to ensure a better robustness during classification and to obtain semantically richer structures than simple trees. Among these improvements, we are mostly focusing on pruning, to obtain better taxonomies. We are currently working on the criterion to trigger the complementary actions to the structure changes applied by our clustering algorithm. In other words, this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium.
Also, from the multi-agent engineering point of view, the use of multi-agent systems in a dynamic ontology context has shown its relevance. Dynamic ontology building can be seen as complex problem solving, and in such a case self-organization through cooperation has been an efficient solution. More generally, it is likely to be interesting for other design-related tasks, even if we focus only on knowledge engineering in this paper. Of course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach. We are planning to work on such benchmarking in the near future.
7. REFERENCES
[1] H. Assadi. Construction of a regional ontology from text and its use within a documentary system. Proceedings of the International Conference on Formal Ontology and Information Systems - FOIS'98, pages 236-249, 1998.
[2] N. Aussenac-Gilles and D. Sörgel. Text analysis for ontology
and terminology engineering. Journal of Applied Ontology,
2005.
[3] J. Bao and V. Honavar. Collaborative ontology building with
wiki@nt. Proceedings of the Workshop on Evaluation of
Ontology-Based Tools (EON2004), 2004.
[4] C. Bernon, V. Camps, M.-P. Gleizes, and G. Picard.
Agent-Oriented Methodologies, chapter 7. Engineering
Self-Adaptive Multi-Agent Systems : the ADELFE
Methodology, pages 172-202. Idea Group Publishing, 2005.
[5] C. Brewster, F. Ciravegna, and Y. Wilks. Background and foreground knowledge in dynamic ontology construction. Semantic Web Workshop, SIGIR'03, August 2003.
[6] D. Faure and C. Nedellec. A corpus-based conceptual
clustering method for verb frames and ontology acquisition.
LREC workshop on adapting lexical and corpus resources to
sublanguages and applications, 1998.
[7] F. Gandon. Ontology Engineering: a Survey and a Return on
Experience. INRIA, 2002.
[8] J.-P. Georgé, G. Picard, M.-P. Gleizes, and P. Glize. Living
Design for Open Computational Systems. 12th IEEE
International Workshops on Enabling Technologies,
Infrastructure for Collaborative Enterprises, pages 389-394,
June 2003.
[9] M.-P. Gleizes, V. Camps, and P. Glize. A Theory of emergent
computation based on cooperative self-organization for
adaptive artificial systems. Fourth European Congress of
Systems Science, September 1999.
[10] J. Heflin and J. Hendler. Dynamic ontologies on the web.
American Association for Artificial Intelligence Conference,
2000.
[11] S. Le Moigno, J. Charlet, D. Bourigault, and M.-C. Jaulent.
Terminology extraction from text to build an ontology in
surgical intensive care. Proceedings of the AMIA 2002
annual symposium, 2002.
[12] K. Lister, L. Sterling, and K. Taveter. Reconciling Ontological Differences by Assistant Agents. AAMAS'06, May 2006.
[13] A. Maedche. Ontology learning for the Semantic Web.
Kluwer Academic Publisher, 2002.
[14] A. Maedche and S. Staab. Mining Ontologies from Text.
EKAW 2000, pages 189-202, 2000.
[15] C. D. Manning and H. Schütze. Foundations of Statistical
Natural Language Processing. The MIT Press, Cambridge,
Massachusetts, 1999.
[16] H. V. D. Parunak, R. Rohwer, T. C. Belding, and
S. Brueckner. Dynamic decentralized any-time hierarchical
clustering. 29th Annual International ACM SIGIR
Conference on Research & Development on Information
Retrieval, August 2006.
keywords: cooperation; parent adequacy function; ontology; dynamic equilibrium; hepatitis; emergent behavior; quantitative evaluation; black-box; model quality; multi-agent field; dynamo; terminological richness
---

name: train_I-71
title: A Formal Model for Situated Semantic Alignment

Abstract. Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems. Most ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources. In this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in. It hence makes the situation in which the alignment occurs explicit in the model. We resort to Channel Theory to carry out the formalisation.

1. INTRODUCTION
An ontology is commonly defined as a specification of the conceptualisation of a particular domain. It fixes the vocabulary used by knowledge engineers to denote concepts and their relations, and it constrains the interpretation of this vocabulary to the meaning originally intended by knowledge engineers. As such, ontologies have been widely adopted as a key technology that may favour knowledge sharing in distributed environments, such as multi-agent systems, federated databases, or the Semantic Web. But the proliferation of many diverse ontologies caused by different conceptualisations of even the same domain -and their subsequent specification using varying terminology- has highlighted the need for ontology matching techniques that are capable of computing semantic relationships between entities of separately engineered ontologies [5, 11].
Most ontology matching mechanisms developed so far have taken a classical functional approach to the semantic heterogeneity problem, in which ontology matching is seen as a process taking two or more ontologies as input and producing a semantic alignment of ontological entities as output [3]. Furthermore, matching has often been carried out at design-time, before integrating knowledge-based systems or making them interoperate. This might have been successful for clearly delimited and stable domains and for closed distributed systems, but it is untenable and even undesirable for the kind of applications that are currently deployed in open systems. Multi-agent communication, peer-to-peer information sharing, and web-service composition are all of a decentralised, dynamic, and open-ended nature, and they require ontology matching to be locally performed during run-time. In addition, in many situations peer ontologies are not even open for inspection (e.g., when they are based on commercially confidential information).
Certainly, there exist efforts to efficiently match ontological entities at run-time, taking only those ontology fragments that are necessary for the task at hand [10, 13, 9, 8]. Nevertheless, the techniques used by these systems to establish the semantic relationships between ontological entities -even though applied at run-time- still exploit a priori defined concept taxonomies as they are represented in the graph-based structures of the ontologies to be matched, use previously existing external sources such as thesauri (e.g., WordNet) and upper-level ontologies (e.g., CyC or SUMO), or resort to additional background knowledge repositories or shared instances.
We claim that semantic alignment of ontological terminology is ultimately relative to the particular situation in which the alignment is carried out, and that this situation should be made explicit and brought into the alignment mechanism. Even two agents with identical conceptualisation capabilities, and using exactly the same vocabulary to specify their respective conceptualisations, may fail to interoperate in a concrete situation because of their differing perception of the domain. Imagine a situation in which two agents are facing each other in front of a checker board. Agent A1 may conceptualise a figure on the board as situated on the left margin of the board, while agent A2 may conceptualise the same figure as situated on the right. Although the conceptualisation of 'left' and 'right' is done in exactly the same manner by both agents, and even if both use the terms left and right in their communication, they still will need to align their respective vocabularies if they want to successfully communicate to each other actions that change the position of figures on the checker board. Their semantic alignment, however, will only be valid in the scope of their interaction within this particular situation or environment. The same agents situated differently may produce a different alignment.
This scenario is reminiscent of those in which a group of distributed agents adapt to form an ontology and a shared lexicon in an emergent, bottom-up manner, with only local interactions and no central control authority [12]. This sort of self-organised emergence of shared meaning is ultimately grounded on the physical interaction of agents with the environment. In this paper, however, we address the case in which agents are already endowed with a top-down engineered ontology (it can even be the same one), which they do not adapt or refine, but for which they want to find the semantic relationships with separate ontologies of other agents on the grounds of their communication within a specific situation. In particular, we provide a formal model that formalises situated semantic alignment as a sequence of information-channel refinements in the sense of Barwise and Seligman's theory of information flow [1]. This theory is particularly useful for our endeavour because it models the flow of information occurring in distributed systems due to the particular situations -or tokens- that carry information. Analogously, the semantic alignment that will allow information to flow will ultimately be carried by the particular situation agents are acting in.
We shall therefore consider a scenario with two or more agents situated in an environment. Each agent will have its own viewpoint of the environment so that, if the environment is in a concrete state, both agents may have different perceptions of this state. Because of these differences there may be a mismatch in the meaning of the syntactic entities by which agents describe their perceptions (and which constitute the agents' respective ontologies). We state that these syntactic entities can be related according to the intrinsic semantics provided by the existing relationship between the agents' viewpoints of the environment. The existence of this relationship is precisely justified by the fact that the agents are situated and observe the same environment.

In Section 2 we describe our formal model for Situated Semantic Alignment (SSA). First, in Section 2.1 we associate a channel to the scenario under consideration and show how the distributed logic generated by this channel provides the logical relationships between the agents' viewpoints of the environment. Second, in Section 2.2 we present a method by which agents obtain approximations of this distributed logic. These approximations gradually become more reliable as the method is applied. In Section 3 we report on an application of our method. Conclusions and further work are analyzed in Section 4. Finally, an appendix summarizes the terms and theorems of Channel Theory used along the paper. We do not assume any knowledge of Channel Theory; we restate basic definitions and theorems in the appendix, but any detailed exposition of the theory is outside the scope of this paper.
2. A FORMAL MODEL FOR SSA
2.1 The Logic of SSA
Consider a scenario with two agents A1 and A2 situated in an environment E (the generalization to any numerable set of agents is straightforward). We associate a numerable set S of states to E and, at any given instant, we suppose E to be in one of these states. We further assume that each agent is able to observe the environment and has its own perception of it. This ability is faithfully captured by a surjective function seei : S → Pi, where i ∈ {1, 2}, and typically see1 and see2 are different.

According to Channel Theory, information is only viable where there is a systematic way of classifying some range of things as being this way or that, in other words, where there is a classification (see appendix A). So in order to stay within the framework of Channel Theory, we must associate classifications to the components of our system.

For each i ∈ {1, 2}, we consider a classification Ai that models Ai's viewpoint of E. First, tok(Ai) is composed of Ai's perceptions of E states, that is, tok(Ai) = Pi. Second, typ(Ai) contains the syntactic entities by which Ai describes its perceptions, the ones constituting the ontology of Ai. Finally, |=Ai synthesizes how Ai relates its perceptions to these syntactic entities.
Now, with the aim of associating environment E with a classification E, we choose as E the power classification of S, which is the classification whose set of types is 2^S, whose tokens are the elements of S, and for which a token e is of type ε if e ∈ ε. The reason for taking the power classification is that there are no syntactic entities that may play the role of types for E since, in general, there is no global conceptualisation of the environment. However, the set of types of the power classification includes all possible token configurations potentially described by types. Thus tok(E) = S, typ(E) = 2^S, and e |=E ε if and only if e ∈ ε.
The notion of channel (see appendix A) is fundamental in Barwise and Seligman's theory. The information flow among the components of a distributed system is modelled in terms of a channel, and the relationships among these components are expressed via infomorphisms (see appendix A), which provide a way of moving information between them.

The information flow of the scenario under consideration is accurately described by the channel E = {fi : Ai → E}i∈{1,2} defined as follows:

• ˆfi(α) = {e ∈ tok(E) | seei(e) |=Ai α} for each α ∈ typ(Ai)
• ˇfi(e) = seei(e) for each e ∈ tok(E)

where i ∈ {1, 2}. The definition of ˇfi seems natural, while ˆfi is defined in such a way that the fundamental property of infomorphisms is fulfilled:

ˇfi(e) |=Ai α  iff  seei(e) |=Ai α   (by definition of ˇfi)
              iff  e ∈ ˆfi(α)        (by definition of ˆfi)
              iff  e |=E ˆfi(α)      (by definition of |=E)
Consequently, E is the core of channel E, and a state e ∈ tok(E) connects agents' perceptions ˇf1(e) and ˇf2(e) (see Figure 1).
Figure 1: Channel E — the type maps ˆf1 : typ(A1) → typ(E) and ˆf2 : typ(A2) → typ(E), and the token maps ˇf1 : tok(E) → tok(A1) and ˇf2 : tok(E) → tok(A2).
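To make these constructions concrete, here is a small executable rendering of the classifications, the see_i functions, and the induced channel, on an invented three-state toy environment in the spirit of the checker-board example (all concrete values are our own illustration):

```python
# Toy model of the channel E = {f_i : A_i -> E}.
states = ["e1", "e2", "e3"]                      # tok(E) = S
see1 = {"e1": "p_left", "e2": "p_left", "e3": "p_right"}   # A1's perceptions
see2 = {"e1": "p_right", "e2": "p_right", "e3": "p_left"}  # A2's (mirrored)

# Agent classifications: which perception tokens satisfy which types.
sat1 = {("p_left", "left"): True, ("p_right", "right"): True}
sat2 = {("p_left", "left"): True, ("p_right", "right"): True}

def models(i, perception, typ):
    """The classification relation |=_{A_i}."""
    sat = sat1 if i == 1 else sat2
    return sat.get((perception, typ), False)

def f_up(i, typ):
    """Type map of f_i: the set of states whose i-perception satisfies typ."""
    see = see1 if i == 1 else see2
    return {e for e in states if models(i, see[e], typ)}

def f_down(i, e):
    """Token map of f_i: simply see_i."""
    return (see1 if i == 1 else see2)[e]
```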
E explains the information flow of our scenario by virtue of agents A1 and A2 being situated and perceiving the same environment E. We want to obtain meaningful relations among agents' syntactic entities, that is, agents' types. We state that meaningfulness must be in accord with E.

The sum operation (see appendix A) gives us a way of putting the two agents' classifications of channel E together into a single classification, namely A1 + A2, and also the two infomorphisms together into a single infomorphism, f1 + f2 : A1 + A2 → E.

A1 + A2 assembles the agents' classifications in a very coarse way. tok(A1 + A2) is the cartesian product of tok(A1) and tok(A2), that is, tok(A1 + A2) = {⟨p1, p2⟩ | pi ∈ Pi}, so a token of A1 + A2 is a pair of agents' perceptions with no restrictions. typ(A1 + A2) is the disjoint union of typ(A1) and typ(A2), and ⟨p1, p2⟩ is of type ⟨i, α⟩ if pi is of type α. We attach importance to taking the disjoint union because A1 and A2 could use identical types with the purpose of describing their respective perceptions of E.
The classification A1 + A2 seems to be the natural place in which to search for relations among agents' types. Now, Channel Theory provides a way to make all these relations explicit in a logical fashion by means of theories and local logics (see appendix A). The theory generated by the sum classification, Th(A1 + A2), and hence its generated logic, Log(A1 + A2), involve all those constraints among agents' types valid according to A1 + A2. Notice however that these constraints are obvious. As we stated above, meaningfulness must be in accord with channel E.

Classifications A1 + A2 and E are connected via the sum infomorphism, f = f1 + f2, where:

• ˆf(⟨i, α⟩) = ˆfi(α) = {e ∈ tok(E) | seei(e) |=Ai α} for each ⟨i, α⟩ ∈ typ(A1 + A2)
• ˇf(e) = ⟨ˇf1(e), ˇf2(e)⟩ = ⟨see1(e), see2(e)⟩ for each e ∈ tok(E)
Meaningful constraints among agents' types are in accord with channel E because they are computed making use of f, as we expound below.

As important as the notion of channel is the concept of distributed logic (see appendix A). Given a channel C and a logic L on its core, DLogC(L) represents the reasoning about relations among the components of C justified by L. If L = Log(C), the distributed logic, which we denote by Log(C), captures in a logical fashion the information flow inherent in the channel.

In our case, Log(E) explains the relationship between the agents' viewpoints of the environment in a logical fashion. On the one hand, constraints of Th(Log(E)) are defined by:

Γ ⊢Log(E) Δ  if  ˆf[Γ] ⊢Log(E) ˆf[Δ]   (1)

where Γ, Δ ⊆ typ(A1 + A2). On the other hand, the set of normal tokens, NLog(E), is equal to the range of the function ˇf:

NLog(E) = ˇf[tok(E)] = {⟨see1(e), see2(e)⟩ | e ∈ tok(E)}

Therefore, a normal token is a pair of agents' perceptions that are restricted by coming from the same environment state (unlike A1 + A2 tokens).
All constraints of Th(Log(E)) are satisfied by all normal tokens (because Log(E) is a logic). In this particular case, this condition is also sufficient (the proof is straightforward); as an alternative to (1) we have:

Γ ⊢Log(E) Δ  iff  for all e ∈ tok(E),
    if (∀⟨i, γ⟩ ∈ Γ)[seei(e) |=Ai γ]
    then (∃⟨j, δ⟩ ∈ Δ)[seej(e) |=Aj δ]   (2)

where Γ, Δ ⊆ typ(A1 + A2).
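Characterisation (2) is directly executable on the toy model above: a sequent Γ ⊢ Δ over sum types ⟨i, α⟩ holds iff no state satisfies every element of Γ while satisfying no element of Δ (a sketch under the same assumptions):

```python
def constraint_holds(gamma, delta):
    """Check Γ |- Δ per characterisation (2); gamma and delta are
    sets of sum types (i, alpha) with i in {1, 2}."""
    see = {1: see1, 2: see2}
    for e in states:
        if all(models(i, see[i][e], a) for (i, a) in gamma):
            if not any(models(j, see[j][e], d) for (j, d) in delta):
                return False  # e is a counterexample
    return True

# In the checker-board spirit of our toy data one would expect, e.g.,
# constraint_holds({(1, "left")}, {(2, "right")}) to be True.
```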
Log(E) is the logic of SSA. Th(Log(E)) comprises the most meaningful constraints among agents' types in accord with channel E. In other words, the logic of SSA contains and also justifies the most meaningful relations among those syntactic entities that agents use in order to describe their own environment perceptions.

The distributed logic Log(E) is complete since the logic on the core is complete, but it is not necessarily sound because, although the logic on the core is sound, ˇf is not surjective in general (see appendix B). If Log(E) is also sound, then Log(E) = Log(A1 + A2) (see appendix B). That would mean there is no significant relation between the agents' points of view of the environment according to E. It is precisely the fact that Log(E) is unsound that allows a significant relation between the agents' viewpoints. This relation is expressed at the type level in terms of constraints by Th(Log(E)), and at the token level by NLog(E).
2.2 Approaching the logic of SSA through communication
We have dubbed Log(E) the logic of SSA. Th(Log(E)) comprehends the most meaningful constraints among agents' types according to E. The problem is that neither agent can make use of this theory because they do not know E completely. In this section, we present a method by which agents obtain approximations to Th(Log(E)). We also prove that these approximations gradually become more reliable as the method is applied.

Agents can obtain approximations to Th(Log(E)) through communication. A1 and A2 communicate by exchanging information about their perceptions of environment states. This information is expressed in terms of their own classification relations. Specifically, if E is in a concrete state e, we assume that agents can convey to each other which types are satisfied by their respective perceptions of e and which are not. This exchange generates a channel C = {fi : Ai → C}i∈{1,2}, and Th(Log(C)) contains the constraints among agents' types justified by the fact that agents have observed e. Now, if E turns to another state e′ and agents proceed as before, another channel C′ = {f′i : Ai → C′}i∈{1,2} gives account of the new situation, considering also the previous information. Th(Log(C′)) comprises the constraints among agents' types justified by the fact that agents have observed e and e′. The significant point is that C′ is a refinement of C (see appendix A). Theorem 2.1 below ensures that the refined channel involves more reliable information.

The communication supposedly ends when agents have observed all the environment states. Again this situation can be modeled by a channel, call it C∗ = {f∗i : Ai → C∗}i∈{1,2}. Theorem 2.2 states that Th(Log(C∗)) = Th(Log(E)). Theorems 2.1 and 2.2 assure that, by applying the method, agents can obtain gradually more reliable approximations to Th(Log(E)).
Theorem 2.1. Let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels. If C′ is a refinement of C then:

1. Th(Log(C′)) ⊆ Th(Log(C))
2. NLog(C′) ⊇ NLog(C)

Proof. Since C′ is a refinement of C, there exists a refinement infomorphism r from C′ to C; so fi = r ◦ f′i. Let A =def A1 + A2, f =def f1 + f2 and f′ =def f′1 + f′2.

1. Let Γ and Δ be subsets of typ(A) and assume that Γ ⊢Log(C′) Δ, which means ˆf′[Γ] ⊢C′ ˆf′[Δ]. We have to prove Γ ⊢Log(C) Δ, or equivalently, ˆf[Γ] ⊢C ˆf[Δ]. We proceed by reductio ad absurdum. Suppose c ∈ tok(C) does not satisfy the sequent ⟨ˆf[Γ], ˆf[Δ]⟩. Then c |=C ˆf(γ) for all γ ∈ Γ, and c does not satisfy ˆf(δ) for any δ ∈ Δ. Let us choose an arbitrary γ ∈ Γ. We have that γ = ⟨i, α⟩ for some α ∈ typ(Ai) and i ∈ {1, 2}. Thus ˆf(γ) = ˆf(⟨i, α⟩) = ˆfi(α) = (ˆr ◦ ˆf′i)(α) = ˆr(ˆf′i(α)). Therefore:

c |=C ˆf(γ)  iff  c |=C ˆr(ˆf′i(α))
             iff  ˇr(c) |=C′ ˆf′i(α)
             iff  ˇr(c) |=C′ ˆf′(⟨i, α⟩)
             iff  ˇr(c) |=C′ ˆf′(γ)

Consequently, ˇr(c) |=C′ ˆf′(γ) for all γ ∈ Γ. Since ˆf′[Γ] ⊢C′ ˆf′[Δ], there exists δ∗ ∈ Δ such that ˇr(c) |=C′ ˆf′(δ∗). A sequence of equivalences similar to the one above justifies c |=C ˆf(δ∗), contradicting that c is a counterexample to ⟨ˆf[Γ], ˆf[Δ]⟩. Hence Γ ⊢Log(C) Δ, as we wanted to prove.

2. Let ⟨a1, a2⟩ ∈ tok(A) and assume ⟨a1, a2⟩ ∈ NLog(C). Therefore, there exists a token c in C such that ⟨a1, a2⟩ = ˇf(c). Then we have ai = ˇfi(c) = (ˇf′i ◦ ˇr)(c) = ˇf′i(ˇr(c)), for i ∈ {1, 2}. Hence ⟨a1, a2⟩ = ˇf′(ˇr(c)) and ⟨a1, a2⟩ ∈ NLog(C′). Consequently, NLog(C′) ⊇ NLog(C), which concludes the proof.
Remark 2.1. Theorem 2.1 asserts that the more refined channel gives more reliable information. Even though its theory has fewer constraints, it has more normal tokens to which they apply.
In the remainder of the section, we explicitly describe the
process of communication and we conclude with the proof
of Theorem 2.2.
Let us assume that typ(Ai) is finite for i ∈ {1, 2} and S is infinite numerable, though the finite case can be treated in a similar way. We also choose an infinite numerable set of symbols {c^n | n ∈ N}. (We write these symbols with superindices because we reserve subindices for what concerns agents; note this set is chosen with the same cardinality as S.)

We omit the infomorphisms' hat and check marks when no confusion arises. Types are usually denoted by greek letters and tokens by latin letters, so if f is an infomorphism, f(α) ≡ ˆf(α) and f(a) ≡ ˇf(a).
Agents' communication starts from the observation of E. Let us suppose that E is in state e^1 ∈ S = tok(E). A1's perception of e^1 is f1(e^1) and A2's perception of e^1 is f2(e^1). We take for granted that A1 can communicate to A2 those types that are and are not satisfied by f1(e^1) according to its classification A1, and so can A2. Since both typ(A1) and typ(A2) are finite, this process eventually finishes. After this communication a channel C^1 = {f^1_i : Ai → C^1}i∈{1,2} arises (see Figure 2).
Figure 2: The first communication stage — the channel C^1, with infomorphisms f^1_1 : A1 → C^1 and f^1_2 : A2 → C^1.
On the one hand, C^1 is defined by:

• tok(C^1) = {c^1}
• typ(C^1) = typ(A1 + A2)
• c^1 |=C^1 ⟨i, α⟩ if fi(e^1) |=Ai α (for every ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f^1_i, with i ∈ {1, 2}, is defined by:

• f^1_i(α) = ⟨i, α⟩ (for every α ∈ typ(Ai))
• f^1_i(c^1) = fi(e^1)

Log(C^1) represents the reasoning about the first stage of communication. It is easy to prove that Th(Log(C^1)) = Th(C^1). The significant point is that both agents know C^1 as the result of the communication. Hence they can separately compute the theory Th(C^1) = ⟨typ(C^1), ⊢C^1⟩, which contains the constraints among agents' types justified by the fact that agents have observed e^1.
Now, let us assume that E turns to a new state e^2. Agents can proceed as before, this time exchanging information about their perceptions of e^2. Another channel C^2 = {f^2_i : Ai → C^2}i∈{1,2} comes up. We define C^2 so as to take into account also the information provided by the previous stage of communication.

On the one hand, C^2 is defined by:

• tok(C^2) = {c^1, c^2}
• typ(C^2) = typ(A1 + A2)
• c^k |=C^2 ⟨i, α⟩ if fi(e^k) |=Ai α (for every k ∈ {1, 2} and ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f^2_i, with i ∈ {1, 2}, is defined by:

• f^2_i(α) = ⟨i, α⟩ (for every α ∈ typ(Ai))
• f^2_i(c^k) = fi(e^k) (for every k ∈ {1, 2})

Log(C^2) represents the reasoning about the former and the latter communication stages. Th(Log(C^2)) is equal to Th(C^2) = ⟨typ(C^2), ⊢C^2⟩, so it contains the constraints among agents' types justified by the fact that agents have observed e^1 and e^2. A1 and A2 know C^2, so they can use these constraints. The key point is that channel C^2 is a refinement of C^1. It is easy to check that f^1, defined as the identity function on types and the inclusion function on tokens, is a refinement infomorphism (see the bottom of Figure 3). By Theorem 2.1, C^2 constraints are more reliable than C^1 constraints.
In the general situation, once the states e¹, e², . . . , eⁿ⁻¹ (n ≥ 2) have been observed and a new state eⁿ appears, the channel Cⁿ = {fⁿᵢ : Aᵢ → Cⁿ}ᵢ∈{1,2} informs about the agents' communication up to that moment. The definition of Cⁿ is similar to the previous ones and analogous remarks can be made (see the top of Figure 3). The theory Th(Log(Cⁿ)) = Th(Cⁿ) = ⟨typ(Cⁿ), ⊢Cⁿ⟩ contains the constraints among the agents' types justified by the fact that the agents have observed e¹, e², . . . , eⁿ.
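The refinement relation between consecutive cores can also be checked mechanically. In this self-contained sketch of ours (data hypothetical), the type map is the identity and the token map the inclusion, so the check reduces to old connection tokens keeping exactly their old types:

    # Our check that C^n refines C^{n-1}: with the identity on types and
    # the inclusion on tokens, old connections must keep exactly their types.
    Cn_1 = {"c1": {(1, "red"), (2, "pale"), (2, "dry")}}          # after e^1
    Cn = {"c1": {(1, "red"), (2, "pale"), (2, "dry")},            # after e^1, e^2
          "c2": {(1, "white"), (2, "dark")}}

    def refines(new, old):
        return all(new[c] == types for c, types in old.items())

    print(refines(Cn, Cn_1))   # True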
[Diagram: the tower of cores C¹, C², . . . , Cⁿ⁻¹, Cⁿ, with infomorphisms f¹ᵢ, f²ᵢ, . . . , fⁿᵢ from each agent classification Aᵢ into the corresponding core, and the refinement infomorphisms f¹, . . . , fⁿ⁻¹ between consecutive cores.]
Figure 3: Agents communication
Remember that we have assumed S to be countably infinite. It is therefore impractical to let communication finish only when all environment states have been observed by A₁ and A₂; at that point, the family of channels {Cⁿ}n∈N would inform of all the communication stages. It is thus up to the agents to decide when to stop communicating, namely once a good enough approximation has been reached for the purposes of their respective tasks. The study of possible termination criteria, however, is outside the scope of this paper and left for future work. From a theoretical point of view, we can consider the channel C∗ = {f∗ᵢ : Aᵢ → C∗}ᵢ∈{1,2}, which informs of the end of the communication after observing all environment states.
On the one hand, C∗ is defined by:
• tok(C∗) = {cⁿ | n ∈ N}
• typ(C∗) = typ(A₁ + A₂)
• cⁿ |=C∗ ⟨i, α⟩ if fᵢ(eⁿ) |=Aᵢ α (for n ∈ N and ⟨i, α⟩ ∈ typ(A₁ + A₂))
On the other hand, f∗ᵢ, with i ∈ {1, 2}, is defined by:
• f∗ᵢ(α) = ⟨i, α⟩ (for α ∈ typ(Aᵢ))
• f∗ᵢ(cⁿ) = fᵢ(eⁿ) (for n ∈ N)
The theorem below constitutes the cornerstone of the model exposed in this paper. It ensures, together with Theorem 2.1, that at each communication stage the agents obtain a theory that approximates ever more closely the theory generated by the logic of SSA.
Theorem 2.2. The following statements hold:
1. For all n ∈ N, C∗ is a refinement of Cⁿ.
2. Th(Log(E)) = Th(C∗) = Th(Log(C∗)).
Proof.
1. It is easy to prove that, for each n ∈ N, gⁿ, defined as the identity function on types and the inclusion function on tokens, is a refinement infomorphism from C∗ to Cⁿ.
2. The second equality is straightforward; the first one follows directly from:
cⁿ |=C∗ ⟨i, α⟩ iff ˇfᵢ(eⁿ) |=Aᵢ α (by definition of |=C∗)
iff eⁿ |=E ˆfᵢ(α) (because fᵢ is an infomorphism)
iff eⁿ |=E ˆf(⟨i, α⟩) (by definition of ˆf, with f = f₁ + f₂)
[Diagram: the channel {fᵢ : Aᵢ → E}; the cores C∗ and Cⁿ receive the infomorphisms f∗ᵢ and fⁿᵢ from the agent classifications A₁ and A₂, and gⁿ : C∗ → Cⁿ is the refinement infomorphism of part 1.]
3. AN EXAMPLE
In the previous section we have described in great detail
our formal model for SSA. However, we have not tackled
the practical aspect of the model yet. In this section, we
give a brushstroke of the pragmatic view of our approach.
We study a very simple example and explain how agents
can use those approximations of the logic of SSA they can
obtain through communication.
Let us reflect on a system consisting of robots located in a two-dimensional grid, looking for packages with the aim of moving them to a certain destination (Figure 4). Robots can carry only one package at a time and they cannot move through a package.
Figure 4: The scenario
Robots have a partial view of the domain, and there exist two kinds of robots according to the visual field they have. Some robots are capable of observing the eight adjoining squares, while others just observe the three squares they have in front (see Figure 5). We call them URDL (short for Up-Right-Down-Left) and LCR (short for Left-Center-Right) robots, respectively. Describing the environment states as well as the robots' perception functions is rather tedious and even unnecessary; we assume the reader has all those descriptions in mind.
All robots in the system must be able to solve package distribution problems cooperatively by communicating their intentions to each other. In order to communicate, agents send messages using some ontology. In our scenario, two ontologies coexist: the URDL and LCR ontologies. Both of them are very simple and are just confined to describing what the robots observe.
Figure 5: Robots' field of vision
When a robot carrying a package finds another package obstructing its way, it can either go around it or, if there is another robot in its visual field, ask it for assistance. Let us suppose two URDL robots are in a situation like the one depicted in Figure 6. Robot1 (the one carrying a package) decides to ask Robot2 for assistance and sends a request. This request is written below as a KQML message and it should be interpreted intuitively as: Robot2, pick up the package located in my Up square, knowing that you are located in my Up-Right square.

(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology URDL-ontology
  :content (pick up U(Package) because UR(Robot2)))
Figure 6: Robot assistance
Robot2 understands the content of the request and it can use a rule represented by the following constraint:

⟨1, UR(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, U(Package)⟩ ⊢ ⟨2, U(Package)⟩

The above constraint should be interpreted intuitively as: if Robot2 is situated in Robot1's Up-Right square, Robot1 is situated in Robot2's Up-Left square and a package is located in Robot1's Up square, then a package is located in Robot2's Up square.
Now, problems arise when an LCR robot and a URDL robot try to interoperate (see Figure 7). Robot1 sends a request of the form:

(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology LCR-ontology
  :content (pick up R(Robot2) because C(Package)))
Robot2 does not understand the content of the request, but they decide to begin a process of alignment, corresponding with a channel C¹. Once finished, Robot2 searches in Th(C¹) for constraints similar to the expected one, that is, those of the form:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, λ(Package)⟩

where λ ∈ {U, R, D, L, UR, DR, DL, UL}. From these, only the following constraints are plausible according to C¹:
Figure 7: Ontology mismatch
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, U(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, L(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, DR(Package)⟩
If subsequently both robots, adopting the same roles, take part in a situation like the one depicted in Figure 8, a new process of alignment, corresponding with a channel C², takes place. C² also considers the previous information and hence refines C¹. The only constraint from the above ones that remains plausible according to C² is:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C² ⟨2, U(Package)⟩

Notice that this constraint is an element of the theory of the distributed logic. Agents communicate in order to cooperate successfully, and success is guaranteed using constraints of the distributed logic.
Figure 8: Refinement
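The pruning Robot2 performs can be rendered in a few lines of Python (our illustration; the string encodings of types are hypothetical): a candidate constraint stays plausible as long as no observed connection satisfies the whole antecedent while failing the consequent.

    # Our sketch of Robot2's pruning; type strings are hypothetical encodings.
    connections = [
        {(1, "R(Robot2)"), (2, "UL(Robot1)"), (1, "C(Package)"),
         (2, "U(Package)"), (2, "L(Package)"), (2, "DR(Package)")},  # 1st episode
        {(1, "R(Robot2)"), (2, "UL(Robot1)"), (1, "C(Package)"),
         (2, "U(Package)")},                                         # 2nd episode
    ]
    antecedent = {(1, "R(Robot2)"), (2, "UL(Robot1)"), (1, "C(Package)")}
    candidates = {(2, d + "(Package)") for d in
                  ("U", "R", "D", "L", "UR", "DR", "DL", "UL")}

    for conn in connections:
        # a candidate consequent stays plausible only if every connection
        # satisfying the whole antecedent also satisfies it
        if antecedent <= conn:
            candidates &= conn
        print(sorted(candidates))
    # after episode 1: U, L and DR remain; after episode 2: only U remains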
4. CONCLUSIONS AND FURTHER WORK
In this paper we have exposed a formal model of semantic
alignment as a sequence of information-channel refinements
that are relative to the particular states of the environment
in which two agents communicate and align their respective
conceptualisations of these states. Before us, Kent [6] and Kalfoglou and Schorlemmer [4, 10] have applied Channel Theory to formalise semantic alignment, also using Barwise and Seligman's insight to focus on tokens as the enablers of information flow. Their approach to semantic alignment, however, like most ontology matching mechanisms developed to date (regardless of whether they follow a functional, design-time-based approach or an interaction-based, runtime-based approach), still defines semantic alignment in terms of a priori design decisions such as the concept taxonomy of the ontologies or the external sources brought into the alignment process. Instead, the model we have presented in this paper makes explicit the particular states of the environment in which agents are situated and are attempting to gradually align their ontological entities.
In the future, our effort will focus on the practical side of
the situated semantic alignment problem. We plan to
further refine the model presented here (e.g., to include
pragmatic issues such as termination criteria for the alignment
process) and to devise concrete ontology negotiation
protocols based on this model that agents may be able to enact.
The formal model exposed in this paper will constitute a solid basis for future practical results.
Acknowledgements
This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under grant number TIN2004-07461-C02-02, and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ramón y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund.
5. REFERENCES
[1] J. Barwise and J. Seligman. Information Flow: The
Logic of Distributed Systems. Cambridge University
Press, 1997.
[2] C. Ghidini and F. Giunchiglia. Local models
semantics, or contextual reasoning = locality +
compatibility. Artificial Intelligence, 127(2):221-259,
2001.
[3] F. Giunchiglia and P. Shvaiko. Semantic matching.
The Knowledge Engineering Review, 18(3):265-280,
2004.
[4] Y. Kalfoglou and M. Schorlemmer. IF-Map: An
ontology-mapping method based on information-flow
theory. In Journal on Data Semantics I, LNCS 2800,
2003.
[5] Y. Kalfoglou and M. Schorlemmer. Ontology mapping: The state of the art. The Knowledge Engineering Review, 18(1):1-31, 2003.
[6] R. E. Kent. Semantic integration in the Information
Flow Framework. In Semantic Interoperability and
Integration, Dagstuhl Seminar Proceedings 04391,
2005.
[7] D. Lenat. CyC: A large-scale investment in knowledge
infrastructure. Communications of the ACM, 38(11),
1995.
[8] V. López, M. Sabou, and E. Motta. PowerMap: Mapping the real Semantic Web on the fly. In Proceedings of the ISWC'06, 2006.
[9] F. McNeill. Dynamic Ontology Refinement. PhD thesis, School of Informatics, The University of Edinburgh, 2006.
[10] M. Schorlemmer and Y. Kalfoglou. Progressive
ontology alignment for meaning coordination: An
information-theoretic foundation. In 4th Int. Joint
Conf. on Autonomous Agents and Multiagent Systems,
2005.
[11] P. Shvaiko and J. Euzenat. A survey of schema-based
matching approaches. In Journal on Data Semantics
IV, LNCS 3730, 2005.
[12] L. Steels. The Origins of Ontologies and
Communication Conventions in Multi-Agent Systems.
In Journal of Autonomous Agents and Multi-Agent
Systems, 1(2), 169-194, 1998.
[13] J. van Diggelen et al. ANEMONE: An Effective Minimal Ontology Negotiation Environment. In 5th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2006.
APPENDIX
A. CHANNEL THEORY TERMS
Classification: is a tuple A = ⟨tok(A), typ(A), |=A⟩ where tok(A) is a set of tokens, typ(A) is a set of types and |=A is a binary relation between tok(A) and typ(A). If a |=A α then a is said to be of type α.
Infomorphism: f : A → B from classifications A to B is a contravariant pair of functions f = ⟨ˆf, ˇf⟩, where ˆf : typ(A) → typ(B) and ˇf : tok(B) → tok(A), satisfying the following fundamental property:
ˇf(b) |=A α iff b |=B ˆf(α)
for each token b ∈ tok(B) and each type α ∈ typ(A).
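For readers who prefer executable definitions, the fundamental property can be verified by brute force over extensional classifications; the following sketch and its data are ours, not from [1].

    # Our brute-force check of the fundamental property (data hypothetical).
    # Classifications are extensional sets of (token, type) pairs.
    A = {("a1", "red"), ("a2", "white")}
    B = {("b1", "RED"), ("b2", "WHITE")}
    f_hat = {"red": "RED", "white": "WHITE"}    # typ(A) -> typ(B)
    f_check = {"b1": "a1", "b2": "a2"}          # tok(B) -> tok(A)

    def is_infomorphism(A, B, f_hat, f_check):
        toks_B = {b for b, _ in B}
        typs_A = {t for _, t in A}
        return all(((f_check[b], a) in A) == ((b, f_hat[a]) in B)
                   for b in toks_B for a in typs_A)

    print(is_infomorphism(A, B, f_hat, f_check))   # True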
Channel: consists of two infomorphisms C = {fᵢ : Aᵢ → C}ᵢ∈{1,2} with a common codomain C, called the core of C. The tokens of C are called connections, and a connection c is said to connect the tokens ˇf₁(c) and ˇf₂(c).²
Sum: given classifications A and B, the sum of A and B, denoted by A + B, is the classification with tok(A + B) = tok(A) × tok(B) = {⟨a, b⟩ | a ∈ tok(A) and b ∈ tok(B)}, typ(A + B) = typ(A) ⊎ typ(B) = {⟨i, γ⟩ | i = 1 and γ ∈ typ(A), or i = 2 and γ ∈ typ(B)} (the disjoint union), and relation |=A+B defined by:
⟨a, b⟩ |=A+B ⟨1, α⟩ if a |=A α
⟨a, b⟩ |=A+B ⟨2, β⟩ if b |=B β
Given infomorphisms f : A → C and g : B → C, the sum f + g : A + B → C is defined on types by (f + g)ˆ(⟨1, α⟩) = ˆf(α) and (f + g)ˆ(⟨2, β⟩) = ˆg(β), and on tokens by (f + g)ˇ(c) = ⟨ˇf(c), ˇg(c)⟩.
Theory: given a set Σ, a sequent of Σ is a pair ⟨Γ, Δ⟩ of subsets of Σ. A binary relation ⊢ between subsets of Σ is called a consequence relation on Σ. A theory is a pair T = ⟨Σ, ⊢⟩ where ⊢ is a consequence relation on Σ. A sequent ⟨Γ, Δ⟩ of Σ for which Γ ⊢ Δ is called a constraint of the theory T. T is regular if it satisfies, for all α ∈ Σ and all Γ, Γ′, Δ, Δ′, Π ⊆ Σ:³
1. Identity: α ⊢ α
2. Weakening: if Γ ⊢ Δ, then Γ, Γ′ ⊢ Δ, Δ′
3. Global Cut: if Γ, Π₀ ⊢ Δ, Π₁ for each partition ⟨Π₀, Π₁⟩ of Π (i.e., Π₀ ∪ Π₁ = Π and Π₀ ∩ Π₁ = ∅), then Γ ⊢ Δ
² In fact, this is the definition of a binary channel. A channel can be defined with an arbitrary index set.
Theory generated by a classification: let A be a classification. A token a ∈ tok(A) satisfies a sequent ⟨Γ, Δ⟩ of typ(A) provided that if a is of every type in Γ then it is of some type in Δ. The theory generated by A, denoted by Th(A), is the theory ⟨typ(A), ⊢A⟩ where Γ ⊢A Δ if every token in A satisfies ⟨Γ, Δ⟩.
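Likewise, the constraints of Th(A) can be enumerated by brute force for small classifications. The sketch below is ours; the helper name th_of is hypothetical.

    from itertools import chain, combinations

    def powerset(xs):
        xs = list(xs)
        return [set(c) for c in chain.from_iterable(
            combinations(xs, r) for r in range(len(xs) + 1))]

    def th_of(classification):
        # classification: {token: set_of_types}; returns all constraints
        # <Gamma, Delta> such that every token satisfies the sequent
        types = set().union(*classification.values())
        return [(g, d) for g in powerset(types) for d in powerset(types)
                if all(not g <= tps or (tps & d)
                       for tps in classification.values())]

    A = {"a1": {"red", "dry"}, "a2": {"white", "dry"}}   # hypothetical
    print(({"red"}, {"dry"}) in th_of(A))   # True: every red token is dry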
Local logic: is a tuple L = ⟨tok(L), typ(L), |=L, ⊢L, NL⟩ where:
1. ⟨tok(L), typ(L), |=L⟩ is a classification denoted by Cla(L),
2. ⟨typ(L), ⊢L⟩ is a regular theory denoted by Th(L),
3. NL is a subset of tok(L), called the normal tokens of L, which satisfy all constraints of Th(L).
A local logic L is sound if every token in Cla(L) is normal, that is, NL = tok(L). L is complete if every sequent of typ(L) satisfied by every normal token is a constraint of Th(L).
Local logic generated by a classification: given a
classification A, the local logic generated by A, written
Log(A), is the local logic on A (i.e., Cla(Log(A)) =
A), with Th(Log(A)) = Th(A) and such that all its
tokens are normal, i.e., NLog(A) = tok(A).
Inverse image: given an infomorphism f : A → B and a local logic L on B, the inverse image of L under f, denoted f⁻¹[L], is the local logic on A such that Γ ⊢f⁻¹[L] Δ if ˆf[Γ] ⊢L ˆf[Δ], and Nf⁻¹[L] = ˇf[NL] = {a ∈ tok(A) | a = ˇf(b) for some b ∈ NL}.
Distributed logic: let C = {fi : Ai → C}i∈{1,2} be a
channel and L a local logic on its core C, the distributed
logic of C generated by L, written DLogC(L), is the
inverse image of L under the sum f1 + f2.
Refinement: let C = {fᵢ : Aᵢ → C}ᵢ∈{1,2} and C′ = {f′ᵢ : Aᵢ → C′}ᵢ∈{1,2} be two channels with the same component classifications A₁ and A₂. A refinement infomorphism from C′ to C is an infomorphism r : C′ → C such that, for each i ∈ {1, 2}, fᵢ = r ◦ f′ᵢ (i.e., ˆfᵢ = ˆr ◦ ˆf′ᵢ and ˇfᵢ = ˇf′ᵢ ◦ ˇr). Channel C′ is a refinement of C if there exists a refinement infomorphism r from C′ to C.
B. CHANNEL THEORY THEOREMS
Theorem B.1. The logic generated by a classification is
sound and complete. Furthermore, given a classification A
and a logic L on A, L is sound and complete if and only if
L = Log(A).
Theorem B.2. Let L be a logic on a classification B and f : A → B an infomorphism.
1. If L is complete then f⁻¹[L] is complete.
2. If L is sound and ˇf is surjective then f⁻¹[L] is sound.
³ All theories considered in this paper are regular.
| constraint;information-channel refinement;ontology;distributed logic;semantic alignment;distribute logic;knowledge-based system;channel refinement;sum infomorphism;multi-agent system;semantic web;disjoint union;federated database |
train_I-72 | Learning Consumer Preferences Using Semantic Similarity | In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics. | 1. INTRODUCTION
Current approaches to e-commerce treat service price as the
primary construct for negotiation by assuming that the service content
is fixed [9]. However, negotiation on price presupposes that other
properties of the service have already been agreed upon.
Nevertheless, the service provider often may not offer the exact requested service, due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15].
However, most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2]. Yet, in reality not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluating the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. In most cases, however, the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer.
Preference Learning: As an alternative, we propose an
architecture in which the service providers learn the relevant features
of a service for a particular customer over time. We represent
service requests as a vector of service features. We use an ontology
in order to capture the relations between services and to construct
the features for a given service. By using a common ontology, we
enable the consumers and producers to share a common
vocabulary for negotiation. The particular service we have used is a wine
selling service. The wine seller learns the wine preferences of the
customer to sell better targeted wines. The producer models the
requests of the consumer and its counter offers to learn which
features are more important for the consumer. Since no information is
present before the interactions start, the learning algorithm has to
be incremental so that it can be trained at run time and can revise
itself with each new interaction.
Service Generation: Even after the producer learns the important
features for a consumer, it needs a method to generate offers that
are the most relevant for the consumer among its set of possible
services. In other words, the question is how the producer uses the
information that was learned from the dialogues to make the best
offer to the consumer. For instance, assume that the producer has
learned that the consumer wants to buy a red wine but the producer
can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., knows that red and rose wines are taste-wise more similar than white wine), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure the similarity between available services and learned preferences.
The rest of this paper is organized as follows: Section 2 explains
our proposed architecture. Section 3 explains the learning
algorithms that were studied to learn consumer preferences. Section 4
studies the different service offering mechanisms. Section 5
contains the similarity metrics used in the experiments. The details of
the developed system is analyzed in Section 6. Section 7 provides
our experimental setup, test cases, and results. Finally, Section 8
discusses and compares our work with other related work.
2. ARCHITECTURE
Our main components are consumer and producer agents, which
communicate with each other to perform content-oriented
negotiation. Figure 1 depicts our architecture. The consumer agent
represents the customer and hence has access to the preferences of the
customer. The consumer agent generates requests in accordance
with these preferences and negotiates with the producer based on
these preferences. Similarly, the producer agent has access to the producer's inventory and knows which wines are available or not.
A shared ontology provides the necessary vocabulary and hence
enables a common language for agents. This ontology describes
the content of the service. Further, since an ontology can represent
concepts, their properties and their relationships semantically, the
agents can reason the details of the service that is being negotiated.
Since a service can be anything such as selling a car, reserving a
hotel room, and so on, the architecture is independent of the
ontology used. However, to make our discussion concrete, we use the
well-known Wine ontology [19] with some modification to
illustrate our ideas and to test our system. The wine ontology describes
different types of wine and includes features such as color, body,
winery of the wine and so on. With this ontology, the service that
is being negotiated between the consumer and the producer is that
of selling wine.
The data repository in Figure 1 is used solely by the producer
agent and holds the inventory information of the producer. The
data repository includes information on the products the producer
owns, the number of the products and ratings of those products.
Ratings indicate the popularity of the products among customers.
Those are used to decide which product will be offered when there
exists more than one product having same similarity to the request
of the consumer agent.
The negotiation takes place in a turn-taking fashion, where the
consumer agent starts the negotiation with a particular service
request. The request is composed of significant features of the
service. In the wine example, these features include color, winery and
so on. This is the particular wine that the customer is interested in
purchasing. If the producer has the requested wine in its inventory,
the producer offers the wine and the negotiation ends. Otherwise,
the producer offers an alternative wine from the inventory. When
the consumer receives a counter offer from the producer, it will
evaluate it. If it is acceptable, then the negotiation will end.
Otherwise, the customer will generate a new request or stick to the
previous request. This process will continue until some service is
accepted by the consumer agent or all possible offers are put
forward to the consumer by the producer.
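The turn-taking protocol just described can be sketched as a simple loop. The agent objects and method names below are hypothetical simplifications of ours, not the API of the implemented system:

    # Our sketch of the turn-taking protocol; agent objects and their
    # methods (initial_request, respond, accepts, ...) are hypothetical.
    def negotiate(consumer, producer, max_rounds=50):
        request = consumer.initial_request()
        for _ in range(max_rounds):
            offer = producer.respond(request)       # exact match or counter-offer
            if offer is None:                       # all possible offers exhausted
                return None
            if consumer.accepts(offer):
                return offer
            producer.record_rejection(offer)        # negative example for learning
            request = consumer.next_request(offer)  # revise or repeat the request
        return None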
Figure 1: Proposed Negotiation Architecture
One of the crucial challenges of content-oriented negotiation is the automatic generation of counter offers by the service producer. When the producer constructs its offer, it should consider three important things: the current request, the consumer's preferences, and the producer's available services. Both the consumer's current request and the producer's own available services are accessible by the producer. However, the consumer's preferences will in most cases not be available. Hence, the producer will have to understand the needs of the consumer from their interactions and generate a counter offer that is likely to be accepted by the consumer. This challenge can be studied in three stages:
• Preference Learning: How can the producers learn about each customer's preferences based on requests and counter offers? (Section 3)
• Service Offering: How can the producers revise their offers based on the consumer's preferences that they have learned so far? (Section 4)
• Similarity Estimation: How can the producer agent estimate the similarity between the request and the available services? (Section 5)
3. PREFERENCE LEARNING
The requests of the consumer and the counter offers of the
producer are represented as vectors, where each element in the vector
corresponds to the value of a feature. The requests of the consumers
represent individual wine products whereas their preferences are
constraints over service features. For example, a consumer may
have preference for red wine. This means that the consumer is
willing to accept any wine offered by the producers as long as the
color is red. Accordingly, the consumer generates a request where
the color feature is set to red and other features are set to arbitrary
values, e.g. (Medium, Strong, Red).
At the beginning of negotiation, the producer agent does not know the consumer's preferences but will need to learn them using information obtained from the dialogues between the producer and the consumer. The preferences denote the relative importance of the features of the services demanded by the consumer agents. For instance, the color of the wine may be important, so the consumer insists on buying a wine whose color is red and rejects all offers involving wines whose color is white or rose. On the contrary, the winery may not be as important as the color for this customer, so the consumer may have a tendency to accept wines from any winery as long as the color is red.

Table 1: How DCEA works
Type | Sample | The most general set | The most specific set
+ | (Full, Strong, White) | {(?, ?, ?)} | {(Full, Strong, White)}
− | (Full, Delicate, Rose) | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {(Full, Strong, White)}
+ | (Medium, Moderate, Red) | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {{(Full, Strong, White)}, {(Medium, Moderate, Red)}}
To tackle this problem, we propose to use incremental learning
algorithms [6]. This is necessary since no training data is
available before the interactions start. We particularly investigate two
approaches. The first one is inductive learning. This technique is
applied to learn the preferences as concepts. We elaborate on the Candidate Elimination Algorithm (CEA) for Version Space [10]. CEA is known to perform poorly if the information to be learned is disjunctive. Interestingly, consumer preferences are disjunctive most of the time. Say we are considering an agent that is buying wine: the consumer may prefer red wine or rose wine but not white wine. To use CEA with such preferences, a substantial modification is necessary. The second approach is decision trees. Decision trees can
learn from examples easily and classify new instances as positive
or negative. A well-known incremental decision tree is ID5R [18].
However, ID5R is known to suffer from high computational
complexity. For this reason, we instead use the ID3 algorithm [13] and
iteratively build decision trees to simulate incremental learning.
3.1 CEA
CEA [10] is one of the inductive learning algorithms that learns
concepts from observed examples. The algorithm maintains two
sets to model the concept to be learned. The first set is the most
general set G. G contains hypotheses about all the possible values
that the concept may obtain. As the name suggests, it is a
generalization and contains all possible values unless the values have
been identified not to represent the concept. The second set is the
most specific set S. S contains only hypotheses that are known to
identify the concept that is being learned. At the beginning of the
algorithm, G is initialized to cover all possible concepts while S is
initialized to be empty.
During the interactions, each request of the consumer can be
considered as a positive example and each counter offer generated by
the producer and rejected by the consumer agent can be thought of
as a negative example. At each interaction between the producer
and the consumer, both G and S are modified. The negative samples enforce the specialization of some hypotheses, so that no hypothesis in G accepts the negative samples as positive. When a positive sample comes, the most specific set S should
be generalized in order to cover the new training instance. As a
result, the most general hypotheses and the most specific hypotheses cover all positive training samples but do not cover any negative ones. Incrementally, G specializes and S generalizes until the two sets are equal; at that point the algorithm converges, having reached the target concept.
3.2 Disjunctive CEA
Unfortunately, CEA primarily targets conjunctive concepts. We, on the other hand, need to learn disjunctive concepts in the negotiation of a service, since a consumer may have several alternative wishes. There are several studies on learning disjunctive concepts via Version Space, some of which use multiple version spaces. For instance, Hong et al. maintain several version spaces through split and merge operations [7]; to be able to learn disjunctive concepts, they create new version spaces by examining the consistency between G and S.
We deal with CEA's lack of support for disjunctive concepts by extending our hypothesis language to include disjunctive hypotheses in addition to conjunctives and negation. Each attribute of a hypothesis has two parts: an inclusive list, which holds the valid values for that attribute, and an exclusive list, which holds the values that the attribute cannot take.
EXAMPLE 1. Assume that the most specific set is {(Light,
Delicate, Red)} and a positive example, (Light, Delicate, White) comes.
The original CEA will generalize this as (Light, Delicate, ?),
meaning the color can take any value. However, in fact, we only know
that the color can be red or white. In the DCEA, we generalize it as
{(Light, Delicate, [White, Red])}. Only when all possible values exist in the list are they replaced by ?. In other words, we let the algorithm generalize more slowly than before.
We modify the CEA algorithm to deal with this change. The
modified algorithm, DCEA, is given as Algorithm 1. Note that
compared to previous studies of disjunctive versions, our approach uses only a single version space rather than multiple version spaces. The initialization phase is the same as in the original algorithm
(lines 1, 2). If a positive sample comes, we add the sample to the specific set as before (line 4). However, we do not eliminate the hypotheses in G that do not cover this sample, since G now contains a disjunction of many hypotheses, some of which conflict with each other. Removing such a hypothesis from G would result in a loss of information, since other hypotheses are not guaranteed to cover it. After some time, some hypotheses in S can be merged to construct one hypothesis (lines 6, 7).
When a negative sample comes, we do not change S as before. We only modify the most general hypotheses so that they no longer cover this negative sample (lines 11-15). Differently from the original CEA, we try to specialize G minimally. The algorithm removes each hypothesis covering the negative sample (line 13). Then, from the removed hypothesis, we generate as many new hypotheses as there are attributes.
For each attribute in the negative sample, we add its value, one at a time, to the exclusive list of the removed hypothesis. Thus, all possible hypotheses that do not cover the negative sample are generated (line 14). Note that the exclusive list contains the values that the attribute cannot take. For example, consider the color attribute: if a hypothesis includes red in its exclusive list and ? in its inclusive list, this means that the color may take any value except red.
Algorithm 1 Disjunctive Candidate Elimination Algorithm
1: G ← the set of maximally general hypotheses in H
2: S ← the set of maximally specific hypotheses in H
3: For each training example d:
4: if d is a positive example then
5: Add d to S
6: if some s in S can be combined with d to make one element then
7: Combine s and d into sd {sd is the rule covering s and d}
8: end if
9: end if
10: if d is a negative example then
11: For each hypothesis g in G that covers d:
12: {Assume g = (x1, x2, ..., xn) and d = (d1, d2, ..., dn)}
13: Remove g from G
14: Add hypotheses g1, g2, ..., gn, where g1 = (x1−d1, x2, ..., xn), g2 = (x1, x2−d2, ..., xn), ..., gn = (x1, x2, ..., xn−dn)
15: Remove from G any hypothesis that is less general than another hypothesis in G
16: end if
EXAMPLE 2. Table 1 illustrates the first three interactions and
the workings of DCEA. The most general set and the most specific
set show the contents of G and S after the sample comes in. After
the first positive sample, S is generalized to also cover the instance.
The second sample is negative. Thus, we replace (?, ?, ?) by three
disjunctive hypotheses; each hypothesis being minimally
specialized. In this process, one attribute value of the negative sample at a time is applied to the hypothesis in the general set. The third sample is positive and generalizes S even more.
Note that in Table 1, we do not eliminate {(?-Full), ?, ?} from the general set even though there is a positive sample, (Full, Strong, White), that it does not cover. This stems from the possibility of using this rule in the generation of other hypotheses. For instance, if the example continues with a negative sample (Full, Strong, Red), we can specialize the previous rule into {(?-Full), ?, (?-Red)}. Thus, by Algorithm 1 we do not miss any information.
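A compact Python rendering of these update rules may help. It is our sketch: the merge test approximates lines 6-7 of Algorithm 1, and the pruning of less general hypotheses (line 15) is omitted for brevity.

    # Our sketch of DCEA over 3-attribute wine vectors (body, flavor, color).
    # A G-hypothesis is a tuple of per-attribute exclusion sets (empty = "?").
    # An S-hypothesis is a tuple of per-attribute inclusion sets.
    G = [(set(), set(), set())]
    S = []

    def g_covers(g, d):
        return all(v not in excl for excl, v in zip(g, d))

    def add_positive(d):
        # generalize S slowly: extend a hypothesis differing in at most one slot
        for s in S:
            diff = [i for i, (inc, v) in enumerate(zip(s, d)) if v not in inc]
            if len(diff) <= 1:
                for i in diff:
                    s[i].add(d[i])
                return
        S.append(tuple({v} for v in d))

    def add_negative(d):
        # minimally specialize every general hypothesis covering d
        for g in [g for g in G if g_covers(g, d)]:
            G.remove(g)
            for i in range(len(d)):
                G.append(tuple(e | {d[i]} if j == i else set(e)
                               for j, e in enumerate(g)))

    add_positive(("Full", "Strong", "White"))
    add_negative(("Full", "Delicate", "Rose"))
    add_positive(("Medium", "Moderate", "Red"))
    print(G)   # three hypotheses excluding Full, Delicate and Rose, respectively
    print(S)   # the two specific hypotheses of Table 1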
3.3 ID3
ID3 [13] is an algorithm that constructs decision trees in a top-down fashion from the observed examples, represented as vectors of attribute-value pairs. Applying this algorithm to our system with the intention of learning the consumer's preferences is appropriate, since this algorithm also supports learning disjunctive concepts in addition to conjunctive ones.
The ID3 algorithm is used in the learning process with the
purpose of classification of offers. There are two classes: positive and
negative. Positive means that the service description will possibly
be accepted by the consumer agent whereas the negative implies
that it will potentially be rejected by the consumer. The consumer's requests are considered positive training examples and all rejected counter-offers are treated as negative ones.
The decision tree has two types of nodes: leaf nodes, which hold the class labels of the instances, and non-leaf nodes, which hold test attributes. The test attribute in a non-leaf node is one of the attributes making up the service description; for instance, body, flavor, and color are potential test attributes for the wine service. When we want to find out whether a given service description is acceptable, we start searching from the root node, examining the values of the test attributes until reaching a leaf node.
The problem with this algorithm is that it is not incremental: all the training examples should exist before learning. To overcome this problem, the system keeps the consumer's requests throughout the negotiation interaction as positive examples and all counter-offers rejected by the consumer as negative examples. After each incoming request, the decision tree is rebuilt. Rebuilding clearly incurs additional processing load; in practice, however, we have found ID3 to be fast and the reconstruction cost to be negligible.
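The rebuild-per-interaction strategy can be sketched as follows. The code is our illustration, using the textbook information-gain criterion, not the implemented system's code:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def build_id3(rows, labels, attrs):
        # leaf: all examples agree, or no attribute is left to test
        if len(set(labels)) == 1 or not attrs:
            return Counter(labels).most_common(1)[0][0]
        def gain(a):
            parts = {}
            for row, lab in zip(rows, labels):
                parts.setdefault(row[a], []).append(lab)
            return entropy(labels) - sum(
                len(p) / len(labels) * entropy(p) for p in parts.values())
        best = max(attrs, key=gain)
        children = {}
        for v in {row[best] for row in rows}:
            sub = [(r, l) for r, l in zip(rows, labels) if r[best] == v]
            children[v] = build_id3([r for r, _ in sub], [l for _, l in sub],
                                    [a for a in attrs if a != best])
        return {"attr": best, "children": children}

    # simulated incremental learning: rebuild after every interaction
    rows, labels = [], []
    for row, lab in [(("Full", "Strong", "White"), "+"),
                     (("Full", "Delicate", "Rose"), "-"),
                     (("Medium", "Moderate", "Red"), "+")]:
        rows.append(row); labels.append(lab)
        tree = build_id3(rows, labels, [0, 1, 2])
    print(tree)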
4. SERVICE OFFERING
After learning the consumer's preferences, the producer needs to make a counter offer that is compatible with those preferences.
4.1 Service Offering via CEA and DCEA
To generate the best offer, the producer agent uses its service ontology and the CEA algorithm. The service offering mechanism is the same for both the original CEA and DCEA but, as explained before, their methods for updating G and S differ. When the producer receives a request from the consumer, the learning set of the producer is trained with this request as a positive sample. The learning components, the most specific set S and the most general set G, are actively used in offering a service. The most general set G is used by the producer in order to avoid offering services that will be rejected by the consumer agent; in other words, it filters the undesired services out of the service set, since G contains hypotheses that are consistent with the requests of the consumer. The most specific set S is used in order to find the best offer, i.e., the one most similar to the consumer's preferences. Since S holds the previous requests and the current request, estimating the similarity between this set and every service in the service list is a convenient way to find the best offer.
When the consumer starts the interaction with the producer agent, the producer agent loads all related services into its service list. This list constitutes the provider's inventory of services. Upon receiving a request, if the producer can offer an exactly matching service, then it does so. For a wine, this corresponds to selling a wine that matches the specified features of the consumer's request identically. When the producer cannot offer the service as requested, it tries to find the service most similar to the services that have been requested by the consumer during the negotiation. To do this, the producer has to compute the similarity between the services it can offer and the services that have been requested (in S). We compute the similarities in various ways, as will be explained in Section 5. After the similarity of the available services with the current S is calculated, there may be more than one service with the maximum similarity. The producer agent can break the tie in a number of ways. Here, we have associated a rating value with each service and the producer prefers the higher rated service to others.
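Putting the pieces together, offer selection is filter-then-rank. In this sketch of ours, covers, similarity and rating are placeholders for the learned G-test, the Section 5 metrics, and the product ratings, respectively:

    # Our sketch of counter-offer selection (all helpers passed in).
    def choose_offer(services, G, S, covers, similarity, rating):
        # 1. G filters out services that the consumer is known to reject
        viable = [s for s in services if any(covers(g, s) for g in G)]
        if not viable:
            return None
        # 2. rank by similarity to the learned specific set S
        best = max(similarity(s, S) for s in viable)
        tied = [s for s in viable if similarity(s, S) == best]
        # 3. break ties by the product's rating
        return max(tied, key=rating)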
4.2 Service Offering via ID3
If the producer learns the consumer"s preferences with ID3, a
similar mechanism is applied with two differences. First, since ID3
does not maintain G, the list of unaccepted services that are
classified as negative are removed from the service list. Second, the
similarities of possible services are not measured with respect to S,
but instead to all previously made requests.
4.3 Alternative Service Offering Mechanisms
In addition to these three service offering mechanisms (Service Offering with CEA, Service Offering with DCEA, and Service Offering with ID3), we include two other mechanisms:
• Random Service Offering (RO): The producer generates a counter offer randomly from the available service list, without considering the consumer's preferences.
• Service Offering considering only the current request (SCR): The producer selects a counter offer according to the similarity with the consumer's current request, but does not consider previous requests.
5. SIMILARITY ESTIMATION
Similarity can be estimated with a similarity metric that takes two entries and returns how similar they are. Several similarity metrics are used in case-based reasoning systems, such as the weighted sum of Euclidean distances, Hamming distance, and so on [12]. The similarity metric affects the performance of the system when deciding which service is the closest to the consumer's request. We first analyze some existing metrics and then propose a new semantic similarity metric named RP Similarity.
5.1 Tversky's Similarity Metric
Tversky's similarity metric compares two vectors in terms of the number of exactly matching features [17]. In Equation (1), common represents the number of matched attributes whereas different represents the number of differing attributes. Our current assumption is that α and β are equal to each other.

SM_{pq} = α(common) / ( α(common) + β(different) )   (1)

Here, when two features are compared, we assign zero for dissimilarity and one for similarity, omitting the semantic closeness among the feature values.
Tversky"s similarity metric is designed to compare two feature
vectors. In our system, whereas the list of services that can be
offered by the producer are each a feature vector, the most specific
set S is not a feature vector. S consists of hypotheses of feature
vectors. Therefore, we estimate the similarity of each hypothesis
inside the most specific set S and then take the average of the
similarities.
EXAMPLE 3. Assume that S contains the following two hypotheses: {{Light, Moderate, (Red, White)}, {Full, Strong, Rose}}. Take service s as (Light, Strong, Rose). Then the similarity of the first one is equal to 1/3 and that of the second is equal to 2/3, in accordance with Equation (1). Normally, we would take the average and obtain (1/3 + 2/3)/2, i.e., 1/2. However, the first hypothesis reflects the effect of two requests whereas the second hypothesis reflects only one request. As a result, we expect the effect of the first hypothesis to be greater than that of the second. Therefore, we calculate the average similarity by considering the number of samples that the hypotheses cover.
Let c_h denote the number of samples that hypothesis h covers and SM(h, service) denote the similarity of hypothesis h with the given service. We compute the similarity of each hypothesis with the given service and weight it by the number of samples the hypothesis covers. We obtain the overall similarity by dividing the weighted sum of the similarities of all hypotheses in S by the total number of samples covered in S:

AVG-SM(service, S) = ( Σ_{h∈S} c_h ∗ SM(h, service) ) / ( Σ_{h∈S} c_h )   (2)
Figure 2: Sample taxonomy for similarity estimation
EXAMPLE 4. For the above example, the similarity of (Light, Strong, Rose) with the specific set is (2 ∗ 1/3 + 2/3)/3, i.e., 4/9. The number of samples that a hypothesis may cover can be computed by multiplying the cardinalities of its attributes. For the hypothesis {Light, Moderate, (Red, White)}, the color attribute has cardinality two and the others have cardinality one; multiplying them, we obtain two (1 ∗ 1 ∗ 2 = 2).
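Equations (1) and (2) translate directly into Python. The sketch below is ours and reproduces the numbers of Examples 3 and 4:

    from fractions import Fraction

    def tversky(hypothesis, service, alpha=1, beta=1):
        # Equation (1); a hypothesis feature may be a set of admissible
        # values (the inclusive list), and a match counts as similarity 1
        common = sum(1 for h, s in zip(hypothesis, service)
                     if s == h or (isinstance(h, set) and s in h))
        different = len(service) - common
        return Fraction(alpha * common, alpha * common + beta * different)

    def avg_sm(service, S):
        # Equation (2): weight each hypothesis by its covered-sample count
        return (sum(c * tversky(h, service) for h, c in S)
                / sum(c for _, c in S))

    S = [(("Light", "Moderate", {"Red", "White"}), 2),   # covers two requests
         (("Full", "Strong", "Rose"), 1)]                # covers one request
    print(avg_sm(("Light", "Strong", "Rose"), S))        # 4/9, as in Example 4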
5.2 Lin's Similarity Metric
A taxonomy can be used when estimating the semantic similarity between two concepts. Semantic similarity in an IS-A taxonomy can be estimated by calculating the distance between the nodes corresponding to the compared concepts: the links between the nodes are taken as distances, and the length of the path between the nodes indicates how similar the concepts are. As an alternative to such edge counting, Lin proposed to use information content in the estimation of semantic similarity [8]. Equation (3) [8] shows Lin's similarity, where c1 and c2 are the compared concepts and c0 is the most specific concept that subsumes both of them; P(C) represents the probability that an arbitrarily selected object belongs to concept C.

Similarity(c1, c2) = 2 × log P(c0) / ( log P(c1) + log P(c2) )   (3)
5.3 Wu & Palmer's Similarity Metric
Unlike Lin, Wu and Palmer use the distance between the nodes in an IS-A taxonomy [20]. The semantic similarity is given by Equation (4) [20]. Here, the similarity between c1 and c2 is estimated, with c0 the most specific concept subsuming both classes. N1 is the number of edges between c1 and c0, N2 is the number of edges between c2 and c0, and N0 is the number of IS-A links from c0 to the root of the taxonomy.

Sim_{Wu&Palmer}(c1, c2) = 2 × N0 / ( N1 + N2 + 2 × N0 )   (4)
5.4 RP Semantic Metric
We propose to estimate the relative distance in a taxonomy
between two concepts using the following intuitions. We use Figure
2 to illustrate these intuitions.
• Parent versus grandparent: The parent of a node is more similar to the node than its grandparents are. Generalizing a concept naturally moves further away from that concept: the more general concepts are, the less similar they are. For example, AnyWineColor is the parent of ReddishColor, and ReddishColor is the parent of Red. Then, we expect the similarity between ReddishColor and Red to be higher than the similarity between AnyWineColor and Red.
• Parent versus sibling: A node would have higher similarity
to its parent than to its sibling. For instance, Red and Rose
are children of ReddishColor. In this case, we expect the
similarity between Red and ReddishColor to be higher than
that of Red and Rose.
• Sibling versus grandparent: A node is more similar to its sibling than to its grandparent. To illustrate, AnyWineColor is the grandparent of Red, and Red and Rose are siblings. Therefore, we expect Red and Rose to be more similar than AnyWineColor and Red.
As a taxonomy is represented as a tree, the tree can be traversed from the first concept being compared to the second one. At the starting node, corresponding to the first concept, the similarity value is equal to one. This value is diminished by a constant factor at each node visited along the path leading to the node of the second concept. The shorter the path between the concepts, the higher the similarity between the nodes.
Algorithm 2 Estimate-RP-Similarity(c1, c2)
Require: The constants should satisfy m > n > m², where m, n ∈ [0, 1] ⊆ R
1: Similarity ← 1
2: if c1 is equal to c2 then
3: Return Similarity
4: end if
5: commonParent ← findCommonParent(c1, c2) {commonParent is the most specific concept that covers both c1 and c2}
6: N1 ← findDistance(commonParent, c1)
7: N2 ← findDistance(commonParent, c2) {N1 and N2 are the numbers of links between each concept and the common parent}
8: if (commonParent == c1) or (commonParent == c2) then
9: Similarity ← Similarity ∗ m^(N1+N2)
10: else
11: Similarity ← Similarity ∗ n ∗ m^(N1+N2−2)
12: end if
13: Return Similarity
The relative distance between nodes c1 and c2 is estimated in the following way. Starting from c1, the tree is traversed to reach c2. At each hop, the similarity decreases, since the concepts are getting farther away from each other. However, based on our intuitions, not all hops decrease the similarity equally. Let m represent the factor for hopping from a child to a parent and n represent the factor for hopping from a sibling to another sibling. Since hopping from a node to its grandparent counts as two parent hops, the discount factor of moving from a node to its grandparent is m². According to the above intuitions, our constants should satisfy m > n > m², where the values of m and n are between zero and one. Algorithm 2 shows the distance calculation.
According to the algorithm, the similarity is first initialized to one (line 1). If the concepts are equal to each other, then the similarity is one (lines 2-4). Otherwise, we compute the common parent of the two nodes and the distance of each concept to the common parent, without considering siblings (lines 5-7). If one of the concepts is equal to the common parent, then there is no sibling relation between the concepts; for each level we multiply the similarity by m and do not consider the sibling factor, so the similarity decreases at each level at the rate of m (line 9). Otherwise, there has to be a sibling relation, which means we have to consider the effect of n when measuring similarity. Recall that we have counted N1 + N2 edges between the concepts. Since there is a sibling relation, two of these edges constitute the sibling relation. Hence, when calculating the effect of the parent relation, we use N1 + N2 − 2 edges (line 11). Some similarity estimations over the taxonomy in Figure 2 are given in Table 2; in this example, m is taken as 2/3 and n as 4/7.
Table 2: Sample similarity estimations over the sample taxonomy
Similarity(ReddishColor, Rose) = 1 ∗ (2/3) = 0.6666667
Similarity(Red, Rose) = 1 ∗ (4/7) = 0.5714286
Similarity(AnyWineColor, Rose) = 1 ∗ (2/3)² = 0.44444445
Similarity(White, Rose) = 1 ∗ (2/3) ∗ (4/7) = 0.3809524
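Algorithm 2 also translates directly to Python. The sketch below is ours; it repeats the parent-pointer helper from the previous sketch and reproduces Table 2 with m = 2/3 and n = 4/7:

    # Our rendering of Algorithm 2; requires m > n > m*m.
    parent = {"ReddishColor": "AnyWineColor", "White": "AnyWineColor",
              "Red": "ReddishColor", "Rose": "ReddishColor"}

    def ancestors(c):
        path = [c]
        while c in parent:
            c = parent[c]
            path.append(c)
        return path

    def rp_similarity(c1, c2, m=2/3, n=4/7):
        if c1 == c2:
            return 1.0
        p1, p2 = ancestors(c1), ancestors(c2)
        common = next(a for a in p1 if a in p2)     # most specific common parent
        n1, n2 = p1.index(common), p2.index(common)
        if common in (c1, c2):                      # pure ancestor relation
            return m ** (n1 + n2)
        return n * m ** (n1 + n2 - 2)               # two edges form the sibling hop

    for c in ("ReddishColor", "Red", "AnyWineColor", "White"):
        print(c, rp_similarity(c, "Rose"))
    # 0.667, 0.571, 0.444 and 0.381: the four rows of Table 2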
For all semantic similarity metrics in our architecture, the taxonomy for features is held in the shared ontology. In order to evaluate the similarity of a feature vector, we first estimate the similarity of the features one by one and then take the average of these similarities; the result is the average semantic similarity of the entire feature vector.
6. DEVELOPED SYSTEM
We have implemented our architecture in Java. To ease testing
of the system, the consumer agent has a user interface that allows
us to enter various requests. The producer agent is fully automated
and the learning and service offering operations work as explained
before. In this section, we explain the implementation details of the
developed system.
We use OWL [11] as our ontology language and JENA as our ontology reasoner. The shared ontology is a modified version of the Wine Ontology [19]. It includes the description of wine as a concept and different types of wine. All participants of the negotiation use this ontology for understanding each other. According to the ontology, seven properties make up the wine concept. The consumer agent and the producer agent obtain the possible values for these properties by querying the ontology. Thus, all possible values for the components of the wine concept, such as color, body, sugar and so on, can be reached by both agents. A variety of wine types are also described in this ontology, such as Burgundy, Chardonnay, CheninBlanc and so on. Intuitively, any wine type described in the ontology also represents a wine concept. This allows us to consider instances of Chardonnay wine as instances of the Wine class.
In addition to the wine description, hierarchical information about some features can be inferred from the ontology. For instance, we can represent the information that Europe Continent covers Western Country, Western Country covers French Region, and French Region covers territories such as Loire, Bordeaux and so on. This hierarchical information is used in the estimation of semantic similarity. Here, some reasoning can be performed, such as: if a concept X covers Y and Y covers Z, then concept X covers Z. For example, Europe Continent covers Bordeaux.
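This covers reasoning is a plain transitive closure; the following sketch of ours uses the region names from the fragment above:

    # Our sketch: transitive closure of the "covers" relation.
    covers = {"EuropeContinent": {"WesternCountry"},
              "WesternCountry": {"FrenchRegion"},
              "FrenchRegion": {"Loire", "Bordeaux"}}

    def covers_trans(x, y):
        frontier = set(covers.get(x, ()))
        while frontier:
            if y in frontier:
                return True
            frontier = set().union(*(covers.get(z, ()) for z in frontier))
        return False

    print(covers_trans("EuropeContinent", "Bordeaux"))   # True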
For some features, such as body, flavor and sugar, there is no hierarchical information, but their values are semantically leveled. When that is the case, we assign reasonable similarity values to these features. For example, the body can be light, medium, or strong; we assume that light is 0.66 similar to medium but only 0.33 similar to strong.
The WineStock ontology is the producer's inventory; it describes a product class, WineProduct, which the producer needs in order to record the wines it sells. The ontology contains the individuals of this class, which represent the available services the producer owns. We have prepared two separate WineStock ontologies for testing: the first contains 19 available wine products, the second 50 products.
7. PERFORMANCE EVALUATION
We evaluate the performance of the proposed systems with respect to the learning technique used, DCEA or ID3, by comparing them with CEA, RO (random offering), and SCR (offering based on the current request only). We apply a variety of scenarios to this dataset in order to observe the performance differences. Each test scenario contains a list of preferences for the user and the number of matching products in the product list. Table 3 shows these preferences and the availability of those products in the inventory for the first five scenarios. Note that these preferences are internal to the consumer, and the producer tries to learn them during negotiation.
Table 3: Availability of wines in different test scenarios
ID | Preference of consumer | Availability (out of 19)
1 | Dry wine | 15
2 | Red and dry wine | 8
3 | Red, dry and moderate wine | 4
4 | Red and strong wine | 2
5 | Red or rose, and strong | 3
7.1 Comparison of Learning Algorithms
In the comparison of learning algorithms, we use the five scenarios in Table 3. Here, we first use Tversky's similarity measure. With these test cases, we are interested in finding the number
iterations that are required for the producer to generate an acceptable
offer for the consumer. Since the performance also depends on the
initial request, we repeat our experiments with different initial
requests. Consequently, for each case, we run the algorithms five
times with several variations of the initial requests. In each
experiment, we count the number of iterations that were needed to reach
an agreement. We take the average of these numbers in order to
evaluate these systems fairly. As is customary, we test each
algorithm with the same initial requests.
Table 4 compares the approaches using the different learning algorithms. When a large part of the inventory is compatible with the customer's preferences, as in the first test case, the performance of all techniques is nearly the same (Scenario 1). As the number of compatible services drops, RO performs poorly, as expected. The second worst method is SCR, since it only considers the customer's most recent request and does not learn from previous requests. CEA gives the best results when it can generate an answer, but it cannot handle cases containing disjunctive preferences, such as the one in Scenario 5. ID3 and DCEA achieve the best results: their performance is comparable, and they can handle all cases, including Scenario 5.
Table 4: Comparison of learning algorithms in terms of average number of interactions
Scenario | DCEA | SCR | RO | CEA | ID3
Scenario 1 | 1.2 | 1.4 | 1.2 | 1.2 | 1.2
Scenario 2 | 1.4 | 1.4 | 2.6 | 1.4 | 1.4
Scenario 3 | 1.4 | 1.8 | 4.4 | 1.4 | 1.4
Scenario 4 | 2.2 | 2.8 | 9.6 | 1.8 | 2
Scenario 5 | 2 | 2.6 | 7.6 | 1.75 + no offer | 1.8
Avg. of all cases | 1.64 | 2 | 5.08 | 1.51 + no offer | 1.56
7.2 Comparison of Similarity Metrics
To compare the similarity metrics explained in Section 5, we fix the learning algorithm to DCEA. In addition to the scenarios shown in Table 3, we add the following five new scenarios, which exercise the hierarchical information.
• The customer wants to buy wine whose winery is located in
California and whose grape is a type of white grape.
Moreover, the winery of the wine should not be expensive. There
are only four products meeting these conditions.
• The customer wants to buy wine whose color is red or rose and whose grape type is red grape. In addition, the location of the wine should be in Europe. The sweetness degree should be dry or off-dry, the flavor delicate or moderate, and the body medium or light. Furthermore, the winery of the wine should be an expensive winery. There are two products meeting all these requirements.
• The customer wants to buy a moderate rose wine produced around the French Region, from a winery in the category Moderate Winery. There is only one product meeting these requirements.
• The customer wants to buy an expensive red wine produced around the California Region, or a cheap white wine produced around the Texas Region. There are five available products.
• The customer wants to buy a delicate white wine whose producer is in the category Expensive Winery. There are two available products.
The first seven scenarios are tested with the first dataset that
contains a total of 19 services and the last three scenarios are tested
with the second dataset that contains 50 services.
Table 5 gives the performance evaluation in terms of the number of interactions needed to reach a consensus. Tversky's metric gives the worst results, since it does not consider semantic similarity. Lin's metric performs better than Tversky's but worse than the remaining two. Wu & Palmer's metric and the RP similarity measure give nearly the same performance, the best of all. Examining the results, taking semantic closeness into account clearly increases the performance.
8. DISCUSSION
We review the recent literature in comparison to our work. Tama
et al. [16] propose a new approach based on ontology for
negotiation. According to their approach, the negotiation protocols used
in e-commerce can be modeled as ontologies. Thus, the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details being hard-coded.
Table 5: Comparison of similarity metrics in terms of number
of interactions
Run Tversky Lin Wu Palmer RP
Scenario 1: 1.2 1.2 1 1
Scenario 2: 1.4 1.4 1.6 1.6
Scenario 3: 1.4 1.8 2 2
Scenario 4: 2.2 1 1.2 1.2
Scenario 5: 2 1.6 1.6 1.6
Scenario 6: 5 3.8 2.4 2.6
Scenario 7: 3.2 1.2 1 1
Scenario 8: 5.6 2 2 2.2
Scenario 9: 2.6 2.2 2.2 2.6
Scenario 10: 4.4 2 2 1.8
Average of all cases: 2.9 1.82 1.7 1.76
Tamma et al. model the negotiation protocol using ontologies, we
have instead modeled the service to be negotiated. Further, we have
built a system with which negotiation preferences can be learned.
Sadri et al. study negotiation in the context of resource
allocation [14]. Agents have limited resources and need to request
missing resources from other agents. A mechanism based on
dialogue sequences among agents is proposed as a solution. The
mechanism relies on an observe-think-act agent cycle. These
dialogues include offering resources, exchanging resources and offering
alternative resources. Each agent in the system plans its actions to
reach a goal state. Contrary to our approach, Sadri et al.'s study is
not concerned with agents learning each other's preferences.
Brzostowski and Kowalczyk propose an approach to select an
appropriate negotiation partner by investigating previous multi-attribute
negotiations [1]. For achieving this, they use case-based reasoning.
Their approach is probabilistic since the behavior of the partners
can change at each iteration. In our approach, we are interested in
negotiating the content of the service. After the consumer and
producer agree on the service, price-oriented negotiation mechanisms
can be used to agree on the price.
Fatima et al. study the factors that affect negotiation, such as
preferences, deadlines and price, since an agent that
develops a strategy against its opponent should consider all of them [5].
In their approach, the goal of the seller agent is to sell the service
for the highest possible price, whereas the goal of the buyer agent
is to buy the good at the lowest possible price. The time interval
affects these agents differently. Compared to Fatima et al., our focus
is different. While they study the effect of time on negotiation, our
focus is on learning preferences for a successful negotiation.
Faratin et al. propose a multi-issue negotiation mechanism in which
the service variables under negotiation, such as price and quality of
service, are traded off against each other
(e.g., a higher price for earlier delivery) [4]. They build a
heuristic model for trade-offs that combines fuzzy similarity estimation with a
hill-climbing exploration for possibly acceptable offers. Although
we address a similar problem, we learn the preferences of the
customer with inductive learning and generate counter-offers
in accordance with these learned preferences. Faratin et al. only
use the last offer made by the consumer when calculating the
similarity for choosing a counter-offer. Unlike them, we also take into
account the previous requests of the consumer. In their experiments,
Faratin et al. assume that the weights for service variables are fixed
a priori. On the contrary, we learn these preferences over time.
In our future work, we plan to integrate ontology reasoning into
the learning algorithm so that hierarchical information can be learned
from the subsumption hierarchy of relations. Further, by using
relationships among features, the producer can discover new
knowledge from existing knowledge. These are interesting directions
that we will pursue.
9. REFERENCES
[1] J. Brzostowski and R. Kowalczyk. On possibilistic
case-based reasoning for selecting partners for
multi-attribute agent negotiation. In Proceedings of the 4th
Intl. Joint Conference on Autonomous Agents and
MultiAgent Systems (AAMAS), pages 273-278, 2005.
[2] L. Busch and I. Horstman. A comment on issue-by-issue
negotiations. Games and Economic Behavior, 19:144-148,
1997.
[3] J. K. Debenham. Managing e-market negotiation in context
with a multiagent system. In Proceedings of the 21st International
Conference on Knowledge Based Systems and Applied
Artificial Intelligence, ES'2002, 2002.
[4] P. Faratin, C. Sierra, and N. R. Jennings. Using similarity
criteria to make issue trade-offs in automated negotiations.
Artificial Intelligence, 142:205-237, 2002.
[5] S. Fatima, M. Wooldridge, and N. Jennings. Optimal agents
for multi-issue negotiation. In Proceedings of the 2nd Intl.
Joint Conference on Autonomous Agents and MultiAgent
Systems (AAMAS), pages 129-136, 2003.
[6] C. Giraud-Carrier. A note on the utility of incremental
learning. AI Communications, 13(4):215-223, 2000.
[7] T.-P. Hong and S.-S. Tseng. Splitting and merging version
spaces to learn disjunctive concepts. IEEE Transactions on
Knowledge and Data Engineering, 11(5):813-815, 1999.
[8] D. Lin. An information-theoretic definition of similarity. In
Proc. 15th International Conf. on Machine Learning, pages
296-304. Morgan Kaufmann, San Francisco, CA, 1998.
[9] P. Maes, R. H. Guttman, and A. G. Moukas. Agents that buy
and sell. Communications of the ACM, 42(3):81-91, 1999.
[10] T. M. Mitchell. Machine Learning. McGraw Hill, NY, 1997.
[11] OWL. OWL: Web ontology language guide, 2003.
http://www.w3.org/TR/2003/CR-owl-guide-20030818/.
[12] S. K. Pal and S. C. K. Shiu. Foundations of Soft Case-Based
Reasoning. John Wiley & Sons, New Jersey, 2004.
[13] J. R. Quinlan. Induction of decision trees. Machine Learning,
1(1):81-106, 1986.
[14] F. Sadri, F. Toni, and P. Torroni. Dialogues for negotiation:
Agent varieties and dialogue sequences. In ATAL 2001,
Revised Papers, volume 2333 of LNAI, pages 405-421.
Springer-Verlag, 2002.
[15] M. P. Singh. Value-oriented electronic commerce. IEEE
Internet Computing, 3(3):6-7, 1999.
[16] V. Tamma, S. Phelps, I. Dickinson, and M. Wooldridge.
Ontologies for supporting negotiation in e-commerce.
Engineering Applications of Artificial Intelligence,
18:223-236, 2005.
[17] A. Tversky. Features of similarity. Psychological Review,
84(4):327-352, 1977.
[18] P. E. Utgoff. Incremental induction of decision trees.
Machine Learning, 4:161-186, 1989.
[19] Wine, 2003.
http://www.w3.org/TR/2003/CR-owl-guide-20030818/wine.rdf.
[20] Z. Wu and M. Palmer. Verb semantics and lexical selection.
In 32nd Annual Meeting of the Association for
Computational Linguistics, pages 133-138, 1994.
| incremental decision tree;candidate elimination algorithm;rp similarity;disjunctive cea;consumer preference;disjunctive hypothesis;service;ontology;decision tree;semantic similarity;price;datum repository;negotiation;preference learning;inductive learn;multiple version space;learning set;similarity metric;consumer agent;id3
train_I-73 | Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed | In open MAS it is often a problem to achieve agents' interoperability. The heterogeneity of its components turns the establishment of interaction or cooperation among them into a non trivial task, since agents may use different internal models and the decision about trust other agents is a crucial condition to the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interaction among heterogeneous agents when exchanging reputation values and may be used for agents that use different reputation models. | 1. INTRODUCTION
Open multiagent systems (MAS) are composed of autonomous
distributed agents that may enter and leave the agent society at
will, because open systems have no centralized control over
the development of their parts [1]. Since agents are considered as
autonomous entities, we cannot assume that there is a way to
control their internal behavior. These features are interesting to
obtain flexible and adaptive systems but they also create new risks
about the reliability and the robustness of the system. Solutions to
this problem have been proposed by the way of trust models
where agents are endowed with a model of other agents that
allows them to decide if they can or cannot trust another agent.
Such trust decisions are very important because they are an essential
condition for the formation of agents' cooperation. Trust
decision processes use the concept of reputation as the basis of the
decision. Reputation is a subject that has been studied in several
works [4][5][8][9] with different approaches, but also with
different semantics attached to the reputation concept. Casare and
Sichman [2][3] proposed a Functional Ontology of Reputation
(FORe) and gave some directions on how it could be used to allow
interoperability among different agent reputation models. This
paper describes how the FORe can be applied to allow
interoperability among agents that have different reputation
models. An outline of this approach is sketched in the context of a
testbed for the experimentation and comparison of trust models,
the ART testbed [6].
2. THE FUNCTIONAL ONTOLOGY OF
REPUTATION (FORe)
In recent years several computational models of reputation have
been proposed [7][10][13][14]. As an example of research
produced in the MAS field we refer to three of them: a cognitive
reputation model [5], a typology of reputation [7] and the
reputation model used in the ReGret system [9][10]. Each model
includes its own specific concepts that may not exist in other
models, or exist with a different name. For instance, Image and
Reputation are two central concepts in the cognitive reputation
model. These concepts do not exist in the typology of reputation
or in the ReGret model. In the typology of reputation, we can find
some similar concepts such as direct reputation and indirect
reputation but there are some slight semantic differences. In the
same way, the ReGret model includes four kinds of reputation
(direct, witness, neighborhood and system) that overlap with the
concepts of other models but that are not exactly the same.
The Functional Ontology of Reputation (FORe) was defined as a
common semantic basis that subsumes the concepts of the main
reputation models. The FORe includes, as its kernel, the following
concepts: reputation nature, roles involved in reputation formation
and propagation, information sources for reputation, evaluation of
reputation, and reputation maintenance. The ontology concept
ReputationNature is composed of concepts such as
IndividualReputation, GroupReputation and ProductReputation.
Reputation formation and propagation involves several roles,
played by the entities or agents that participate in those processes.
The ontology defines the concepts ReputationProcess and
ReputationRole. Moreover, reputation can be classified according
to the origin of beliefs and opinions that can derive from several
sources. The ontology defines the concept ReputationType which
can be PrimaryReputation or SecondaryReputation.
PrimaryReputation is composed of concepts ObservedReputation
and DirectReputation and the concept SecondaryReputation is
composed of concepts such as PropagatedReputation and
CollectiveReputation. More details about the FORe can be found
on [2][3].
3. MAPPING THE AGENT REPUTATION
MODELS TO THE FORe
Visser et al [12] suggest three different ways to support semantic
integration of different sources of information: a centralized
approach, where each source of information is related to one
common domain ontology; a decentralized approach, where every
source of information is related to its own ontology; and a hybrid
approach, where every source of information has its own ontology
and the vocabulary of these ontologies are related to a common
ontology. The latter organizes the common global vocabulary in
order to support comparison of the source ontologies. Casare and
Sichman [3] used the hybrid approach to show that the FORe
serves as a common ontology for several reputation models.
Therefore, considering the ontologies that describe the agent
reputation models, we can define a mapping between these
ontologies and the FORe whenever the ontologies use a common
vocabulary. Moreover, the information concerning the mappings
between the agent reputation models and the FORe can be directly
inferred by simply classifying, in an ontology tool with a reasoning
engine, the ontology that results from integrating a given
reputation model ontology with the FORe.
For instance, a mapping between the Cognitive Reputation Model
ontology and the FORe relates the concepts Image and Reputation
to PrimaryReputation and SecondaryReputation from FORe,
respectively. Also, a mapping between the Typology of
Reputation and the FORe relates the concepts Direct Reputation
and Indirect Reputation to PrimaryReputation and
SecondaryReputation from FORe, respectively. Nevertheless, the
concepts Direct Trust and Witness Reputation from the Regret
System Reputation Model are mapped to PrimaryReputation and
PropagatedReputation from FORe. Since PropagatedReputation is
a sub-concept of SecondaryReputation, it can be inferred that
Witness Reputation is also mapped to SecondaryReputation.
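The mappings just described can be summarised as a small lookup structure.
The Python sketch below hard codes them for illustration (in our setting they
are inferred by a reasoner over the integrated ontologies, not written by
hand) and replays the subsumption inference for Witness Reputation:

# Mappings from this section, hard coded for illustration.
MODEL_TO_FORE = {
    "CognitiveModel": {"Image": "PrimaryReputation",
                       "Reputation": "SecondaryReputation"},
    "Typology": {"DirectReputation": "PrimaryReputation",
                 "IndirectReputation": "SecondaryReputation"},
    "ReGret": {"DirectTrust": "PrimaryReputation",
               "WitnessReputation": "PropagatedReputation"},
}

# Sub-concept links in the FORe kernel (Section 2).
FORE_PARENT = {"PropagatedReputation": "SecondaryReputation",
               "CollectiveReputation": "SecondaryReputation",
               "ObservedReputation": "PrimaryReputation",
               "DirectReputation": "PrimaryReputation"}

def subsumers(concept):
    # All FORe concepts that subsume `concept`, including itself.
    result = [concept]
    while concept in FORE_PARENT:
        concept = FORE_PARENT[concept]
        result.append(concept)
    return result

# Witness Reputation maps to PropagatedReputation, hence also to
# SecondaryReputation, as inferred in the text:
print(subsumers(MODEL_TO_FORE["ReGret"]["WitnessReputation"]))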
4. EXPERIMENTAL SCENARIOS USING
THE ART TESTBED
To exemplify the use of the mappings from the last section, we define a
scenario where several agents are implemented using different
agent reputation models. This scenario includes the agents'
interaction during the simulation of the game defined by ART [6],
in order to describe the ways interoperability is possible between
different trust models using the FORe.
4.1 The ART testbed
The ART testbed provides a simulation engine on which several
agents, using different trust models, may run. The simulation
consists in a game where the agents have to decide to trust or not
other agents. The game"s domain is art appraisal, in which agents
are required to evaluate the value of paintings based on
information exchanged among other agents during agents"
interaction. The information can be an opinion transaction, when
an agent asks other agents to help it in its evaluation of a painting;
or a reputation transaction, when the information required is
about the reputation of another agent (a target) for a given era.
More details about the ART testbed can be found in [6].
The ART common reputation model was enhanced with semantic
data obtained from FORe. A general agent architecture for
interoperability was defined [11] to allow agents to reason about
the information received from reputation interactions. This
architecture contains two main modules: the Reputation Mapping
Module (RMM) which is responsible for mapping concepts
between an agent reputation model and FORe; and the Reputation
Reasoning Module (RRM) which is responsible for dealing with
information about reputation according to the agent reputation
model.
4.2 Reputation transaction scenarios
While incorporating the FORe into the ART common reputation model,
we have extended it to allow richer interactions that involve
reputation transactions. In this section we describe two scenarios
concerning reputation transactions in the context of the ART testbed;
the first is valid for any kind of reputation transaction and the
second is specific to the ART domain.
4.2.1 General scenario
Suppose that agents A, B and C are implemented according to the
aforementioned general agent architecture with the enhanced ART
common reputation model, using different reputation models.
Agent A uses the Typology of Reputation model, agent B uses the
Cognitive Reputation Model and agent C uses the ReGret System
model. Consider the interaction about reputation where agents A
and B receive from agent C information about the reputation of
agent Y. A big picture of this interaction is shown in Figure 1.
[Figure 1 depicts agent C (ReGret ontology) sending (Y, value=0.8,
witnessreputation) as the common-model tuple (Y, value=0.8,
PropagatedReputation); agent A (Typology ontology) receives it as
(Y, value=0.8, propagatedreputation) and agent B (Cognitive Model
ontology) receives it as (Y, value=0.8, reputation).]
Figure 1. Interaction about reputation
The information witness reputation from agent C is treated by its
RMM and is sent as PropagatedReputation to both agents. The
corresponding information in agent A's reputation model is
propagated reputation and in agent B's reputation model is
reputation. The way agents A and B make use of the information
depends on their internal reputation model and their RRM
implementation.
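A minimal sketch of this exchange follows, reusing the mapping table from
Section 3. The function names (rmm_send, rmm_receive) are illustrative, not
the actual ART testbed or architecture API, and the inbound tables are read
off Figure 1:

# Outbound mapping for agent C (repeated from the Section 3 sketch) and the
# inbound vocabularies of agents A and B, taken from Figure 1.
MODEL_TO_FORE = {"ReGret": {"WitnessReputation": "PropagatedReputation"}}
FORE_TO_MODEL = {
    "Typology": {"PropagatedReputation": "propagatedreputation"},
    "CognitiveModel": {"PropagatedReputation": "reputation"},
}

def rmm_send(model, target, value, concept):
    # Sender's RMM: internal tuple -> enriched common model tuple.
    return (target, value, MODEL_TO_FORE[model][concept])

def rmm_receive(model, message):
    # Receiver's RMM: common model tuple -> internal vocabulary (then to the RRM).
    target, value, fore_concept = message
    return (target, value, FORE_TO_MODEL[model][fore_concept])

msg = rmm_send("ReGret", "Y", 0.8, "WitnessReputation")
print(msg)                                   # ('Y', 0.8, 'PropagatedReputation')
print(rmm_receive("Typology", msg))          # ('Y', 0.8, 'propagatedreputation')
print(rmm_receive("CognitiveModel", msg))    # ('Y', 0.8, 'reputation')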
4.2.2 ART scenario
Considering the same agents A and B and the art appraisal domain
of ART, another interesting scenario describes the following
situation: agent A asks agent B for information about agents it
knows that have skill in some specific painting era. In this case
agent A wants information concerning the direct reputation agent
B has about agents that have skill in a specific era, such as
cubism. Following the same steps as in the previous scenario, agent
A's message is prepared in its RRM using information from its
internal model. A big picture of this interaction is shown in Figure 2.
[Figure 2 depicts agent A (Typology ontology) sending the query
(agent = ?, value = ?, skill = cubism, reputation = directreputation)
as the common-model query (agent = ?, value = ?, skill = cubism,
reputation = PrimaryReputation), which agent B (Cognitive Model
ontology) interprets as (agent = ?, value = ?, skill = cubism,
reputation = image).]
Figure 2. Interaction about specific types of reputation values
Agent B's response to agent A is processed in its RRM and is
composed of tuples (agent, value, cubism, image), where
each pair (agent, value) gives an agent and the reputation value
that agent B, by its own opinion, associates with that agent's
expertise about cubism. This response is forwarded to the
RMM in order to be translated to the enriched common model and
to be sent to agent A. After receiving the information sent by
agent B, agent A processes it in its RMM and translates it to its
own reputation model to be analyzed by its RRM.
5. CONCLUSION
In this paper we present a proposal for reducing the
incompatibility between reputation models by using a general
agent architecture for reputation interaction which relies on a
functional ontology of reputation (FORe), used as a globally
shared reputation model. A reputation mapping module allows
agents to translate information from their internal reputation
model into the shared model and vice versa. The ART testbed has
been enriched to use the ontology during agent transactions. Some
scenarios were described to illustrate our proposal, and they
suggest a promising way to improve the process of building
reputation using only existing technologies.
6. ACKNOWLEDGMENTS
Anarosa A. F. Brandão is supported by CNPq/Brazil grant
310087/2006-6 and Jaime Sichman is partially supported by
CNPq/Brazil grants 304605/2004-2, 482019/2004-2 and
506881/2004-1. Laurent Vercouter was partially supported by
FAPESP grant 2005/02902-5.
7. REFERENCES
[1] Agha, G. A. Abstracting Interaction Patterns: A
Programming Paradigm for Open Distributed Systems, In
(Eds) E. Najm and J.-B. Stefani, Formal Methods for Open
Object-based Distributed Systems IFIP Transactions, 1997,
Chapman Hall.
[2] Casare, S. and Sichman, J. S. Towards a Functional Ontology
of Reputation. In Proc. of the 4th Intl. Joint Conference on
Autonomous Agents and Multi Agent Systems (AAMAS'05),
Utrecht, The Netherlands, 2005, v.2, pp. 505-511.
[3] Casare, S. and Sichman, J.S. Using a Functional Ontology of
Reputation to Interoperate Different Agent Reputation
Models, Journal of the Brazilian Computer Society, (2005),
11(2), pp. 79-94.
[4] Castelfranchi, C. and Falcone, R. Principles of trust in MAS:
cognitive anatomy, social importance and quantification. In
Proceedings of ICMAS"98, Paris, 1998, pp. 72-79.
[5] Conte, R. and Paolucci, M. Reputation in Artificial Societies:
Social Beliefs for Social Order, Kluwer Publ., 2002.
[6] Fullam, K.; Klos, T.; Muller, G.; Sabater, J.; Topol, Z.;
Barber, S.; Rosenschein, J.; Vercouter, L. and Voss, M. A
specification of the agent reputation and trust (ART) testbed:
experimentation and competition for trust in agent societies.
In Proc. of the 4th Intl. Joint Conf. on Autonomous Agents
and Multiagent Systems (AAMAS'05), ACM, 2005, pp. 512-518.
[7] Mui, L.; Halberstadt, A.; Mohtashemi, M. Notions of
Reputation in Multi-Agents Systems: A Review. In: Proc of
1st Intl. Joint Conf. on Autonomous Agents and Multi-agent
Systems (AAMAS 2002), Bologna, Italy, 2002, 1, 280-287.
[8] Muller, G. and Vercouter, L. Decentralized monitoring of
agent communication with a reputation model. In Trusting
Agents for Trusting Electronic Societies, LNCS 3577, 2005,
pp. 144-161.
[9] Sabater, J. and Sierra, C. ReGret: Reputation in gregarious
societies. In Müller, J. et al (Eds) Proc. of the 5th Intl. Conf.
on Autonomous Agents, Canada, 2001, ACM, 194-195.
[10] Sabater, J. and Sierra, C. Review on Computational Trust
and Reputation Models. In: Artificial Intelligence Review,
Kluwer Acad. Publ., (2005), v. 24, n. 1, pp. 33 - 60.
[11] Vercouter, L., Casare, S., Sichman, J. and Brandão, A. An
experience on reputation models interoperability based on a
functional ontology. In Proc. of the 20th IJCAI, Hyderabad,
India, 2007, pp. 617-622.
[12] Visser, U.; Stuckenschmidt, H.; Wache, H. and Vogele, T.
Enabling technologies for inter-operability. In U. Visser
and H. Pundt, Eds, Workshop on the 14th Intl Symp. of
Computer Science for Environmental Protection, Bonn,
Germany, 2000, pp. 35-46.
[13] Yu, B. and Singh, M.P. An Evidential Model of Distributed
Reputation Management. In: Proc. of the 1st Intl Joint Conf.
on Autonomous Agents and Multi-agent Systems (AAMAS
2002), Bologna, Italy, 2002, part 1, pp. 294 - 301.
[14] Zacharia, G. and Maes, P. Trust Management Through
Reputation Mechanisms. In: Applied Artificial Intelligence,
14(9), 2000, pp. 881-907.
| interoperability;reputation model;agent architecture;functional ontology of reputation;ontology;heterogeneous agent;reputation value;autonomous distributed agent;reputation formation;reputation;art testbed;trust;art testb;multiagent system
train_I-74 | On the relevance of utterances in formal inter-agent dialogues | Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues. This work has established the space in which agents are permitted to interact through dialogues. Recently, there has been increasing interest in the mechanisms agents might use to choose how to act - the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue. Key in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks. Here we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring. | 1. INTRODUCTION
Finding ways for agents to reach agreements in
multiagent systems is an area of active research. One mechanism
for achieving agreement is through the use of argumentation
- where one agent tries to convince another agent of
something during the course of some dialogue. Early examples of
argumentation-based approaches to multiagent agreement
include the work of Dignum et al. [7], Kraus [14],
Parsons and Jennings [16], Reed [23], Schroeder et al. [25] and
Sycara [26].
The work of Walton and Krabbe [27], popularised in the
multiagent systems community by Reed [23], has been
particularly influential in the field of argumentation-based
dialogue. This work influenced the field in a number of ways,
perhaps most deeply in framing multi-agent interactions as
dialogue games in the tradition of Hamblin [13]. Viewing
dialogues in this way, as in [2, 21], provides a powerful
framework for analysing the formal properties of dialogues, and
for identifying suitable protocols under which dialogues can
be conducted [18, 20]. The dialogue game view overlaps with
work on conversation policies (see, for example, [6, 10]), but
differs in considering the entire dialogue rather than dialogue
segments.
In this paper, we extend the work of [18] by considering
the role of relevance - the relationship between utterances
in a dialogue. Relevance is a topic of increasing interest
in argumentation-based dialogue because it relates to the
scope that an agent has for applying strategic manoeuvring
to obtain the outcomes that it requires [19, 22, 24]. Our
work identifies the limits on such rhetorical manoeuvring,
showing when it can and cannot have an effect.
2. BACKGROUND
We begin by introducing the formal system of
argumentation that underpins our approach, as well as the
corresponding terminology and notation, all taken from [2, 8, 17].
A dialogue is a sequence of messages passed between two
or more members of a set of agents A. An agent α maintains
a knowledge base, Σα, containing formulas of a propositional
language L and having no deductive closure. Agent α also
maintains the set of its past utterances, called the
commitment store, CSα. We refer to this as an agent"s public
knowledge, since it contains information that is shared with
other agents. In contrast, the contents of Σα are private
to α.
Note that in the description that follows, we assume that
⊢ is the classical inference relation, that ≡ stands for logical
equivalence, and we use Δ to denote all the information
available to an agent. Thus in a dialogue between two agents
α and β, Δα = Σα ∪ CSα ∪ CSβ, so the commitment store
CSα can be loosely thought of as a subset of Δα consisting of
the assertions that have been made public. In some dialogue
games, such as those in [18], anything in CSα is either in Σα
or can be derived from it. In other dialogue games, such as
those in [2], CSα may contain things that cannot be derived
from Σα.
Definition 2.1. An argument A is a pair (S, p) where p
is a formula of L and S a subset of Δ such that (i) S is
consistent; (ii) S ⊢ p; and (iii) S is minimal, so no proper
subset of S satisfying both (i) and (ii) exists.
S is called the support of A, written S = Support(A) and p
is the conclusion of A, written p = Conclusion(A). Thus we
talk of p being supported by the argument (S, p).
In general, since Δ may be inconsistent, arguments in
A(Δ), the set of all arguments which can be made from Δ,
may conflict, and we make this idea precise with the notion
of undercutting:
Definition 2.2. Let A1 and A2 be arguments in A(Δ).
A1 undercuts A2 iff ∃¬p ∈ Support(A2) such that p ≡
Conclusion(A1).
In other words, an argument is undercut if and only if there
is another argument which has as its conclusion the negation
of an element of the support for the first argument.
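A minimal Python sketch of Definitions 2.1 and 2.2 follows, under the
simplifying assumption that formulas are strings and that the undercut test
is a literal match between a conclusion and the negation of a support element
(a real implementation would need a theorem prover for ⊢ and for ≡):

from dataclasses import dataclass

def neg(p):
    # String-level negation: "p" <-> "~p".
    return p[1:] if p.startswith("~") else "~" + p

@dataclass(frozen=True)
class Argument:
    support: frozenset   # S: assumed consistent and minimal, with S entailing p
    conclusion: str      # p

def undercuts(a1, a2):
    # A1 undercuts A2 iff Conclusion(A1) is the negation of some s in Support(A2).
    return any(neg(s) == a1.conclusion for s in a2.support)

a = Argument(frozenset({"p", "p -> q"}), "q")
b = Argument(frozenset({"r", "r -> ~p"}), "~p")
print(undercuts(b, a))  # True: b concludes ~p, and p is in a's support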
To capture the fact that some beliefs are more strongly
held than others, we assume that any set of beliefs has a
preference order over it. We consider all information
available to an agent, Δ, to be stratified into non-overlapping
subsets Δ1, . . . , Δn such that beliefs in Δi are all equally
preferred and are preferred over elements in Δj where i > j.
The preference level of a nonempty subset S ⊂ Δ, where
different elements s ∈ S may belong to different layers Δi,
is valued at the lowest numbered layer which has a member
in S and is referred to as level(S). In other words, S is only
as strong as its weakest member. Note that the strength of
a belief as used in this context is a separate concept from
the notion of support discussed earlier.
Definition 2.3. Let A1 and A2 be arguments in A(Δ).
A1 is preferred to A2 according to Pref, written A1 ≻Pref A2, iff
level(Support(A1)) > level(Support(A2)). If A1 is
preferred to A2, we say that A1 is stronger than A2.
We can now define the argumentation system we will use:
Definition 2.4. An argumentation system is a triple
⟨A(Δ), Undercut, Pref⟩ such that:
• A(Δ) is a set of the arguments built from Δ,
• Undercut is a binary relation representing the defeat
relationship between arguments, Undercut ⊆ A(Δ) ×
A(Δ), and
• Pref is a pre-ordering on A(Δ) × A(Δ).
The preference order makes it possible to distinguish
different types of relations between arguments:
Definition 2.5. Let A1, A2 be two arguments of A(Δ).
• If A2 undercuts A1 then A1 defends itself against A2
iff A1 ≻Pref A2. Otherwise, A1 does not defend itself.
• A set of arguments A defends A1 iff for every A2 that
undercuts A1, where A1 does not defend itself against
A2, there is some A3 ∈ A such that A3 undercuts
A2 and A2 does not defend itself against A3.
We write AUndercut,Pref to denote the set of all non-undercut
arguments and arguments defending themselves against all
their undercutting arguments. The set A(Δ) of acceptable
arguments of the argumentation system ⟨A(Δ), Undercut, Pref⟩
is [1] the least fixpoint of a function F defined, for A ⊆ A(Δ), by:
F(A) = {(S, p) ∈ A(Δ) | (S, p) is defended by A}
Definition 2.6. The set of acceptable arguments for an
argumentation system ⟨A(Δ), Undercut, Pref⟩ is recursively
defined as:
A(Δ) = ⋃i≥0 Fi(∅) = AUndercut,Pref ∪ [⋃i≥1 Fi(AUndercut,Pref)]
An argument is acceptable if it is a member of the acceptable
set, and a proposition is acceptable if it is the conclusion of
an acceptable argument.
An acceptable argument is one which is, in some sense,
proven since all the arguments which might undermine it
are themselves undermined.
Definition 2.7. If there is an acceptable argument for a
proposition p, then the status of p is accepted, while if there
is not an acceptable argument for p, the status of p is not
accepted.
Argument A is said to affect the status of another argument
A′ if changing the status of A will change the status of A′.
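The fixpoint construction of Definition 2.6 can be sketched directly. In the
toy Python below, arguments are opaque names, and the undercut pairs and
preference levels are given up front rather than derived from supports (an
assumption that keeps the sketch self-contained); the loop iterates F from
the empty set until nothing changes:

undercut = {("B", "A"), ("C", "B")}   # B undercuts A; C undercuts B
level = {"A": 1, "B": 2, "C": 2}      # stands for level(Support(.)); higher is stronger

def defends_itself(a1, a2):
    # Definition 2.5: A1 defends itself against A2 iff A1 is stronger.
    return level[a1] > level[a2]

def defended_by(a, A):
    # True iff every undercutter of `a` that `a` cannot answer itself is
    # undercut by some member of A against which it cannot defend itself.
    for a2, target in undercut:
        if target == a and not defends_itself(a, a2):
            if not any((a3, a2) in undercut and not defends_itself(a2, a3)
                       for a3 in A):
                return False
    return True

args, acceptable = {"A", "B", "C"}, set()
while True:                            # least fixpoint of F, starting from the empty set
    new = {a for a in args if defended_by(a, acceptable)}
    if new == acceptable:
        break
    acceptable = new
print(acceptable)  # {'A', 'C'}: C is unattacked, and C reinstates A by undercutting B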
3. DIALOGUES
Systems like those described in [2, 18], lay down sets of
locutions that agents can make to put forward propositions
and the arguments that support them, and protocols that
define precisely which locutions can be made at which points
in the dialogue. We are not concerned with such a level
of detail here. Instead we are interested in the interplay
between arguments that agents put forth. As a result, we
will consider only that agents are allowed to put forward
arguments. We do not discuss the detail of the mechanism
that is used to put these arguments forward - we just
assume that arguments of the form (S, p) are inserted into
an agent"s commitment store where they are then visible to
other agents.
We then have a typical definition of a dialogue:
Definition 3.1. A dialogue D is a sequence of moves:
m1, m2, . . . , mn.
A given move mi is a pair ⟨α, Ai⟩ where Ai is an argument
that α places into its commitment store CSα.
Moves in an argumentation-based dialogue typically attack
moves that have been made previously. While, in general,
a dialogue can include moves that undercut several
arguments, in the remainder of this paper, we will only consider
dialogues that put forward moves that undercut at most
one argument. For now we place no additional constraints
on the moves that make up a dialogue. Later we will see
how different restrictions on moves lead to different kinds of
dialogue.
The sequence of arguments put forward in the dialogue
is determined by the agents who are taking part in the
dialogue, but they are usually not completely free to choose
what arguments they make. As indicated earlier, their choice
is typically limited by a protocol. If we write the sequence
of n moves m1, m2, . . . , mn as mn, and denote the empty
sequence as m0, then we can define a protocol in the following
way:
Definition 3.2. A protocol P is a function on a sequence
of moves mi in a dialogue D that, for all i ≥ 0, identifies
a set of possible moves Mi+1 from which the mi+1th move
may be drawn:
P : mi → Mi+1
In other words, for our purposes here, at every point in
a dialogue, a protocol determines a set of possible moves
that agents may make as part of the dialogue. If a dialogue
D always picks its moves m from the set M identified by
protocol P, then D is said to conform to P.
Even if a dialogue conforms to a protocol, it is typically
the case that the agent engaging in the dialogue has to make
a choice of move - it has to choose which of the moves in M
to make. This exercise of choice is what we refer to as an
agent's use of rhetoric (in its oratorical sense of influencing
the thought and conduct of an audience). Some of our
results will give a sense of how much scope an agent has to
exercise rhetoric under different protocols.
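A protocol in the sense of Definition 3.2 is simply a function from the
dialogue so far to the set of permitted next moves. A minimal, runnable
Python sketch follows; the no-repetition rule is a toy placeholder of our
own, standing in for the relevance-based protocols of Section 4:

def no_repeat_protocol(moves_so_far, universe=frozenset({"a1", "a2", "a3"})):
    # P : m^i -> M_{i+1}; here, any move not yet made is permitted.
    return universe - set(moves_so_far)

def conforms(dialogue, protocol):
    # D conforms to P iff each move is drawn from the permitted set.
    return all(dialogue[i] in protocol(dialogue[:i]) for i in range(len(dialogue)))

print(conforms(["a1", "a2"], no_repeat_protocol))  # True
print(conforms(["a1", "a1"], no_repeat_protocol))  # False: repetition forbidden

Rhetoric, in the sense used here, is the agent's choice of one element from
the permitted set.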
As arguments are placed into commitment stores, and
hence become public, agents can determine the relationships
between them. In general, after several moves in a
dialogue, some arguments will undercut others. We will denote
the set of arguments {A1, A2, . . . , Aj} asserted after moves
m1, m2, . . . , mj of a dialogue to be Aj - the relationship of
the arguments in Aj can be described as an argumentation
graph, similar to those described in, for example, [3, 4, 9]:
Definition 3.3. An argumentation graph AG over a set
of arguments A is a directed graph (V, E) such that every
vertex v, v ∈ V denotes one argument A ∈ A, every
argument A is denoted by one vertex v, and every directed edge
e ∈ E from v to v′ denotes that v undercuts v′.
We will use the term argument graph as a synonym for
argumentation graph.
Note that we do not require that the argumentation graph
is connected. In other words the notion of an argumentation
graph allows for the representation of arguments that do
not relate, by undercutting or being undercut, to any other
arguments (we will come back to this point very shortly).
We adapt some standard graph theoretic notions in order
to describe various aspects of the argumentation graph. If
there is an edge e from vertex v to vertex v′, then v is said
to be the parent of v′ and v′ is said to be the child of v.
In a reversal of the usual notion, we define a root of an
argumentation graph1
as follows:
Definition 3.4. A root of an argumentation graph AG =
(V, E) is a node v ∈ V that has no children.
Thus a root of a graph is a node to which directed edges
may be connected, but from which no directed edges
connect to other nodes. Thus a root is a node representing an
1 Note that we talk of a root rather than the root - as defined,
an argumentation graph need not be a tree.
v v"
Figure 1: An example argument graph
argument that is undercut, but which itself does no
undercutting. Similarly:
Definition 3.5. A leaf of an argumentation graph AG =
(V, E) is a node v ∈ V that has no parents.
Thus a leaf in an argumentation graph represents an
argument that undercuts another argument, but is itself not
undercut. Thus in Figure 1, v is a root, and v′ is a leaf. The
reason for the reversal of the usual notions of root and leaf
is that, as we shall see, we will consider dialogues to
construct argumentation graphs from the roots (in our sense)
to the leaves. The reversal of the terminology means that it
matches the natural process of tree construction.
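A minimal Python sketch of Definitions 3.3-3.5, with the reversed terminology
made explicit (an edge from v to v′ means v undercuts v′; a root has no
children, a leaf has no parents; the node names below are illustrative):

edges = [("v2", "v1"), ("v3", "v1")]   # v2 and v3 both undercut v1
nodes = {"v1", "v2", "v3"}

children = {v: {c for p, c in edges if p == v} for v in nodes}
parents  = {v: {p for p, c in edges if c == v} for v in nodes}

roots  = {v for v in nodes if not children[v]}   # {'v1'}: undercut, undercuts nothing
leaves = {v for v in nodes if not parents[v]}    # {'v2', 'v3'}: not undercut by anything
print(roots, leaves)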
Since, as described above, argumentation graphs are
allowed to be not connected (in the usual graph theory sense),
it is helpful to distinguish nodes that are connected to other
nodes, in particular to the root of the tree. We say that node
v is connected to node v if and only if there is a path from
v to v . Since edges represent undercut relations, the notion
of connectedness between nodes captures the influence that
one argument may have on another:
Proposition 3.1. Given an argumentation graph AG, if
there is any argument A, denoted by node v, that affects the
status of another argument A′, denoted by v′, then v is
connected to v′. The converse does not hold.
Proof. Given Definitions 2.5 and 2.6, the only ways in
which A can affect the status of A′ is if A either undercuts
A′, or if A undercuts some argument A″ that undercuts A′,
or if A undercuts some A″ that undercuts some A‴ that
undercuts A′, and so on. In all such cases, a sequence of
undercut relations relates the two arguments, and if they are
both in an argumentation graph, this means that they are
connected.
Since the notion of path ignores the direction of the
directed arcs, nodes v and v′ are connected whether the edge
between them runs from v to v′ or vice versa. Since A only
undercuts A′ if the edge runs from v to v′, we cannot infer
that A will affect the status of A′ from information about
whether or not they are connected.
The reason that we need the concept of the argumentation
graph is that the properties of the argumentation graph tell
us something about the set of arguments A the graph
represents. When that set of arguments is constructed through a
dialogue, there is a relationship between the structure of the
argumentation graph and the protocol that governs the
dialogue. It is the extent of the relationship between structure
and protocol that is the main subject of this paper. To study
this relationship, we need to establish a correspondence
between a dialogue and an argumentation graph. Given the
definitions we have so far, this is simple:
Definition 3.6. A dialogue D, consisting of a sequence
of moves mn, and an argument graph AG = (V, E)
correspond to one another iff ∀mi ∈ mn, the argument Ai that
is advanced at move mi is represented by exactly one node
v ∈ V, and ∀v ∈ V, v represents exactly one argument Ai
that has been advanced by a move mi ∈ mn.
Thus a dialogue corresponds to an argumentation graph if
and only if every argument made in the dialogue corresponds
to a node in the graph, and every node in the graph
corresponds to an argument made in the dialogue. This
one-to-one correspondence allows us to consider each node v in the
graph to have an index i which is the index of the move in
the dialogue that put forward the argument which that node
represents. Thus we can, for example, refer to the third
node in the argumentation graph, meaning the node that
represents the argument put forward in the third move of
the dialogue.
4. RELEVANCE
Most work on dialogues is concerned with what we might
call coherent dialogues, that is dialogues in which the
participants are, as in the work of Walton and Krabbe [27],
focused on resolving some question through the dialogue2.
To capture this coherence, it seems we need a notion of
relevance to constrain the statements made by agents. Here
we study three notions of relevance:
Definition 4.1. Consider a dialogue D, consisting of a
sequence of moves mi, with a corresponding argument graph
AG. The move mi+1, i ≥ 1, is said to be relevant if one or
more of the following hold:
R1 Making mi+1 will change the status of the argument
denoted by the first node of AG.
R2 Making mi+1 will add a node vi+1 that is connected to
the first node of AG.
R3 Making mi+1 will add a node vi+1 that is connected to
the last node to be added to AG.
R2-relevance is the form of relevance defined by [3] in their
study of strategic and tactical reasoning3. R1-relevance was
suggested by the notion used in [15], and though it differs
somewhat from that suggested there, we believe it captures
the essence of its predecessor.
Note that we only define relevance for the second move
of the dialogue onwards because the first move is taken to
identify the subject of the dialogue, that is, the central
question that the dialogue is intended to answer, and hence it
must be relevant to the dialogue, no matter what it is. In
assuming this, we focus our attention on the same kind of
dialogues as [18].
We can think of relevance as enforcing a form of
parsimony on a dialogue - it prevents agents from making
statements that do not bear on the current state of the dialogue.
This promotes efficiency, in the sense of limiting the
number of moves in the dialogue, and, as in [15], prevents agents
revealing information that they might better keep hidden.
Another form of parsimony is to insist that agents are not
allowed to put forward arguments that will be undercut by
arguments that have already been made during the dialogue.
We therefore distinguish such arguments.
2 See [11, 12] for examples of dialogues where this is not the case.
3 We consider such reasoning sub-types of rhetoric.
Definition 4.2. Consider a dialogue D, consisting of a
sequence of moves mi, with a corresponding argument graph
AG. The move mi+1 and the argument it puts forward,
Ai+1, are both said to be pre-empted, if Ai+1 is undercut by
some A ∈ Ai.
We use the term pre-empted because if such an argument
is put forward, it can seem as though another agent
anticipated the argument being made, and already made an
argument that would render it useless. In the rest of this
paper, we will only deal with protocols that permit moves
that are relevant, in any of the senses introduced above, and
are not allowed to be pre-empted. We call such protocols
basic protocols, and dialogues carried out under such protocols
basic dialogues.
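A minimal Python sketch of the relevance tests of Definition 4.1 and the
pre-emption test of Definition 4.2 follows. Moves are modelled as
(new_node, target) pairs, the new argument undercutting the target; R1
additionally needs the status computation of Section 2 and is left out.
Following the proof of Proposition 4.3, R3 is read here as the new node
attaching directly to the last node added:

def r3_relevant(move, last_node):
    # R3: the new argument undercuts the argument of the previous move.
    new_node, target = move
    return target == last_node

def connected(a, b, edges):
    # Path test that ignores edge direction, as in the paper's notion.
    seen, frontier = set(), {a}
    while frontier:
        n = frontier.pop()
        if n == b:
            return True
        seen.add(n)
        frontier |= {x for u, v in edges for x in (u, v)
                     if n in (u, v) and x not in seen}
    return False

def r2_relevant(move, edges, root):
    # R2: after the move, the new node is connected to the first node of AG.
    return connected(move[0], root, edges + [move])

def preempted(new_arg, prior_args, undercuts):
    # Definition 4.2: some argument already advanced undercuts the new one.
    return any(undercuts(a, new_arg) for a in prior_args)

edges = [("v2", "v1")]                           # v2 undercuts v1, the root
print(r3_relevant(("v3", "v2"), "v2"))           # True: attacks the previous move
print(r2_relevant(("v3", "v1"), edges, "v1"))    # True: attaches to the root
attacks = {("a2", "a3")}                         # toy undercut relation on names
print(preempted("a3", ["a1", "a2"], lambda x, y: (x, y) in attacks))  # True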
The argument graph of a basic dialogue is somewhat
restricted.
Proposition 4.1. Consider a basic dialogue D. The
argumentation graph AG that corresponds to D is a tree with
a single root.
Proof. Recall that Definition 3.3 requires only that AG
be a directed graph. To show that it is a tree, we have to
show that it is acyclic and connected.
That the graph is connected follows from the construction
of the graph under a protocol that enforces relevance. If the
notion of relevance is R3, each move adds a node that is
connected to the previous node. If the notion of relevance is
R2, then every move adds a node that is connected to the
root, and thus is connected to some node in the graph. If the
notion of relevance is R1, then every move has to change the
status of the argument denoted by the root. Proposition 3.1
tells us that to affect the status of an argument A , the node
v representing the argument A that is effecting the change
has to be connected to v , the node representing A , and so
it follows that every new node added as a result of an
R1-relevant move will be connected to the argumentation graph.
Thus AG is connected.
Since a basic dialogue does not allow moves that are
preempted, every edge that is added during construction is
directed from the node that is added to one already in the graph
(thus denoting that the argument A denoted by the added
node, v, undercuts the argument A′ denoted by the node to
which the connection is made, v′, rather than the other way
around). Since every edge that is added is directed from the
new node to the rest of the graph, there can be no cycles.
Thus AG is a tree.
To show that AG has a single root, consider its
construction from the initial node. After m1 the graph has one node,
v1 that is both a root and a leaf. After m2, the graph is two
nodes connected by an edge, and v1 is now a root and not a
leaf. v2 is a leaf and not a root. However the third node is
added, the argument earlier in this proof demonstrates that
there will be a directed edge from it to some other node,
making it a leaf. Thus v1 will always be the only root. The ruling
out of pre-empted moves means that v1 will never cease to
be a root, and so the argumentation graph will always have
one root.
Since every argumentation graph constructed by a basic
dialogue is a tree with a single root, this means that the first
node of every argumentation graph is the root.
Although these results are straightforward to obtain, they
allow us to show how the notions of relevance are related.
Proposition 4.2. Consider a basic dialogue D,
consisting of a sequence of moves mi, with a corresponding
argument graph AG.
1. Every move mi+1 that is R1-relevant is R2-relevant.
The converse does not hold.
2. Every move mi+1 that is R3-relevant is R2-relevant.
The converse does not hold.
3. Not every move mi+1 that is R1-relevant is R3-relevant,
and not every move mi+1 that is R3-relevant is R1-relevant.
Proof. For 1, consider how move mi+1 can satisfy R1.
Proposition 3.1 tells us that if Ai+1 can change the status
of the argument denoted by the root v1 (which, as observed
above, is the first node) of AG, then vi+1 must be connected
to the root. This is precisely what is required to satisfy R2,
and the relationship is proved to hold.
To see that the converse does not hold, we have to consider
what it takes to change the status of r (since Proposition 3.1
tells us that connectedness is not enough to ensure a change
of status - if it did, R1 and R2 relevance would coincide).
For mi+1 to change the status of the root, it will have to (1)
make the argument A represented by r either unacceptable,
if it were acceptable before the move, or (2) acceptable if it
were unacceptable before the move. Given the definition of
acceptability, it can achieve (1) either by directly
undercutting the argument represented by r, in which case vi+1 will
be directly connected to r by some edge, or by undercutting
some argument A′ that is part of the set of non-undercut
arguments defending A. In the latter case, vi+1 will be
directly connected to the node representing A′ and by
Proposition 4.1 to r. To achieve (2), vi+1 will have to undercut
an argument A′ that is either currently undercutting A, or
is undercutting an argument that would otherwise defend A.
Now, further consider that mi+1 puts forward an argument
Ai+1 that undercuts the argument denoted by some node v′,
but this latter argument defends itself against Ai+1. In such
a case, the set of acceptable arguments will not change, and
so the status of A will not change. Thus a move that is
R2-relevant need not be R1-relevant.
For 2, consider that mi+1 can satisfy R3 simply by adding
a node that is connected to vi, the last node to be added
to AG. By Proposition 4.1, it is connected to r and so is
R2-relevant.
To see that the converse does not hold, consider that an
R2-relevant move can connect to any node in AG.
The first part of 3 follows by a similar argument to that we
just used - an R1-relevant move does not have to connect to
vi, just to some v that is part of the graph - and the second
part follows since a move that is R3-relevant may introduce
an argument Ai+1 that undercuts the argument Ai put
forward by the previous move (and so vi+1 is connected to vi),
but finds that Ai defends itself against Ai+1, preventing a
change of status at the root.
What is most interesting is not so much the results but
why they hold, since this reveals some aspects of the
interplay between relevance and the structure of argument
graphs. For example, to restate a case from the proof of
Proposition 4.2, a move that is R3-relevant by definition has
to add a node to the argument graph that is connected to the
last node that was added. Since a move that is R2-relevant
can add a node that connects anywhere on an argument
graph, any move that is R3-relevant will be R2-relevant,
but the converse does not hold.
It turns out that we can exploit the interplay between
structure and relevance that Propositions 4.1 and 4.2 have
started to illuminate to establish relationships between the
protocols that govern dialogues and the argument graphs
constructed during such dialogues. To do this we need to
define protocols in such a way that they refer to the structure
of the graph. We have:
Definition 4.3. A protocol is single-path if all dialogues
that conform to it construct argument graphs that have only
one branch.
Proposition 4.3. A basic protocol P is single-path if, for
all i, the set of permitted moves Mi at move i are all
R3-relevant. The converse does not hold.
Proof. R3-relevance requires that every node added to
the argument graph be connected to the previous node.
Starting from the first node this recursively constructs a tree with
just one branch, and the relationship holds. The converse
does not hold because even if one or more moves in the
protocol are R1- or R2-relevant, it may be the case that, because
of an agent"s rhetorical choice or because of its knowledge,
every argument that is chosen to be put forward will
undercut the previous argument and so the argument graph is a
one-branch tree.
Looking for more complex kinds of protocol that construct
more complex kinds of argument graph, it is an obvious
move to turn to:
Definition 4.4. A basic protocol is multi-path if all
dialogues that conform to it can construct argument graphs that
are trees.
But, on reflection, since any graph with only one branch is
also a tree:
Proposition 4.4. Any single-path protocol is an instance
of a multi-path protocol.
and, furthermore:
Proposition 4.5. Any basic protocol P is multi-path.
Proof. Immediate from Proposition 4.1
So the notion of a multi-path protocol does not have much
traction. As a result we distinguish multi-path protocols
that permit dialogues that can construct trees that have
more than one branch as bushy protocols. We then have:
Proposition 4.6. A basic protocol P is bushy if, for some
i, the set of permitted moves Mi at move i are all R1- or
R2-relevant.
Proof. From Proposition 4.3 we know that if all moves
are R3-relevant then we"ll get a tree with one branch, and
from Proposition 4.1 we know that all basic protocols will
build an argument graph that is a tree, so providing we
exclude R3-relevant moves, we will get protocols that can build
multi-branch trees.
Of course, since, by Proposition 4.2, any move that is
R3-relevant is R2-relevant and can quite possibly be R1-relevant
(all that Proposition 4.2 tells us is that there is no
guarantee that it will be), all that Proposition 4.6 tells us is that
dialogues that conform to bushy protocols may have more
than one branch. All we can do is to identify a bound on
the number of branches:
Proposition 4.7. Consider a basic dialogue D that
includes m moves that are not R3-relevant, and has a
corresponding argumentation graph AG. The number of branches
in AG is less than or equal to m + 1.
Proof. Since it must connect a node to the last node
added to AG, an R3-relevant move can only extend an
existing branch. Since they do not have the same restriction,
R1 and R2-relevant moves may create a new branch by
connecting to a node that is not the last node added. Every
such move could create a new branch, and if they do, we
will have m branches. If there were R3-relevant moves
before any of these new-branch-creating moves, then these m
branches are in addition to the initial branch created by the
R3-relevant moves, and we have a maximum of m + 1
possible branches.
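A small sanity check of the bound in Proposition 4.7, using the toy
(new_node, target) encoding of the earlier sketches (an assumption of this
sketch; branches are counted as leaves of the tree):

moves = [("v2", "v1"), ("v3", "v2"), ("v4", "v1"), ("v5", "v4")]
# v4 attacks v1 rather than the previous node v3, so one move is not R3-relevant.
m = sum(1 for i, (_, tgt) in enumerate(moves)
        if i > 0 and tgt != moves[i - 1][0])
targets = {tgt for _, tgt in moves}
leaves = {src for src, _ in moves} - targets        # nodes that are not undercut
print(m, len(leaves), len(leaves) <= m + 1)         # 1 2 True: at most m + 1 branches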
We distinguish bushy protocols from multi-path protocols,
and hence R1- and R2-relevance from R3-relevance, because
of the kinds of dialogue that R3-relevance enforces. In a
dialogue in which all moves must be R3-relevant, the
argumentation graph has a single branch - the dialogue consists of
a sequence of arguments each of which undercuts the
previous one and the last move to be made is the one that settles
the dialogue. This, as we will see next, means that such a
dialogue only allows a subset of all the moves that would
otherwise be possible.
5. COMPLETENESS
The above discussion of the difference between dialogues
carried out under single-path and bushy protocols brings us
to the consideration of what [18] called predeterminism,
but which we now prefer to describe using the term
completeness. The idea of predeterminism, as described in [18],
captures the notion that, under some circumstances, the
result of a dialogue can be established without actually having
the dialogue - the agents have sufficiently little room for
rhetorical manoeuvre that, were one able to see the contents
of all the Σi of all the αi ∈ A, one would be able to
identify the outcome of any dialogue on a given subject4. We
develop this idea by considering how the argument graphs
constructed by dialogues under different protocols compare
to benchmark complete dialogues. We start by developing
ideas of what complete might mean. One reasonable
definition is that:
Definition 5.1. A basic dialogue D between the set of
agents A with a corresponding argumentation graph AG is
topic-complete if no agent can construct an argument A that
undercuts any argument A′ represented by a node in AG.
The argumentation graph constructed by a topic-complete
dialogue is called a topic-complete argumentation graph and
is denoted AG(D)T .
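A minimal Python sketch of the test in Definition 5.1 follows. It assumes
each agent exposes a candidate_arguments() method returning the arguments
it can construct; that interface is an assumption of this sketch, not
something the definition prescribes:

class Agent:
    def __init__(self, can_make):
        self.can_make = can_make
    def candidate_arguments(self):
        return self.can_make

def topic_complete(graph_args, agents, undercuts):
    # D is topic-complete iff no agent can construct an argument that
    # undercuts any argument represented in AG.
    return not any(undercuts(c, a)
                   for ag in agents
                   for c in ag.candidate_arguments()
                   for a in graph_args)

attacks = {("b", "a")}                       # toy undercut relation on names
agents = [Agent(["b"]), Agent([])]
print(topic_complete(["a"], agents, lambda x, y: (x, y) in attacks))
# False: the first agent can still attack "a", so the dialogue is not topic-complete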
4 Assuming that the Σi do not change during the dialogue, which is
the usual assumption in this kind of dialogue.
A dialogue is topic-complete when no agent can add
anything that is directly connected to the subject of the
dialogue. Some protocols will prevent agents from making
moves even though the dialogue is not topic-complete. To
distinguish such cases we have:
Definition 5.2. A basic dialogue D between the set of
agents A with a corresponding argumentation graph AG is
protocol-complete under a protocol P if no agent can make
a move that adds a node to the argumentation graph that is
permitted by P.
The argumentation graph constructed by a protocol-complete
dialogue is called a protocol-complete argumentation graph
and is denoted AG(D)P . Clearly:
Proposition 5.1. Any dialogue D under a basic protocol
P is protocol-complete if it is topic-complete. The converse
does not hold in general.
Proof. If D is topic-complete, no agent can make a move
that will extend the argumentation graph. This means that
no agent can make a move that is permitted by a basic
protocol, and so D is also protocol-complete.
The converse does not hold since some basic dialogues
(under a protocol that only permits R3-relevant moves, for
example) will not permit certain moves (like the addition of
a node that connects to the root of the argumentation graph
after more than two moves) that would be allowed in a
topic-complete dialogue.
Corollary 5.1. For a basic dialogue D, AG(D)P is a
sub-graph of AG(D)T .
Obviously, from the definition of a sub-graph, the converse
of Corollary 5.1 does not hold in general.
The important distinction between topic- and
protocol-completeness is that the former is determined purely by the
state of the dialogue - as captured by the argumentation
graph - and is thus independent of the protocol, while the
latter is determined entirely by the protocol. Any time that
a dialogue ends in a state of protocol-completeness rather
than topic-completeness, it is ending when agents still have
things to say but can't because the protocol won't allow
them to.
With these definitions of completeness, our task is to
relate topic-completeness - the property that ensures that
agents can say everything that they have to say in a dialogue
that is, in some sense, important - to the notions of
relevance we have developed - which determine what agents
are allowed to say. When we need very specific conditions to
make protocol-complete dialogues topic-complete, it means
that agents have lots of room for rhetorical manoeuvre when
those conditions are not in force. That is, there are many
ways they can bring dialogues to a close before everything
that can be said has been said. Where few conditions are
required, or conditions are absent, then dialogues between
agents with the same knowledge will always play out the
same way, and rhetoric has no place. We have:
Proposition 5.2. A protocol-complete basic dialogue D
under a protocol which only allows R3-relevant moves will
be topic-complete only when AG(D)T has a single branch
in which the nodes are labelled in increasing order from the
root.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1011
Proof. Given what we know about R3-relevance, the
condition on AG(D)P having a single branch is obvious. This
is not a sufficient condition on its own because certain
protocols may prevent - through additional restrictions, like
strict turn-taking in a multi-party dialogue - all the nodes
in AG(D)T , which is not subject to such restrictions, being
added to the graph. Only when AG(D)T includes the nodes
in the exact order that the corresponding arguments are put
forward is it necessary that a topic-complete argumentation
graph be constructed.
Given Proposition 5.1, these are the conditions under which
dialogues conducted under the notion of R3-relevance will
always be predetermined, and given how restrictive the
conditions are, such dialogues seem to have plenty of room for
rhetoric to play a part.
To find similar conditions for dialogues composed of
R1- and R2-relevant moves, we first need to distinguish between
them. We can do this in terms of the structure of the
argumentation graph:
Proposition 5.3. Consider a basic dialogue D, with
argumentation graph AG which has root r denoting an
argument A. If argument A', denoted by node v', is an
R2-relevant move m, then m is not R1-relevant if and only if:
1. there are two nodes v'' and v''' on the path between v'
and r, and the argument denoted by v'' defends itself
against the argument denoted by v'''; or
2. there is an argument A'', denoted by node v'', that
affects the status of A, and the path from v'' to r has one
or more nodes in common with the path from v' to r.
Proof. For the first condition, consider that since AG is
a tree, v' is connected to r. Thus there is a series of undercut
relations between A' and A, and this corresponds to a path
through AG. If this path is the only branch in the tree, then
A' will affect the status of A unless the chain of effect
is broken by an undercut that can't change the status of the
undercut argument because the latter defends itself.
For the second condition, as for the first, the only way
that A' cannot affect the status of A is if something is
blocking its influence. If this is not due to defending against,
it must be because there is some node u on the path that
represents an argument whose status is fixed somehow, and
that must mean that there is another chain of undercut
relations, another branch of the tree, that is incident at u. Since
this second branch denotes another chain of arguments, and
these affect the status of the argument denoted by u, they
must also affect the status of A. Any of these are the A'' in
the condition.
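The two conditions of Proposition 5.3 can be phrased operationally. The sketch below uses our own encoding (none of it is prescribed by the paper): the tree is a `parent` map in which an edge child -> parent means the child undercuts the parent, `strength` is a numeric stand-in for the underlying preference ordering, and `children` lists each node's undercutters.

```python
def path_to_root(v, parent):
    path = [v]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path  # v, ..., root r

def blocked_by_self_defence(v, parent, strength):
    """Condition 1: some undercut argument on the path from v to r
    defends itself (here: it is at least as strong as its undercutter)."""
    p = path_to_root(v, parent)
    return any(strength[p[i + 1]] >= strength[p[i]]
               for i in range(len(p) - 1))

def blocked_by_other_branch(v, parent, children):
    """Condition 2: another branch of the tree is incident on the path
    from v to r, i.e. some path node has an undercutter off the path."""
    on_path = set(path_to_root(v, parent))
    return any(c not in on_path
               for u in path_to_root(v, parent)[1:]
               for c in children.get(u, ()))
```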
So an R2-relevant move m is not R1-relevant if either its
effect is blocked because an argument upstream is not strong
enough, or because there is another line of argument that
is currently determining the status of the argument at the
root. This, in turn, means that if the effect is not due to
defending against, then there is an alternative move that
is R1-relevant - a move that undercuts A'' in the second
condition above.5 We can now show:
5 Though whether the agent in question can make such a move is another question.
Proposition 5.4. A protocol-complete basic dialogue D
will always be topic-complete under a protocol which only
includes R2-relevant moves and allows every R2-relevant move
to be made.
The restriction on R2-relevant rules is exactly that for
topic-completeness, so a dialogue that has only R2-relevant moves
will continue until every argument that any agent can make
has been put forward. Given this, and what we revealed
about R1-relevance in Proposition 5.3, we can see that:
Proposition 5.5. A protocol-complete basic dialogue D
under a protocol which only includes R1-relevant moves will
be topic-complete if AG(D)T :
1. includes no path with adjacent nodes v, denoting A,
and v', denoting A', such that A' undercuts A and A
is stronger than A'; and
2. is such that the nodes in every branch have consecutive
indices and no node with degree greater than two is an
odd number of arcs from a leaf node.
Proof. The first condition rules out the first condition
in Proposition 5.3, and the second deals with the situation
that leads to the second condition in Proposition 5.3. The
second condition ensures that each branch is constructed in
full before any new branch is added, and when a new branch
is added, the argument that is undercut as part of the
addition will be acceptable, and so the addition will change the
status of the argument denoted by that node, and hence the
root. With these conditions, every move required to
construct AG(D)T will be permitted and so the dialogue will be
topic-complete when every move has been completed.
The second part of this result only identifies one possible
way to ensure that the second condition in Proposition 5.3
is met, so the converse of this result does not hold.
However, what we have is sufficient to answer the
question about predetermination that we started with. For
dialogues to be predetermined, every move that is R2-relevant
must be made. In such cases every dialogue is topic-complete.
If we do not require that all R2-relevant moves are
made, then there is some room for rhetoric - the way in
which alternative lines of argument are presented becomes
an issue. If moves are forced to be R3-relevant, then there
is considerable room for rhetorical play.
6. SUMMARY
This paper has studied the different ideas of relevance in
argumentation-based dialogue, identifying the relationship
between these ideas, and showing how they can impact the
way that agents choose moves in a
dialogue - what some authors have called the strategy and
tactics of a dialogue. This extends existing work on
relevance, such as [3, 15], by showing how different notions of
relevance can have an effect on the outcome of a dialogue,
in particular when they render the outcome predetermined.
This connection extends the work of [18], which considered
dialogue outcome but stopped short of identifying the
conditions under which it is predetermined.
There are two ways we are currently trying to extend this
work, both of which will generalise the results and extend
their applicability. First, we want to relax the restrictions that
we have imposed: the exclusion of moves that attack
several arguments (without which the argument graph can be
multiply-connected) and the exclusion of pre-empted moves,
without which the argument graph can have cycles.
Second, we want to extend the ideas of relevance to cope with
moves that do not only add undercutting arguments, but
also supporting arguments, thus taking account of bipolar
argumentation frameworks [5].
Acknowledgments
The authors are grateful for financial support received from
the EC, through project IST-FP6-002307, and from the NSF
under grants REC-02-19347 and NSF IIS-0329037. They are
also grateful to Peter Stone for a question, now several years
old, which this paper has finally answered.
7. REFERENCES
[1] L. Amgoud and C. Cayrol. On the acceptability of
arguments in preference-based argumentation
framework. In Proceedings of the 14th Conference on
Uncertainty in Artificial Intelligence, pages 1-7, 1998.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments,
dialogue, and negotiation. In W. Horn, editor,
Proceedings of the Fourteenth European Conference on
Artificial Intelligence, pages 338-342, Berlin,
Germany, 2000. IOS Press.
[3] J. Bentahar, M. Mbarki, and B. Moulin. Strategic and
tactic reasoning for communicating agents. In
N. Maudet, I. Rahwan, and S. Parsons, editors,
Proceedings of the Third Workshop on Argumentation
in Multiagent Systems, Hakodate, Japan, 2006.
[4] P. Besnard and A. Hunter. A logic-based theory of
deductive arguments. Artificial Intelligence,
128:203-235, 2001.
[5] C. Cayrol, C. Devred, and M.-C. Lagasquie-Schiex.
Handling controversial arguments in bipolar
argumentation frameworks. In P. E. Dunne and
T. J. M. Bench-Capon, editors, Computational Models
of Argument: Proceedings of COMMA 2006, pages
261-272. IOS Press, 2006.
[6] B. Chaib-Draa and F. Dignum. Trends in agent
communication language. Computational Intelligence,
18(2):89-101, 2002.
[7] F. Dignum, B. Dunin-Kęplicz, and R. Verbrugge.
Agent theory for team formation by dialogue. In
C. Castelfranchi and Y. Lespérance, editors, Seventh
Workshop on Agent Theories, Architectures, and
Languages, pages 141-156, Boston, USA, 2000.
[8] P. M. Dung. On the acceptability of arguments and its
fundamental role in nonmonotonic reasoning, logic
programming and n-person games. Artificial
Intelligence, 77:321-357, 1995.
[9] P. M. Dung, R. A. Kowalski, and F. Toni. Dialectic
proof procedures for assumption-based, admissible
argumentation. Artificial Intelligence, 170(2):114-159,
2006.
[10] R. A. Flores and R. C. Kremer. To commit or not to
commit. Computational Intelligence, 18(2):120-173,
2002.
[11] D. M. Gabbay and J. Woods. More on
non-cooperation in Dialogue Logic. Logic Journal of
the IGPL, 9(2):321-339, 2001.
[12] D. M. Gabbay and J. Woods. Non-cooperation in
Dialogue Logic. Synthese, 127(1-2):161-186, 2001.
[13] C. L. Hamblin. Mathematical models of dialogue.
Theoria, 37:130-155, 1971.
[14] S. Kraus, K. Sycara, and A. Evenchik. Reaching
agreements through argumentation: a logical model
and implementation. Artificial Intelligence,
104(1-2):1-69, 1998.
[15] N. Oren, T. J. Norman, and A. Preece. Loose lips sink
ships: A heuristic for argumentation. In N. Maudet,
I. Rahwan, and S. Parsons, editors, Proceedings of the
Third Workshop on Argumentation in Multiagent
Systems, Hakodate, Japan, 2006.
[16] S. Parsons and N. R. Jennings. Negotiation through
argumentation - a preliminary report. In Proceedings
of Second International Conference on Multi-Agent
Systems, pages 267-274, 1996.
[17] S. Parsons, M. Wooldridge, and L. Amgoud. An
analysis of formal inter-agent dialogues. In 1st
International Conference on Autonomous Agents and
Multi-Agent Systems. ACM Press, 2002.
[18] S. Parsons, M. Wooldridge, and L. Amgoud. On the
outcomes of formal inter-agent dialogues. In 2nd
International Conference on Autonomous Agents and
Multi-Agent Systems. ACM Press, 2003.
[19] H. Prakken. On dialogue systems with speech acts,
arguments, and counterarguments. In Proceedings of
the Seventh European Workshop on Logic in Artificial
Intelligence, Berlin, Germany, 2000. Springer Verlag.
[20] H. Prakken. Relating protocols for dynamic dispute
with logics for defeasible argumentation. Synthese,
127:187-219, 2001.
[21] H. Prakken and G. Sartor. Modelling reasoning with
precedents in a formal dialogue game. Artificial
Intelligence and Law, 6:231-287, 1998.
[22] I. Rahwan, P. McBurney, and E. Sonenberg. Towards
a theory of negotiation strategy. In I. Rahwan,
P. Moraitis, and C. Reed, editors, Proceedings of the
1st International Workshop on Argumentation in
Multiagent Systems, New York, NY, 2004.
[23] C. Reed. Dialogue frames in agent communications. In
Y. Demazeau, editor, Proceedings of the Third
International Conference on Multi-Agent Systems,
pages 246-253. IEEE Press, 1998.
[24] M. Rovatsos, I. Rahwan, F. Fisher, and G. Weiss.
Adaptive strategies for practical argument-based
negotiation. In I. Rahwan, P. Moraitis, and C. Reed,
editors, Proceedings of the 1st International Workshop
on Argumentation in Multiagent Systems, New York,
NY, 2004.
[25] M. Schroeder, D. A. Plewe, and A. Raab. Ultima
ratio: should Hamlet kill Claudius? In Proceedings of
the 2nd International Conference on Autonomous
Agents, pages 467-468, 1998.
[26] K. Sycara. Argumentation: Planning other agents'
plans. In Proceedings of the Eleventh Joint Conference
on Artificial Intelligence, pages 517-523, 1989.
[27] D. N. Walton and E. C. W. Krabbe. Commitment in
Dialogue: Basic Concepts of Interpersonal Reasoning.
State University of New York Press, Albany, NY,
USA, 1995.
| node;status;relevance;graph;tree;leaf;dialogue;argument;argumentation;multiagent system |
train_I-75 | Hypotheses Refinement under Topological Communication Constraints | We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. We first show that (for a wide class of protocols that we shall define) the communication constraints induced by the topology will not prevent the convergence of the system, provided that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and argument exchange. As this assumption cannot be made in most situations though, we then set up an experimental framework aimed at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange. We study a critical situation involving a number of agents trying to escape from a burning building. The results reported here provide some insights regarding the design of optimal protocols for hypotheses refinement in this context. | 1. INTRODUCTION
We consider a multiagent system where each (distributed)
agent locally perceives its environment, and we assume that
some unexpected event occurs in that system. If each agent
computes only locally its favoured hypothesis, it is only
natural to assume that agents will seek to coordinate and
refine their hypotheses by confronting their observations with
other agents. If, in addition, the communication
opportunities are severely constrained (for instance, agents can
only communicate when they are close enough to some other
agent), and dynamically changing (for instance, agents may
change their locations), it becomes crucial to carefully
design protocols that will allow agents to converge to some
desired state of global consistency. In this paper we
exhibit some sufficient conditions on the system dynamics and
on the protocol/strategy structures that allow to guarantee
that property, and we experimentally study some contexts
where (some of) these assumptions are relaxed.
While problems of diagnosis are among the venerable
classics in the AI tradition, their multiagent counterparts have
much more recently attracted some attention. Roos and
colleagues [8, 9] in particular study a situation where a number
of distributed entities try to come up with a satisfying global
diagnosis of the whole system. They show in particular that
the number of messages required to establish this global
diagnosis is bound to be prohibitive, unless the communication
is enhanced with some suitable protocol. However, they do
not put any restrictions on agents' communication options,
and do not assume either that the system is dynamic.
The benefits of enhancing communication with supporting
information to make convergence to a desired global state
of a system more efficient have often been put forward in the
literature. This is for instance one of the main ideas
underlying the argumentation-based negotiation approach [7], where
the desired state is a compromise between agents with
conflicting preferences. Many of these works however make the
assumption that this approach is beneficial to start with,
and study the technical facets of the problem (or instead
emphasize other advantages of using argumentation).
Notable exceptions are the works of [3, 4, 2, 5], which studied in
contexts different from ours the efficiency of argumentation.
The rest of the paper is as follows. Section 2 specifies
the basic elements of our model, and Section 3 goes on to
presenting the different protocols and strategies used by the
agents to exchange hypotheses and observations. We put
special attention at clearly emphasizing the conditions on
the system dynamics and protocols/strategies that will be
exploited in the rest of the paper. Section 4 details one of
the main results of the paper, namely the fact that under the
aforementioned conditions, the constraints that we put on
the topology will not prevent the convergence of the system
towards global consistency, provided that no agent
ever gets completely lost forever in the system, and that
unlimited time is allowed for computation and argument
exchange. While the conditions on protocols and strategies are
fairly mild, it is also clear that these system requirements
look much more problematic, even frankly unrealistic in
critical situations where distributed approaches are precisely
advocated. To get a clearer picture of the situation induced
when time is a critical factor, we have set up an
experimental framework that we introduce and discuss in Section 5.
The critical situation involves a number of agents aiming
at escaping from a burning building. The results reported
here show that the effectiveness of argument exchange
crucially depends upon the nature of the building, and provide
some insights regarding the design of optimal protocol for
hypotheses refinement in this context.
2. BASIC NOTIONS
We start by defining the basic elements of our system.
Environment
Let O be the (potentially infinite) set of possible
observations. We assume the sensors of our agents to be perfect,
hence the observations to be certain. Let H be the set of
hypotheses, uncertain and revisable. Let Cons(h, O) be the
consistency relation, a binary relation between a hypothesis
h ∈ H and a set of observations O ⊆ O. In most cases, Cons
will refer to classical consistency relation, however, we may
overload its meaning and add some additional properties to
that relation (in which case we will mention it).
The environment may include some dynamics, and change
over the course of time. We define below sequences of time
points to deal with it:
Definition 1 (Sequence of time points). A
sequence of time points t1, t2, . . . , tn from t is an ordered
set of time points t1, t2, . . . , tn such that t1 ≥ t and
∀i ∈ [1, n − 1], ti+1 ≥ ti.
Agent
We take a system populated by n agents a1, . . . , an. Each
agent is defined as a tuple F, Oi, hi , where:
• F, the set of facts, common knowledge to all agents.
• Oi ∈ 2^O, the set of observations made by the agent
so far. We assume a perfect memory, hence this set
grows monotonically.
• hi ∈ H, the favourite hypothesis of the agent.
A key notion governing the formation of hypotheses is that
of consistency, defined below:
Definition 2 (Consistency). We say that:
• An agent is consistent (Cons(ai)) iff Cons(hi, Oi)
(that is, its hypothesis is consistent with its
observation set).
• An agent ai is consistent with a partner agent aj iff
Cons(ai) and Cons(hi, Oj) (that is, this agent is
consistent and its hypothesis can explain the observation
set of the other agent).
• Two agents ai and aj are mutually consistent
(MCons(ai, aj)) iff Cons(ai, aj) and Cons(aj, ai).
• A system is consistent iff ∀(i, j) ∈ [1, n]^2 it is the case
that MCons(ai, aj).
To ensure its consistency, each agent is equipped with an
abstract reasoning machinery that we shall call the
explanation function Eh. This (deterministic) function takes a
set of observations and returns a single preferred hypothesis
(2^O → H). We assume h = Eh(O) to be consistent with
O by definition of Eh, so using this function on its
observation set to determine its favourite hypothesis is a sure way
for the agent to achieve consistency. Note however that a
hypothesis does not need to be generated by Eh to be
consistent with an observation set. As a concrete example of such
a function, and one of the main inspirations of this work,
one can cite the Theorist reasoning system [6] - as long as
it is coupled with a filter selecting a single preferred theory
among the ones initially selected by Theorist.
Note also that hi may only be modified as a consequence
of the application of Eh. We refer to this as the autonomy of the
agent: no other agent can directly impose a given
hypothesis on an agent. As a consequence, only a new observation
(be it a new perception, or an observation communicated
by a fellow agent) can result in a modification of its preferred
hypothesis hi (but not necessarily, of course).
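A minimal sketch of the agent tuple and of the consistency notions of Definition 2 (Python; the representation of hypotheses and observations, and the `cons` and `explain` callables, are assumptions of ours):

```python
class Agent:
    def __init__(self, facts, observations, explain, cons):
        self.F = facts              # common knowledge
        self.O = set(observations)  # perfect memory: grows monotonically
        self.explain = explain      # Eh : 2^O -> H (deterministic)
        self.cons = cons            # Cons(h, O) -> bool
        self.h = explain(self.O)    # favourite hypothesis

    def add_observation(self, o):
        """New observations (perceived or received) are the only thing
        that can trigger a revision of h, and only through Eh."""
        self.O.add(o)
        if not self.cons(self.h, self.O):
            self.h = self.explain(self.O)

    def consistent(self):
        return self.cons(self.h, self.O)

    def consistent_with(self, other):
        return self.consistent() and self.cons(self.h, other.O)

def mutually_consistent(a1, a2):
    """MCons(a1, a2) of Definition 2."""
    return a1.consistent_with(a2) and a2.consistent_with(a1)
```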
We finally define a property of the system that we shall
use in the rest of the paper:
Definition 3 (Bounded Perceptions). A system
involves bounded perceptions for agents iff ∃n0 s.t.
∀t, |O1 ∪ ... ∪ On| ≤ n0. (That is, the number of observations
that can be made by the agents in the system is not infinite.)
Agent Cycle
Now we need to see how these agents will evolve and interact
in their environment. In our context, agents evolve in a
dynamic environment, and we classically assume the following
system cycle:
1. Environment dynamics: the environment evolves
according to the defined rules of the system dynamics.
2. Perception step : agents get perceptions from the
environment. These perceptions are typically partial (e.g.
the agent can only see a portion of a map).
3. Reasoning step: agents compare perception with
predictions, seek explanations for (potential)
difference(s), refine their hypothesis, draw new conclusions.
4. Communication step: agents can communicate
hypotheses and observations with other agents through
a defined protocol. Any agent can only be involved in
one communication with another agent per step.
5. Action step: agents do some practical reasoning using
the models obtained from the previous steps and select
an action. They can then modify the environment by
executing it.
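Schematically, one round of this cycle looks as follows (a purely illustrative skeleton: `env`, `global_protocol` and the agent methods are hypothetical names standing for the five steps above):

```python
def system_round(env, agents, global_protocol):
    env.evolve()                          # 1. environment dynamics
    for a in agents:
        a.perceive(env)                   # 2. (partial) perception
    for a in agents:
        a.revise_hypothesis()             # 3. reasoning step
    for a1, a2 in global_protocol.pair(agents):
        a1.converse_with(a2)              # 4. one bilateral exchange
    for a in agents:                      #    per agent per step
        env.apply(a.select_action())      # 5. action step
```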
The communication of the agents will be further
constrained by topological considerations. At a given time, an
agent will only be able to communicate with a number of
neighbours. Its connections with these other agents may
evolve with its situation in the environment. Typically, an
agent can only communicate with agents that it can sense,
but one could imagine evolving topological constraints on
communication based on a network of communications
between agents where the links are not always active.
Communication
In our system, agents will be able to communicate with each
other. However, due to the aforementioned topological
constraints, they will not be able to communicate with any
agent at any time. Who an agent can communicate with
will be defined dynamically (for instance, this can be a
consequence of the agents being close enough to get in touch).
We will abstractly denote by C(ai, aj, t) the communication
property, in other words, the fact that agents ai and aj can
communicate at time t (note that this relation is assumed
to be symmetric, but of course not transitive). We are now in
a position to define two essential properties of our system.
Definition 4 (Temporal Path). There exists a temporal
communication path at horizon tf (noted Ltf(ai, aj))
between ai and aj iff there exists a sequence of time points
t1, t2, . . . , tn+1 from tf and a sequence of agents k1, k2, . . . , kn
s.t. (i) C(ai, ak1, t1), (ii) C(akn, aj, tn+1), and (iii) ∀p ∈ [1, n − 1],
C(akp, akp+1, tp+1).
Intuitively, what this property says is that it is possible to
find a temporal path in the future that would allow us to link
agents ai and aj via a sequence of intermediary agents. Note
that the time points are not necessarily successive, and that
the sequence of agents may involve the same agents several
times.
Definition 5 (Temporal Connexity). A system is
temporally connex iff ∀t, ∀(i, j) ∈ [1, n]^2, Lt(ai, aj).
In short, a temporally connex system guarantees that any
agent will be able to communicate with any other agent,
no matter how long it might take to do so, at any time. To
put it another way, it is never the case that an agent will be
isolated forever from another agent of the system.
We will next discuss the detail of how communication
concretely takes place in our system. Remember that in this
paper, we only consider the case of bilateral exchanges (an
agent can only speak to a single other agent), and that we
also assume that any agent can only engage in a single
exchange in a given round.
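On a finite communication schedule, the temporal path property of Definition 4 can be decided by a breadth-first search over (agent, time) pairs. The sketch below is ours; it assumes `schedule[t]` lists the unordered pairs of agents satisfying C(·, ·, t):

```python
from collections import deque

def temporal_path_exists(schedule, i, j, t0=0):
    """Check L_{t0}(a_i, a_j) on a finite schedule. A state (k, t)
    means the information can sit at agent k from time t on; from
    there, k may wait for any later round at which it meets someone."""
    horizon = len(schedule)
    seen, queue = {(i, t0)}, deque([(i, t0)])
    while queue:
        k, t = queue.popleft()
        if k == j:
            return True
        for t2 in range(t, horizon):
            for (a, b) in schedule[t2]:
                nxt = b if a == k else a if b == k else None
                if nxt is not None and (nxt, t2 + 1) not in seen:
                    seen.add((nxt, t2 + 1))
                    queue.append((nxt, t2 + 1))
    return False
```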
3. PROTOCOLS AND STRATEGIES
In this section, we discuss the requirements of the
interaction protocols that govern the exchange of messages between
agents, and provide some example instantiations of such
protocols. To clarify the presentation, we distinguish two
levels: the local level, which is concerned with the regulation
of bilateral exchanges; and the global level, which essentially
regulates the way agents can actually engage into a
conversation. At each level, we separate what is specified by the
protocol, and what is left to agents" strategies.
Local Protocol and Strategies
We start by inspecting local protocols and strategies that
will regulate the communication between the agents of the
system. As we limit ourselves to bilateral communication,
these protocols will simply involve two agents. Such a protocol
will have to meet one basic requirement to be satisfactory.
• consistency (CONS)- a local protocol has to
guarantee the mutual consistency of agents upon termination
(which implies termination of course).
Figure 1: A Hypotheses Exchange Protocol [1]
One example of such a protocol is the protocol described in [1]
that is pictured in Fig. 1. To further illustrate how such a
protocol can be used by agents, we give some details on a
possible strategy: upon receiving a hypothesis h1 (propose(h1) or
counterpropose(h1)) from a1, agent a2 is in state 2 and has
the following possible replies: counterexample (if the agent
knows an example contradicting the hypothesis, or not
explained by this hypothesis), challenge (if the agent lacks
evidence to accept this hypothesis), counterpropose (if the
agent agrees with the hypothesis but prefers another one),
or accept (if it is indeed as good as its favourite
hypothesis). This strategy guarantees, among other properties, the
eventual mutual logical consistency of the involved agents
[1].
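The responding side of this strategy can be summarised as follows (our paraphrase of the state-2 choices; the predicates `explains`, `has_evidence` and `prefers` are placeholders for the agent's actual reasoning machinery):

```python
def reply_in_state_2(h_own, h_received, observations,
                     explains, has_evidence, prefers):
    """Select a reply to a (counter)proposed hypothesis h_received."""
    for o in observations:
        if not explains(h_received, o):   # contradicted or unexplained
            return ("counterexample", o)
    if not has_evidence(h_received):      # not enough evidence to accept
        return ("challenge", h_received)
    if prefers(h_own, h_received):        # agrees but prefers its own
        return ("counterpropose", h_own)
    return ("accept", h_received)         # as good as the favourite one
```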
Global Protocol
The global protocol regulates the way bilateral exchanges
will be initiated between agents. At each turn, agents will
concurrently send one weighted request to communicate to
other agents. This weight is a value measuring the agent's
willingness to converse with the targeted agent (in practice,
this can be based on different heuristics, but we shall make
some assumptions on agents" strategies, see below). Sending
such a request is a kind of conditional commitment for the
agent. An agent sending a weighted request commits to
engage in conversation with the target if it does not itself
receive and accept another request. Once all requests have
been received, each agent replies with either an accept or
a reject. By answering with an accept, an agent makes a
full commitment to engage in conversation with the sender.
Therefore, it can only send one accept in a given round, as an
agent can only participate in one conversation per time step.
When all responses have been received, each agent receiving
an accept can either initiate a conversation using the local
protocol or send a cancel if it has accepted another request.
At the end of all the bilateral exchanges, the agents
engaged in conversation are discarded from the protocol. Then
each of the remaining agents resends a request and the
process iterates until no more requests are sent.
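One simple way to realise this global protocol is a greedy pairing by decreasing weight (our sketch; as argued below, COMM and REQU guarantee that the maximal-weight request is always accepted, so a single sorted pass emulates the iterated request/accept/cancel rounds):

```python
def pairing_round(agents, wants, weight):
    """wants(a): agents a is prepared to talk to; weight(a, b): the
    weight a attaches to a request to b. Returns the conversations of
    this round, at most one per agent."""
    requests = [(a, b) for a in agents for b in wants(a) if b is not a]
    requests.sort(key=lambda ab: weight(*ab), reverse=True)
    pairs, free = [], set(agents)
    for a, b in requests:
        if a in free and b in free:
            pairs.append((a, b))   # full commitment: one conversation
            free -= {a, b}         # per agent per round
    return pairs
```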
Global Strategy
We now define four requirements for the strategies used by
agents, depending on their role in the protocol: two are
concerned with the requestee role (how to decide who the
agent wishes to communicate with?), the other two with the
responder role (how to decide which communication request
to accept or not?).
• Willingness to solve inconsistencies (SOLVE) - agents
want to communicate with any other agent unless
they know they are mutually consistent.
• Focus on solving inconsistencies (FOCUS) - agents do
not request communication with an agent with whom
they know they are mutually consistent.
• Willingness to communicate (COMM) - agents cannot
refuse a weighted communication request, unless they
have just received or sent a request with a greater
weight.
• Commitment to communication requests (REQU) -
agents cannot accept a weighted communication
request if they have themselves sent a communication
request with a greater weight. Therefore, they will
not cancel their request unless they have received a
communication request with greater weight.
Now the protocol structure, together with the properties
COMM+REQU, ensure that a request can only be rejected
if its target agent engages in communication with another
agent. Suppose indeed that agent ai wants to communicate
with aj by sending a request with weight w. COMM
guarantees that an agent receiving a weighted request will either
accept this communication, accept a communication with a
greater weight or wait for the answer to a request with a
greater weight. This ensures that the request with
maximal weight will be accepted and not cancelled (as REQU
ensures that an agent sending a request can only cancel it if
it accepts another request with greater weight). Therefore
at least two agents will engage in conversation per round
of the global protocol. As the protocol ensures that ai can
resend its request while aj is not engaged in a conversation,
there will be a turn in which aj must engage in a
conversation, either with ai or another agent.
These requirements concern request sending and
acceptation, but agents also need some strategy of weight
attribution. We describe below an altruist strategy, used in our
experiments. Being cooperative, an agent may want to know
more of the communication wishes of other agents in order to
improve the overall allocation of exchanges to agents. A
context request step is then added to the global protocol. Before
sending their chosen weighted request, agents attribute a
weight to all agents they are prepared to communicate with,
according to some internal factors. In the simplest case, this
weight will be 1 for all agents with whom the agent is not
sure of being mutually consistent (ensuring SOLVE), other
agents not being considered for communication (ensuring
FOCUS). The agent then sends a context request to all agents
with whom communication is considered. This request also
provides information about the sender (list of considered
communications along with their weight). After reception
of all the context requests, agents will either reply with a
deny, iff they are already engaged in a conversation (in which
case, the requesting agent will not consider communication
with them anymore in this turn), or an inform giving the
requester information about the requests it has sent and
received. When all replies have been received, each agent can
calculate the weight of all requests concerning it. It does so
by subtracting from the weight of its request the weight of
all requests concerning either it or its target. That is, the
final weight of the request from ai to aj is
Wi,j = wi,j + wj,i − (Σk∈R(i)−{j} wi,k + Σk∈S(i)−{j} wk,i + Σk∈R(j)−{i} wj,k + Σk∈S(j)−{i} wk,j),
where wi,j is the weight of the request of ai to aj, R(i) is the
set of indices of agents having received a request from ai,
and S(i) is the set of indices of agents having sent a request
to ai. It then finally sends a weighted request to the agent
that maximises this weight (or waits for a request) as
described in the global protocol.
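For concreteness, this final weight can be computed as follows (our sketch; `w` maps ordered index pairs to request weights, with missing entries read as 0, and `R[i]` / `S[i]` are the received-from / sent-to index sets just defined):

```python
def final_weight(i, j, w, R, S):
    """Wi,j = wi,j + wj,i minus the weight of every other request
    involving i or j as sender or receiver."""
    others = (sum(w.get((i, k), 0) for k in R[i] if k != j)
              + sum(w.get((k, i), 0) for k in S[i] if k != j)
              + sum(w.get((j, k), 0) for k in R[j] if k != i)
              + sum(w.get((k, j), 0) for k in S[j] if k != i))
    return w.get((i, j), 0) + w.get((j, i), 0) - others
```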
4. (CONDITIONAL) CONVERGENCE TO
GLOBAL CONSISTENCY
In this section we will show that the requirements
regarding protocols and strategies just discussed will be sufficient
to ensure that the system will eventually converge towards
global consistency, under some conditions. We first show
that, if two agents are not mutually consistent at some time,
then there will be necessarily a time in the future such that
an agent will learn a new observation, being it because it is
new for the system, or by learning it from another agent.
Lemma 1. Let S be a system populated by n agents
a1, a2, ..., an, temporally connex, and involving bounded
perceptions for these agents. Let n1 be the sum of the
cardinalities of the pairwise intersections of observation sets
(n1 = Σ(i,j)∈[1,n]^2 |Oi ∩ Oj|). Let n2 be the cardinality of
the union of all agents' observation sets (n2 = |O1 ∪ ... ∪ On|).
If ¬MCons(ai, aj) at time t0, there is necessarily a time
t > t0 s.t. either n1 or n2 will increase.
Proof. Suppose that there exist a time t0
and indices (i, j) s.t. ¬MCons(ai, aj). We will
use mt0 = Σ(k,l)∈[1,n]^2 εComm(ak, al, t0), where
εComm(ak, al, t0) = 1 if ak and al have
communicated at least once since t0, and 0 otherwise.
Temporal connexity guarantees that there exist t1, ..., tm+1
and k1, ..., km s.t. C(ai, ak1, t1), C(akm, aj, tm+1), and
∀p ∈ [1, m − 1], C(akp, akp+1, tp+1). Clearly, if MCons(ai, ak1),
MCons(akm, aj) and ∀p, MCons(akp, akp+1), we have
MCons(ai, aj), which contradicts our hypothesis (MCons
being transitive, MCons(ai, ak1) ∧ MCons(ak1, ak2) implies
MCons(ai, ak2), and so on until MCons(ai, akm) ∧
MCons(akm, aj), which implies MCons(ai, aj)).
At least two agents in this chain are then necessarily
inconsistent (¬MCons(ai, ak1), or ¬MCons(akm, aj), or ∃p0 s.t.
¬MCons(akp0, akp0+1)). Let ak and al be these two
neighbours at a time t' > t0.1 The SOLVE property ensures
that either ak or al will send a communication request to
the other agent at time t'. As shown before, this in turn
ensures that at least one of these agents will be involved in
a communication. Then there are two possibilities:
(case i) ak and al communicate at time t'. In this case,
we know that ¬MCons(ak, al). This and the CONS
property ensure that at least one of the agents must change its
1 Strictly speaking, the transitivity of MCons only ensures
that ak and al are inconsistent at a time t'' ≥ t0 that can
be different from the time t' at which they can communicate.
But if they become consistent between t'' and t' (or
inconsistent between t'' and t'), it means that at least one
of them has changed its hypothesis between t'' and t', that
is, after t0. We can then apply the reasoning of case iib.
hypothesis, which in turn, since agents are autonomous,
implies at least one exchange of observations. But then |Ok ∩ Ol|
is bound to increase: n1(t') > n1(t0).
(case ii) ak communicates with ap at time t'. We then have
again two possibilities:
(case iia) ak and ap did not communicate since t0. But then
εComm(ak, ap, t0) had value 0 and takes value 1. Hence mt0
increases.
(case iib) ak and ap did communicate at some earlier time
t'' > t0. The CONS property of the protocol ensures that
MCons(ak, ap) at that time. Now the fact that they
communicate again and FOCUS imply that at least one of them did
change its hypothesis in the meantime. The fact that agents
are autonomous implies in turn that a new observation
(perceived or received from another agent) necessarily provoked
this change. The latter case would ensure the existence of a
time t''' > t0 and an agent aq s.t. either |Op ∩ Oq| or |Ok ∩ Oq|
increases by 1 at that time (implying n1(t''') > n1(t0)). The
former case means that the agent gets a new perception o
at some time t''' > t0. If that observation was unknown in the
system before, then n2(t''') > n2(t0). If some agent aq already
knew this observation before, then either Op ∩ Oq or Ok ∩ Oq
increases by 1 at time t''' (which implies that n1(t''') > n1(t0)).
Hence, ¬MCons(ai, aj) at time t0 guarantees that either:
− ∃t' > t0 s.t. n1(t') > n1(t0); or
− ∃t' > t0 s.t. n2(t') > n2(t0); or
− ∃t' > t0 s.t. mt0 increases by 1 at time t'.
By iterating the reasoning from t' (but keeping t0 as the
time reference for mt0), we can eliminate the third case
(mt0 is an integer bounded by n^2, which means that
after a maximum of n^2 iterations we will necessarily fall into
one of the two other cases). As a result, we have proven that if
¬MCons(ai, aj) at time t0, there is necessarily a time t' s.t.
either n1 or n2 will increase.
Theorem 1 (Global consistency). Let S be a
system populated by n agents a1, a2, ..., an, temporally connex,
and involving bounded perceptions for these agents. Let
Cons(ai, aj) be a transitive consistency property. Then any
protocol and strategies satisfying properties CONS, SOLVE,
FOCUS, COMM and REQU guarantee that the system will
converge towards global consistency.
Proof. For the sake of contradiction, let us assume that
∃I, J ∈ [1, n] s.t. ∀t, ∃t0 > t s.t. ¬Cons(aI, aJ, t0).
Using the lemma, this implies that ∃t' > t0 s.t. either
n1(t') > n1(t0) or n2(t') > n2(t0). But we can apply
the same reasoning taking t = t', which gives us
t1 > t' > t0 s.t. ¬Cons(aI, aJ, t1), and in turn t'' > t1
s.t. either n1(t'') > n1(t1) or n2(t'') > n2(t1). By
successive iterations we can then construct an infinite sequence
t0, t1, ..., tn, ..., which can be divided into two sub-sequences
t'0, t'1, ..., t'n, ... and t''0, t''1, ..., t''n, ... s.t. n1(t'0) < n1(t'1) < ... < n1(t'n)
and n2(t''0) < n2(t''1) < ... < n2(t''n). One of these sub-sequences
has to be infinite. However, n1(t'i) and n2(t''i) are strictly
increasing, integer-valued, and bounded, which implies that both
sub-sequences are finite. Contradiction.
What the previous result essentially shows is that, in a
system where no agent will be isolated from the rest of the
agents for ever, only very mild assumptions on the protocols
and strategies used by agents suffice to guarantee
convergence towards system consistency in a finite amount of time
(although it might take very long). Unfortunately, in many
critical situations, it will not be possible to assume this
temporal connexity. As distributed approaches such as the one
advocated in this paper are often presented precisely as a
good way to tackle problems of reliability or problems of
dependence on a center that are of utmost importance in these
critical applications, it is certainly interesting to further
explore how such a system would behave when we relax this
assumption.
5. EXPERIMENTAL STUDY
This experiment involves agents trying to escape from a
burning building. The environment is described as a spatial
grid with a set of walls and (thankfully) some exits. Time
and space are considered discrete. Time is divided in rounds.
Agents are localised by their position on the spatial grid.
These agents can move and communicate with other agents.
In a round, an agent can move by one cell in any of the four
cardinal directions, provided it is not blocked by a wall. In
this application, agents communicate with any other agent
(but, recall, a single one) given that this agent is in view,
and that they have not yet exchanged their current favoured
hypothesis. Suddenly, a fire erupts in these premises. From
this moment, the fire propagates. Each round, for each cell
where there is fire, the fire propagates in the four directions.
However, the fire cannot propagate through a wall. If the
fire propagates into a cell where an agent is positioned, that
agent burns and is considered dead. It can of course no
longer move nor communicate. If an agent gets to an exit,
it is considered saved, and can no longer be burned. Agents
know the environment and the rules governing the
dynamics of this environment, that is, they know the map as well
as the rules of fire propagation previously described. They
also locally perceive this environment, but cannot see
further than 3 cells away, in any direction. Walls also block
the line of view, preventing agents from seeing behind them.
Within their sight, they can see other agents and whether
or not the cells they see are on fire. All these perceptions
are memorised.
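These dynamics admit a direct implementation (our sketch; walls are encoded as blocked borders between adjacent cells, stored as pairs of cells):

```python
def propagate_fire(fire, walls, width, height):
    """One round of fire dynamics: fire spreads to the four adjacent
    cells unless a wall blocks the border between the two cells."""
    new_fire = set(fire)
    for (x, y) in fire:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and ((x, y), nxt) not in walls
                    and (nxt, (x, y)) not in walls):
                new_fire.add(nxt)
    return new_fire
```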
We now show how this instantiates the abstract
framework presented in the paper.
• O = {Fire(x, y, t), NoFire(x, y, t), Agent(ai, x, y, t)}
Observations can then be positive (o ∈ P(O) iff ∃h ∈
H s.t. h |= o) or negative (o ∈ N(O) iff ∃h ∈ H s.t.
h |= ¬o).
• H={FireOrigin(x1, y1, t1)∧...∧FireOrigin(xl, yl, tl)}
Hypotheses are conjunctions of FireOrigins.
• Cons(h, O) consistency relation satisfies:
- coherence : ∀o ∈ N(O), h |= ¬o.
- completeness : ∀o ∈ P(O), h |= o.
- minimality : For all h' ∈ H, if h' is coherent and
complete for O, then h is preferred to h' according
to the preference relation (h ≤p h').2
2 The preference relation selects first the minimal number of
origins, then the most recent (least preemptive strategy [6]),
then uses some arbitrary fixed ranking to discriminate ex aequo.
The resulting relation is a total order, hence minimality implies
that there will be a single h s.t. Cons(O, h) for a given O. This in
turn means that MCons(ai, aj) iff Cons(ai), Cons(aj), and
hi = hj. This relation is then transitive and symmetric.
• Eh takes O as argument and returns the minimum w.r.t.
≤p among the coherent and complete hypotheses for O
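Putting the pieces together, Eh for this instantiation can be sketched as a minimum over candidate hypotheses under the preference order of footnote 2 (our encoding: a hypothesis is a tuple of (x, y, t) fire origins, and `coherent` / `complete` implement the two conditions above):

```python
def preference_key(h):
    # fewest origins first, then most recent origins, then a fixed
    # arbitrary ranking to break ties (footnote 2)
    return (len(h), sorted(-t for (_x, _y, t) in h), sorted(h))

def Eh(O, candidates, coherent, complete):
    """min w.r.t. <=p among hypotheses coherent and complete for O;
    assumes at least one admissible candidate exists."""
    return min((h for h in candidates
                if coherent(h, O) and complete(h, O)),
               key=preference_key)
```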
5.1 Experimental Evaluation
We will classically (see e.g. [3, 4]) assess the effectiveness
and efficiency of different interaction protocols.
Effectiveness of a protocol
The proportion of agents surviving the fire over the initial
number of agents involved in the experiment will determine
the effectiveness of a given protocol. If this value is high,
the protocol has been effective to propagate the information
and/or for the agents to refine their hypotheses and
determine the best way to the exit.
Efficiency of a protocol
Typically, the use of supporting information will involve a
communication overhead. We will assume here that the
efficiency of a given protocol is characterised by the data flow
induced by this protocol. In this paper we will only discuss
this aspect wrt. local protocols. The main measure that we
shall then use here is the mean total size of messages that
are exchanged by agents per exchange (hence taking into
account both the number of messages and the actual size of the
messages, because it could be that messages happen to be
very big, containing e.g. a large number of observations,
which could counter-balance a low number of messages).
5.2 Experimental Settings
The chosen experimental settings are the following:
• Environmental topology- Performances of
information propagation are highly constrained by the
environment topology. The perception skills of the agents
depend on the openness of the environment. With
a large number of walls the perceptions of agents are
limited, and also the number of possible inter-agent
communications, whereas an open environment will
provide optimal possibilities of perception and
information propagation. Thus, we propose a topological
index (see below) as a common basis to characterize the
environments (maps) used during the experiments; a code
sketch of its computation is given after this list.
The topological index (TI) is the ratio of the number of
cells that can be perceived by agents summed up from
all possible positions, divided by the number of cells
that would be perceived from the same positions but
without any walls. (The closer to 1, the more open the
environment). We shall also use two additional, more
classical [10], measures: the characteristic path length3
(CPL) and the clustering coefficient4
(CC).
• Number of agents- The propagation of information
also depends on the initial number of agents involved
during an experimentation. For instance, the more
agents, the more potential communications there is.
This means that there will be more potential for
propagation, but also that the bilateral exchange restriction
will be more crucial.
3 The CPL is the median of the means of the shortest path
lengths connecting each node to all other nodes.
4 The CC characterises the isolation degree of a region of an
environment in terms of accessibility (number of roads still
usable to reach this region).
Map    T.I. (%)   C.P.L.   C.C.
69-1   69.23      4.5      0.69
69-2   68.88      4.38     0.65
69-3   69.80      4.25     0.67
53-1   53.19      5.6      0.59
53-2   53.53      6.38     0.54
53-3   53.92      6.08     0.61
38-1   38.56      8.19     0.50
38-2   38.56      7.3      0.50
38-3   38.23      8.13     0.50
Table 1: Topological Characteristics of the Maps
• Initial positions of the agents - Initial positions of the
agents have a significant influence on the overall
behavior of an instance of our system: being close to
an exit will (in general) ease the escape.
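The topological index announced above can be computed as follows (our sketch; `visible(p, with_walls)` is a placeholder for the map's line-of-sight routine, returning the set of cells perceived from position p):

```python
def topological_index(free_cells, visible):
    """TI = total number of cells perceived from every position,
    divided by the same total computed as if there were no walls
    (the closer to 1, the more open the environment)."""
    with_walls = sum(len(visible(p, True)) for p in free_cells)
    without_walls = sum(len(visible(p, False)) for p in free_cells)
    return with_walls / without_walls
```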
5.3 Experimental environments
We chose to run experiments on three very
different topological indexes (69% for open environments, 53%
for mixed environments, and 38% for labyrinth-like
environments).
Figure 2: Two maps (left: TI=69%, right TI=38%)
We designed three different maps for each index (Fig. 2
shows two of them), containing the same maximum number
of agents (36 agents max.) with a maximum density of one
agent per cell, the same number of exits and a similar fire
origin (e.g. starting time and position). The three different
maps of a given index are designed as follows. The first map
is a model of an existing building floor. The second map has
the same enclosure, exits and fire origin as the first one,
but the number and location of walls are different (wall
locations are designed by a heuristic which randomly creates
walls on the spatial grid such that no fully closed rooms are
created and that no exit is closed). The third map is
characterised by a geometrical enclosure in which wall locations
are also designed with the aforementioned heuristic. Table 1
summarizes the different topological measures
characterizing these different maps. It is worth pointing out that the
values confirm the relevance of TI (maps with a high TI
have a low CPL and a high CC). However, the CPL and CC
allow us to further refine the differences between the maps,
e.g. between 53-1 and 53-2.
5.4 Experimental Results
For each triple of maps defined as above we conduct the
same experiments. In each experiment, the society differs in
terms of its initial proportion of involved agents, from 1%
to 100%. This initial proportion represents the percentage
of involved agents with regards to the possible maximum
number of agents. For each map and each initial proportion,
we select randomly 100 different initial agents' locations.
For each of those different locations we execute the system
once for each different interaction protocol.
Effectiveness of Communication and Argumentation
The first experiment that we set up aims at testing how
effective hypotheses exchange (HE) is, and in particular how
the topological aspects will affect this effectiveness. In order
to do so, we have computed the ratio of improvement offered
by that protocol over a situation where agents could simply
not communicate (no comm). To get further insights as
to what extent the hypotheses exchange was really crucial,
we also tested a much less elaborate protocol consisting
of mere observation exchanges (OE). More precisely, this
protocol requires that each agent stores any unexpected
observation that it perceives, and agents simply exchange
their respective lists of observations when they discuss. In
this case, the local protocol is different (note in
particular that it does not guarantee mutual consistency), but the
global protocol remains the same (at the only exception that
agents" motivation to communicate is to synchronise their
list of observations, not their hypothesis). While this protocol is
at best as effective as HE, it has the advantage of being more
efficient (this is obvious wrt the number of messages, which
will be limited to 2; it is less straightforward as far as the size of
messages is concerned, but the rough observation that the
exchange of observations can be viewed as a flat version
of the challenge is helpful to see this). The results of these
experiments are reported in Fig. 3.
Figure 3: Comparative effectiveness ratio gain of
protocols when the proportion of agents augments
The first observation that needs to be made is that
communication improves the effectiveness of the process, and
this ratio increases as the number of agents grows in the
system. The second lesson that we learn here is that closed
environments make communication relatively more effective
compared with no communication. Maps exhibiting a T.I. of 38% are
constantly above the two others, and 53% are still slightly but
significantly better than 69%. However, these curves also
suggest, perhaps surprisingly, that HE outperforms OE in
precisely those situations where the ratio gain is less
important (the only noticeable difference occurs for rather open
maps where T.I. is 69%). This may be explained as follows:
when a map is open, agents have many potential explanation
candidates, and argumentation becomes useful to
discriminate between those. When a map is labyrinth-like, there are
fewer possible explanations for an unexpected event.
Importance of the Global Protocol
The second set of experiments seeks to evaluate the
importance of the design of the global protocol. We tested our
protocol against a local broadcast (LB) protocol. Local
broadcast means that all the neighbours agents perceived
by an agent will be involved in a communication with that
agent in a given round -we alleviate the constraint of a
single communication by agent. This gives us a rough upper
bound upon the possible ratio gain in the system (for a given
local protocol). Again, we evaluated the ratio gain induced
by that LB over our classical HE, for the three different
classes of maps. The results are reported in Fig. 4.
Figure 4: Ratio gain of local broadcast over
hypotheses exchange
Note to begin with that the ratio gain is 0 when the
proportion of agents is 5%, which is easily explained by the fact
that it corresponds to situations involving only two agents.
We first observe that all classes of maps witness a ratio
gain increasing when the proportion of agents augments: the
gain reaches 10 to 20%, depending on the class of maps
considered. If one compares this with the improvement reported
in the previous experiment, it appears to be of the same
magnitude. This illustrates that the design of the global
protocol cannot be ignored, especially when the proportion
of agents is high. However, we also note that the
effectiveness ratio gain curves have very different shapes in both
cases: the gain induced by the accuracy of the local protocol
increases very quickly with the proportion of agents, while
the curve is really smooth for the global one.
Now let us observe more carefully the results reported
here: the curve corresponding to a TI of 53% is above that
corresponding to 38%. This is so because the more open
a map, the more opportunities to communicate with more
than one agent (and hence benefit from broadcast).
However, we also observe that curve for 69% is below that for
53%. This is explained as follows: in the case of 69%, the
potential gain to be made in terms of surviving agents is
much lower, because our protocols already give rather
efficient outcomes anyway (quickly reaching 90%, see Fig. 3).
A simple rule of thumb could be that when the number of
agents is small, special attention should be put on the local
protocol, whereas when that number is large, one should
carefully design the global one (unless the map is so open
that the protocol is already almost optimally efficient).
Efficiency of the Protocols
The final experiment reported here is concerned with the
analysis of the efficiency of the protocols. We analyse here
the mean size of the totality of the messages that are
exchanged by agents (mean size of exchanges, for short) using
the following protocols: HE, OE, and two variant
protocols. The first one is an intermediary restricted hypotheses
exchange protocol (RHE). RHE is as follows: it does not
involve any challenge nor counter-propose, which means that
agents cannot switch their roles during the protocol (this
differs from HE in that respect). In short, RHE allows an agent
to exhaust its partner's criticisms, and eventually this
partner will come to adopt the agent's hypothesis. Note that this
means that the autonomy of the agent is not preserved here
(as an agent will essentially accept any hypothesis it cannot
undermine), with the hope that the gain in efficiency will be
significant enough to compensate a loss in effectiveness. The
second variant protocol is a complete observation exchange
protocol (COE). COE uses the same principles as OE, but
includes in addition all critical negative examples (nofire) in
the exchange (thus giving all examples used as arguments
by the hypotheses exchange protocol), hence improving
effectiveness. Results for map 69-1 are shown in Fig. 5.
Figure 5: Mean size of exchanges
First, we can observe that the ordering of the
protocols, from the least efficient to the most efficient, is COE,
HE, RHE and then OE. HE being more efficient than COE
proves that the argumentation process gains efficiency by
selecting when it is needed to provide negative examples, which
have less impact than positive ones in our specific testbed.
However, by communicating hypotheses before eventually
giving observations to support them (HE) instead of directly
giving the most crucial observations (OE), the argumentation
process doubles the size of data exchanges. This is the cost of
ensuring consistency at the end of the exchange (a property
that OE does not support). Also significant is the fact that
the mean size of exchanges is slightly higher when the
number of agents is small. This is explained by the fact that in
these cases only very few agents have relevant information
in their possession, and that they will need to communicate
a lot in order to come up with a common view of the
situation. When the number of agents increases, this knowledge
is distributed over more agents which need shorter
discussions to get to mutual consistency. As a consequence, the
relative gain in efficiency of using RHE appears to be better
when the number of agents is small: when it is high, they
will hardly argue anyway. Finally, it is worth noticing that
the standard deviation for these experiments is rather high,
which means that the conversations do not converge to any
stereotypical pattern.
6. CONCLUSION
This paper has investigated the properties of a
multiagent system where each (distributed) agent locally perceives
its environment, and tries to reach consistency with other
agents despite severe communication restrictions. In
particular we have exhibited conditions allowing convergence, and
experimentally investigated a typical situation where those
conditions cannot hold. There are many possible extensions
to this work, the first being to further investigate the
properties of different global protocols belonging to the class we
identified, and their influence on the outcome. There are in
particular many heuristics, highly dependent on the context
of the study, that could intuitively yield interesting results
(in our study, selecting the recipient on the basis of what can
be inferred from its observed actions could be such a
heuristic). One obvious candidate for longer-term issues concerns
the relaxation of the assumption of perfect sensing.
7. REFERENCES
[1] G. Bourgne, N. Maudet, and S. Pinson. When agents
communicate hypotheses in critical situations. In
Proceedings of DALT-2006, May 2006.
[2] P. Harvey, C. F. Chang, and A. Ghose. Support-based
distributed search: a new approach for multiagent
constraint processing. In Proceedings of AAMAS06,
2006.
[3] H. Jung and M. Tambe. Argumentation as distributed
constraint satisfaction: Applications and results. In
Proceedings of AGENTS01, 2001.
[4] N. C. Karunatillake and N. R. Jennings. Is it worth
arguing? In Proceedings of ArgMAS 2004, 2004.
[5] S. Ontañón and E. Plaza. Arguments and
counterexamples in case-based joint deliberation. In
Proceedings of ArgMAS-2006, May 2006.
[6] D. Poole. Explanation and prediction: An architecture
for default and abductive reasoning. Computational
Intelligence, 5(2):97-110, 1989.
[7] I. Rahwan, S. D. Ramchurn, N. R. Jennings,
P. McBurney, S. Parsons, and L. Sonenberg.
Argumention-based negotiation. The Knowledge
Engineering Review, 4(18):345-375, 2003.
[8] N. Roos, A. ten Tije, and C. Witteveen. A protocol
for multi-agent diagnosis with spatially distributed
knowledge. In Proceedings of AAMAS03, 2003.
[9] N. Roos, A. ten Tije, and C. Witteveen. Reaching
diagnostic agreement in multiagent diagnosis. In
Proceedings of AAMAS04, 2004.
[10] T. Takahashi, Y. Kaneda, and N. Ito. Preliminary
study - using robocuprescue simulations for disasters
prevention. In Proceedings of SRMED2004, 2004.
| global consistency;hypothesis exchange protocol;inter-agent communication;favoured hypothesis;bounded perception;context request step;negotiation and argumentation;bilateral exchange;temporal path;sequence of time point;agent communication language and protocol;time point sequence;topological constraint;mutual consistency;multiagent system;consistency;observation set |
train_I-76 | Negotiation by Abduction and Relaxation | This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals. | 1. INTRODUCTION
Automated negotiation has received increasing
attention in multi-agent systems, and a number of frameworks
have been proposed in different contexts ([1, 2, 3, 5, 10, 11,
13, 14], for instance). Negotiation usually proceeds in a
series of rounds and each agent makes a proposal at every
round. An agent that received a proposal responds in two
ways. One is a critique which is a remark as to whether
or not (parts of) the proposal is accepted. The other is a
counter-proposal which is an alternative proposal made in
response to a previous proposal [13].
To see these proposals in one-to-one negotiation, suppose
the following negotiation dialogue between a buyer agent B
and a seller agent S. (Bi (or Si) represents an utterance of
B (or S) in the i-th round.)
B1: I want to buy a personal computer of the brand b1,
with the specification of CPU:1GHz, Memory:512MB,
HDD: 80GB, and a DVD-RW driver. I want to get it
at the price under 1200 USD.
S1: We can provide a PC with the requested specification
if you pay for it by cash. In this case, however, service
points are not added for this special discount.
B2: I cannot pay it by cash.
S2: In a normal price, the requested PC costs 1300 USD.
B3: I cannot accept the price. My budget is under 1200
USD.
S3: We can provide another computer with the requested
specification, except that it is made by the brand b2.
The price is exactly 1200 USD.
B4: I do not want a PC of the brand b2. Instead, I can
downgrade a driver from DVD-RW to CD-RW in my
initial proposal.
S4: Ok, I accept your offer.
In this dialogue, in response to the opening proposal B1,
the counter-proposal S1 is returned. In the rest of the
dialogue, B2, B3, S4 are critiques, while S2, S3, B4 are
counterproposals.
Critiques are produced by evaluating a proposal in a
knowledge base of an agent. In contrast, making counter-proposals
involves generating an alternative proposal which is more
favorable to the responding agent than the original one.
It is known that there are two ways of producing
counter-proposals: extending the initial proposal or amending part
of the initial proposal. According to [13], the first type
appears in the dialogue: A: I propose that you provide me
with service X. B: I propose that I provide you with
service X if you provide me with service Z. The second type
is in the dialogue: A: I propose that I provide you with
service Y if you provide me with service X. B: I propose
that I provide you with service X if you provide me with
service Z. A negotiation proceeds by iterating such
give-and-take dialogues until it reaches an agreement/disagreement.
In those dialogues, agents generate (counter-)proposals by
reasoning on their own goals or objectives. The objective
of the agent A in the above dialogues is to obtain service
X. The agent B proposes conditions to provide the
service. In the process of negotiation, however, it may happen
that agents are obliged to weaken or change their initial
goals to reach a negotiated compromise. In the dialogue of
a buyer agent and a seller agent presented above, a buyer
agent changes its initial goal by downgrading a driver from
DVD-RW to CD-RW. Such behavior is usually represented
as specific meta-knowledge of an agent or specified as
negotiation protocols in particular problems. Currently, there is
no computational logic for automated negotiation which has
general inference rules for producing (counter-)proposals.
The purpose of this paper is to mechanize a process of
building (counter-)proposals in one-to-one negotiation
dialogues. We suppose an agent who has a knowledge base
represented by a logic program. We then introduce
methods for generating three different types of proposals. First,
we use the technique of extended abduction in artificial
intelligence [8, 15] to construct a conditional proposal as an
extension of the original one. Second, we use the technique
of relaxation in cooperative query answering for databases
[4, 6] to construct a neighborhood proposal as an amendment
of the original one. Third, combining extended abduction
and relaxation, conditional neighborhood proposals are
constructed as amended extensions of the original proposal. We
develop a negotiation protocol between two agents based on
the exchange of these counter-proposals and critiques. We
also provide procedures for computing proposals in logic
programming.
This paper is organized as follows. Section 2 introduces
a logical framework used in this paper. Section 3 presents
methods for constructing proposals, and provides a
negotiation protocol. Section 4 provides methods for computing
proposals in logic programming. Section 5 discusses related
works, and Section 6 concludes the paper.
2. PRELIMINARIES
Logic programs considered in this paper are extended
disjunctive programs (EDP) [7]. An EDP (or simply a program)
is a set of rules of the form:
L1 ; · · · ; Ll ← Ll+1 , . . . , Lm, not Lm+1 , . . . , not Ln
(n ≥ m ≥ l ≥ 0) where each Li is a positive/negative
literal, i.e., A or ¬A for an atom A, and not is negation as
failure (NAF). not L is called an NAF-literal. The symbol
; represents disjunction. The left-hand side of the rule
is the head, and the right-hand side is the body. For each rule r of the above form, head(r), body+(r) and body−(r) denote the sets of literals {L1, . . . , Ll}, {Ll+1, . . . , Lm}, and {Lm+1, . . . , Ln}, respectively. Also, not body−(r) denotes the set of NAF-literals {not Lm+1, . . . , not Ln}. A disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with its corresponding sets of literals. A rule r is often written as head(r) ← body+(r), not body−(r) or head(r) ← body(r) where body(r) = body+(r) ∪ not body−(r).
A rule r is disjunctive if head(r) contains more than one
literal. A rule r is an integrity constraint if head(r) = ∅; and
r is a fact if body(r) = ∅. A program is NAF-free if no
rule contains NAF-literals. Two rules/literals are identified
with respect to variable renaming. A substitution is a
mapping from variables to terms θ = {x1/t1, . . . , xn/tn}, where
x1, . . . , xn are distinct variables and each ti is a term
distinct from xi. Given a conjunction G of (NAF-)literals, Gθ
denotes the conjunction obtained by applying θ to G. A
program, rule, or literal is ground if it contains no variable.
A program P with variables is a shorthand of its ground
instantiation Ground(P), the set of ground rules obtained
from P by substituting variables in P by elements of its
Herbrand universe in every possible way.
The semantics of an EDP is defined by the answer set semantics [7]. Let Lit be the set of all ground literals in the language of a program. Suppose a program P and a set of literals S (⊆ Lit). Then, the reduct P^S is the program which contains the ground rule head(r) ← body+(r) iff there is a rule r in Ground(P) such that body−(r) ∩ S = ∅. Given an NAF-free EDP P, Cn(P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground(P), body(r) ⊆ Cn(P) implies head(r) ∩ Cn(P) ≠ ∅; and (ii) logically closed, i.e., it is either consistent or equal to Lit. Given an EDP P and a set S of literals, S is an answer set of P if S = Cn(P^S). A program has none, one, or multiple answer sets in general. An answer set is consistent if it is not Lit. A program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.
Abductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming. An abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15]. An abductive program is a pair ⟨P, H⟩ where P is an EDP and H is a set of literals called abducibles. When a literal L ∈ H contains variables, any instance of L is also an abducible. An abductive program ⟨P, H⟩ is consistent if P is consistent. Throughout the paper, abductive programs are assumed to be consistent unless stated otherwise. Let G = L1, . . . , Lm, not Lm+1, . . . , not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm+1, . . . , Ln appears in L1, . . . , Lm. A set S of ground literals satisfies the conjunction G if { L1θ, . . . , Lmθ } ⊆ S and { Lm+1θ, . . . , Lnθ } ∩ S = ∅ for some ground instance Gθ with a substitution θ. Let ⟨P, H⟩ be an abductive program and G a conjunction as above. A pair (E, F) is an explanation of an observation G in ⟨P, H⟩ if¹:
1. (P \ F) ∪ E has an answer set which satisfies G,
2. (P \ F) ∪ E is consistent,
3. E and F are sets of ground literals such that E ⊆ H\P
and F ⊆ H ∩ P.
When (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program ⟨P, H⟩ satisfying G (with respect to (E, F)). Note that if P has a consistent answer set S satisfying G, S is also a belief set of ⟨P, H⟩ satisfying G with respect to (E, F) = (∅, ∅). Extended abduction introduces/removes hypotheses to/from a program to explain an observation. Note that normal abduction (as in [9]) considers only introducing hypotheses to explain an observation. An explanation (E, F) of an observation G is called minimal if for any explanation (E′, F′) of G, E′ ⊆ E and F′ ⊆ F imply E′ = E and F′ = F.
Example 2.1. Consider the abductive program ⟨P, H⟩:
P : flies(x) ← bird(x), not ab(x),
ab(x) ← broken-wing(x),
bird(tweety) ←, bird(opus) ←,
broken-wing(tweety) ← .
H : broken-wing(x).
The observation G = flies(tweety) has the minimal explanation (E, F) = (∅, {broken-wing(tweety)}).
¹ This defines credulous explanations [15]. Skeptical explanations are used in [8].
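To make the example concrete, the following sketch (our own illustration, not part of the paper) encodes the program in the input language of an answer set solver and checks the observation through clingo's Python API; the predicate names and the clingo dependency are assumptions.

```python
import clingo  # assumes the clingo Python package is installed

P = """
flies(X) :- bird(X), not ab(X).
ab(X) :- broken_wing(X).
bird(tweety). bird(opus).
broken_wing(tweety).
"""

def satisfiable(program: str) -> bool:
    """True iff the program has an answer set."""
    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    return bool(ctl.solve().satisfiable)

# With broken_wing(tweety) present, no answer set contains flies(tweety),
# so normal abduction (adding hypotheses) cannot explain the observation:
print(satisfiable(P + ":- not flies(tweety)."))  # False
# Removing the abducible broken_wing(tweety), as extended abduction does
# with F = {broken-wing(tweety)}, makes the observation hold:
print(satisfiable(P.replace("broken_wing(tweety).", "") + ":- not flies(tweety)."))  # True
```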
3. NEGOTIATION
3.1 Conditional Proposals by Abduction
We suppose an agent who has a knowledge base
represented by an abductive program ⟨P, H⟩. A program P
consists of two types of knowledge, belief B and desire D,
where B represents objective knowledge of an agent, while
D represents subjective knowledge in general. We define
P = B ∪ D, but do not distinguish B and D if such
distinction is not important in the context. In contrast, abducibles
H are used for representing permissible conditions to make
a compromise in the process of negotiation.
Definition 3.1. A proposal G is a conjunction of literals
and NAF-literals:
L1, . . . , Lm, not Lm+1, . . . , not Ln
where every variable in G is existentially quantified at the
front and range-restricted. In particular, G is called a
critique if G = accept or G = reject where accept and reject
are the reserved propositions. A counter-proposal is a
proposal made in response to a proposal.
Definition 3.2. A proposal G is accepted in an abductive program ⟨P, H⟩ if P has an answer set satisfying G.
When a proposal is not accepted, abduction is used for
seeking conditions to make it acceptable.
Definition 3.3. Let ⟨P, H⟩ be an abductive program and G a proposal. If (E, F) is a minimal explanation of Gθ for some substitution θ in ⟨P, H⟩, the conjunction
G′ : Gθ, E, not F
is called a conditional proposal (for G), where E, not F represents the conjunction A1, . . . , Ak, not Ak+1, . . . , not Al for E = { A1, . . . , Ak } and F = { Ak+1, . . . , Al }.
Proposition 3.1. Let ⟨P, H⟩ be an abductive program and G a proposal. If G′ is a conditional proposal, there is a belief set S of ⟨P, H⟩ satisfying G′.
Proof. When G′ = Gθ, E, not F, (P \ F) ∪ E has a consistent answer set S satisfying Gθ and E ∩ F = ∅. In this case, S satisfies Gθ, E, not F.
A conditional proposal G′ provides a minimal requirement for accepting the proposal G. If Gθ has multiple minimal explanations, several conditional proposals exist accordingly. When (E, F) ≠ (∅, ∅), a conditional proposal is used as a new proposal made in response to the proposal G.
Example 3.1. An agent seeks a position of a research assistant at the computer department of a university with the condition that the salary is at least 50,000 USD per year. The agent makes his/her request as the proposal:²
G = assist(compt dept), salary(x), x ≥ 50,000.
The university has the abductive program ⟨P, H⟩:
P : salary(40,000) ← assist(compt dept), not has PhD,
salary(60,000) ← assist(compt dept), has PhD,
salary(50,000) ← assist(math dept),
salary(55,000) ← system admin(compt dept),
employee(x) ← assist(x),
employee(x) ← system admin(x),
assist(compt dept) ; assist(math dept) ; system admin(compt dept) ←,
H : has PhD,
where available positions are represented by disjunction. According to P, the base salary of a research assistant at the computer department is 40,000 USD, but if he/she has PhD, it is 60,000 USD. In this case, (E, F) = ({has PhD}, ∅) becomes the minimal explanation of Gθ = assist(compt dept), salary(60,000) with θ = { x/60,000 }. Then, the conditional proposal made by the university becomes
assist(compt dept), salary(60,000), has PhD.
² For notational convenience, we often include mathematical (in)equations in proposals/programs. They are written by literals, for instance, x ≥ y by geq(x, y) with a suitable definition of the predicate geq.
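This abduction step can be reproduced with a solver. The sketch below is our own illustration: the predicate names, the choice-rule encoding of the abducible, and the clingo dependency are assumptions, and minimising abduced atoms only approximates minimal explanations.

```python
import clingo

PROGRAM = """
{ has_phd }.                     % the abducible H, encoded as a guess
assist(compt_dept) ; assist(math_dept) ; system_admin(compt_dept).
salary(40000) :- assist(compt_dept), not has_phd.
salary(60000) :- assist(compt_dept), has_phd.
salary(50000) :- assist(math_dept).
salary(55000) :- system_admin(compt_dept).
employee(X) :- assist(X).
employee(X) :- system_admin(X).
% the proposal G = assist(compt_dept), salary(x), x >= 50000
goal :- assist(compt_dept), salary(S), S >= 50000.
:- not goal.
#minimize { 1 : has_phd }.       % prefer explanations with fewer hypotheses
"""

ctl = clingo.Control(["--opt-mode=optN"])
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))
# Prints a model containing assist(compt_dept), has_phd, salary(60000):
# the hypothesis E = {has PhD} of the conditional proposal above.
```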
3.2 Neighborhood Proposals by Relaxation
When a proposal is unacceptable, an agent tries to
construct a new counter-proposal by weakening constraints in
the initial proposal. We use techniques of relaxation for
this purpose. Relaxation is used as a technique of
cooperative query answering in databases [4, 6]. When an original
query fails in a database, relaxation expands the scope of
the query by relaxing the constraints in the query. This
allows the database to return neighborhood answers which
are related to the original query. We use the technique for
producing proposals in the process of negotiation.
Definition 3.4. Let ⟨P, H⟩ be an abductive program and G a proposal. Then, G is relaxed to G′ in the following three ways:
Anti-instantiation: Construct G′ such that G′θ = G for some substitution θ.
Dropping conditions: Construct G′ such that G′ ⊂ G.
Goal replacement: If G is a conjunction G1, G2, where G1 and G2 are conjunctions, and there is a rule L ← G′1 in P such that G′1θ = G1 for some substitution θ, then build G′ as Lθ, G2. Here, Lθ is called a replaced literal.
In each case, every variable in G′ is existentially quantified at the front and range-restricted.
Anti-instantiation replaces constants (or terms) with fresh variables. Dropping conditions eliminates some conditions in a proposal. Goal replacement replaces the condition G1 in G with a literal Lθ in the presence of a rule L ← G′1 in P under the condition G′1θ = G1. All these operations generalize proposals in different ways. Each G′ obtained by these operations is called a relaxation of G. It is worth noting that these operations are also used in the context of inductive generalization [12]. The relaxed proposal can produce new offers which are neighbors of the original proposal.
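As a toy illustration of the three operations (ours, not the paper's; the tuple representation of literals and the purely syntactic, already-instantiated matching are simplifying assumptions), proposals can be manipulated as lists of (predicate, args) pairs:

```python
# A proposal is a list of literals; a literal is (predicate, args-tuple).
# geq("x", "50000") encodes x >= 50000, following the paper's footnote.
G = [("assist", ("compt_dept",)), ("salary", ("x",)), ("geq", ("x", "50000"))]

def anti_instantiate(goal, i, j, var):
    """Replace the j-th argument of the i-th literal by a fresh variable."""
    pred, args = goal[i]
    args = args[:j] + (var,) + args[j + 1:]
    return goal[:i] + [(pred, args)] + goal[i + 1:]

def drop_condition(goal, i):
    """Remove the i-th literal from the proposal."""
    return goal[:i] + goal[i + 1:]

def goal_replace(goal, i, head, body):
    """Replace the i-th literal by the head of a rule head <- body
    whose (already instantiated, single-literal) body matches it."""
    assert goal[i] == body
    return goal[:i] + [head] + goal[i + 1:]

# The relaxations used in Example 3.2 below:
print(anti_instantiate(G, 0, 0, "w"))   # generalise compt_dept to w
print(drop_condition(G, 2))             # drop the condition x >= 50000
print(goal_replace(G, 0, ("employee", ("compt_dept",)),
                         ("assist", ("compt_dept",))))
```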
Definition 3.5. Let ⟨P, H⟩ be an abductive program and G a proposal.
1. Let G′ be a proposal obtained by anti-instantiation. If P has an answer set S which satisfies G′θ for some substitution θ and G′θ ≠ G, G′θ is called a neighborhood proposal by anti-instantiation.
2. Let G′ be a proposal obtained by dropping conditions. If P has an answer set S which satisfies G′θ for some substitution θ, G′θ is called a neighborhood proposal by dropping conditions.
3. Let G′ be a proposal obtained by goal replacement. For a replaced literal L ∈ G′ and a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G′′ = (G′ \ {L}) ∪ Bσ. If P has an answer set S which satisfies G′′θ for some substitution θ, G′′θ is called a neighborhood proposal by goal replacement.
Example 3.2. (cont. Example 3.1) Given the proposal
G = assist(compt dept), salary(x), x ≥ 50,000,
• G′1 = assist(w), salary(x), x ≥ 50,000 is produced by substituting compt dept with a variable w. As
G′1θ1 = assist(math dept), salary(50,000)
with θ1 = { w/math dept, x/50,000 } is satisfied by an answer set of P, G′1θ1 becomes a neighborhood proposal by anti-instantiation.
• G′2 = assist(compt dept), salary(x) is produced by dropping the salary condition x ≥ 50,000. As
G′2θ2 = assist(compt dept), salary(40,000)
with θ2 = { x/40,000 } is satisfied by an answer set of P, G′2θ2 becomes a neighborhood proposal by dropping conditions.
• G′3 = employee(compt dept), salary(x), x ≥ 50,000 is produced by replacing assist(compt dept) with employee(compt dept) using the rule employee(x) ← assist(x) in P. By G′3 and the rule employee(x) ← system admin(x) in P, G′′3 = system admin(compt dept), salary(x), x ≥ 50,000 is produced. As
G′′3θ3 = system admin(compt dept), salary(55,000)
with θ3 = { x/55,000 } is satisfied by an answer set of P, G′′3θ3 becomes a neighborhood proposal by goal replacement.
Finally, extended abduction and relaxation are combined to
produce conditional neighborhood proposals.
Definition 3.6. Let ⟨P, H⟩ be an abductive program and G a proposal.
1. Let G′ be a proposal obtained by either anti-instantiation or dropping conditions. If (E, F) is a minimal explanation of G′θ (≠ G) for some substitution θ, the conjunction G′θ, E, not F is called a conditional neighborhood proposal by anti-instantiation/dropping conditions.
2. Let G′ be a proposal obtained by goal replacement. Suppose G′′ as in Definition 3.5(3). If (E, F) is a minimal explanation of G′′θ for some substitution θ, the conjunction G′′θ, E, not F is called a conditional neighborhood proposal by goal replacement.
A conditional neighborhood proposal reduces to a
neighborhood proposal when (E, F) = (∅, ∅).
3.3 Negotiation Protocol
A negotiation protocol defines how to exchange proposals
in the process of negotiation. This section presents a
negotiation protocol in our framework. We suppose one-to-one
negotiation between two agents who have a common
ontology and the same language for successful communication.
Definition 3.7. A proposal L1, . . . , Lm, not Lm+1, . . . , not Ln violates an integrity constraint ← body+(r), not body−(r) if for any substitution θ, there is a substitution σ such that body+(r)σ ⊆ { L1θ, . . . , Lmθ }, body−(r)σ ∩ { L1θ, . . . , Lmθ } = ∅, and body−(r)σ ⊆ { Lm+1θ, . . . , Lnθ }.
Integrity constraints are conditions which an agent should
satisfy, so that they are used to explain why an agent does
not accept a proposal.
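For ground proposals and constraints, the check of Definition 3.7 reduces to simple set inclusion. A minimal sketch (ours; it ignores substitutions and uses illustrative atom names):

```python
def violates(pos, naf, ic_pos, ic_neg):
    """Ground case of Definition 3.7: pos/naf are the literals occurring
    positively / under NAF in the proposal; ic_pos/ic_neg are the literals
    in body+(r) / body-(r) of the integrity constraint r."""
    return ic_pos <= pos and not (ic_neg & pos) and ic_neg <= naf

# The buyer's constraint "<- pay_cash" (rule (15) of Example 3.3 below)
# against the seller's conditional proposal of the first round:
print(violates({"pc", "dvd_rw", "price_1170", "pay_cash"}, {"add_point"},
               {"pay_cash"}, set()))   # True: the proposal violates it
```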
A negotiation proceeds in a series of rounds. Each i-th round (i ≥ 1) consists of a proposal G^i_1 made by one agent Ag1 and another proposal G^i_2 made by the other agent Ag2.
Definition 3.8. Let ⟨P1, H1⟩ be an abductive program of an agent Ag1 and G^i_2 a proposal made by Ag2 at the i-th round. A critique set of Ag1 (at the i-th round) is a set
CS^i_1(P1, G^j_2) = CS^{i−1}_1(P1, G^{j−1}_2) ∪ { r | r is an integrity constraint in P1 and G^j_2 violates r }
where j = i − 1 or i, and CS^0_1(P1, G^0_2) = CS^1_1(P1, G^0_2) = ∅.
A critique set of an agent Ag1 accumulates integrity constraints which are violated by proposals made by another agent Ag2. CS^i_2(P2, G^j_1) is defined in the same manner.
Definition 3.9. Let ⟨Pk, Hk⟩ be an abductive program of an agent Agk and G^j a proposal, which is not a critique, made by any agent at the j(≤ i)-th round. A negotiation set of Agk (at the i-th round) is a triple NS^i_k = (S^i_c, S^i_n, S^i_cn), where S^i_c is the set of conditional proposals, S^i_n is the set of neighborhood proposals, and S^i_cn is the set of conditional neighborhood proposals, produced by G^j and ⟨Pk, Hk⟩.
A negotiation set represents the space of possible proposals made by an agent. S^i_x (x ∈ {c, n, cn}) accumulates proposals produced by G^j (1 ≤ j ≤ i) according to Definitions 3.3, 3.5, and 3.6. Note that an agent can construct counter-proposals by modifying its own previous proposals or another agent's proposals. An agent Agk accumulates proposals that are made by Agk but are rejected by another agent, in the failed proposal set FP^i_k (at the i-th round), where FP^0_k = ∅.
Suppose two agents Ag1 and Ag2 who have abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively. Given a proposal G^1_1 which is satisfied by an answer set of P1, a negotiation starts. In response to the proposal G^i_1 made by Ag1 at the i-th round, Ag2 behaves as follows.
at the i-th round, Ag2 behaves as follows.
1. If Gi
1 = accept, an agreement is reached and
negotiation ends in success.
2. Else if Gi
1 = reject, put FP i
2 = FPi−1
2 ∪{Gi−1
2 } where
{G0
2} = ∅. Proceed to the step 4(b).
3. Else if P2 has an answer set satisfying Gi
1, Ag2 returns
Gi
2 = accept to Ag1. Negotiation ends in success.
4. Otherwise, Ag2 behaves as follows. Put FP i
2 = FPi−1
2 .
(a) If Gi
1 violates an integrity constraint in P2, return
the critique Gi
2 = reject to Ag1, together with the
critique set CSi
2(P2, Gi
1).
(b) Otherwise, construct NSi
2 as follows.
(i) Produce Si
c. Let μ(Si
c) = { p | p ∈ Si
c \ FPi
2 and
p satisfies the constraints in CSi
1(P1, Gi−1
2 )}.
If μ(Si
c) = ∅, select one from μ(Si
c) and propose
it as Gi
2 to Ag1; otherwise, go to (ii).
(ii) Produce Si
n. If μ(Si
n) = ∅, select one from μ(Si
n)
and propose it as Gi
2 to Ag1; otherwise, go to (iii).
(iii) Produce Si
cn. If μ(Si
cn) = ∅, select one from
μ(Si
cn) and propose it as Gi
2 to Ag1; otherwise,
negotiation ends in failure. This means that Ag2
can make no counter-proposal or every
counterproposal made by Ag2 is rejected by Ag1.
In the step 4(a), Ag2 rejects the proposal G^i_1 and returns the reason of rejection as a critique set. This helps Ag1 in preparing a next counter-proposal. In the step 4(b), Ag2 constructs a new proposal. In its construction, Ag2 should take care of the critique set CS^i_1(P1, G^{i−1}_2), which represents integrity constraints, if any, accumulated in previous rounds, that Ag1 must satisfy. Also, FP^i_2 is used for removing proposals which have been rejected. Construction of S^i_x (x ∈ {c, n, cn}) in NS^i_2 is incrementally done by adding new counter-proposals produced by G^i_1 or G^{i−1}_2 to S^{i−1}_x. For instance, S^i_n in NS^i_2 is computed as
S^i_n = S^{i−1}_n ∪ { p | p is a neighborhood proposal made by G^i_1 } ∪ { p | p is a neighborhood proposal made by G^{i−1}_2 },
where S^0_n = ∅. That is, S^i_n is constructed from S^{i−1}_n by adding new proposals which are obtained by modifying the proposal G^i_1 made by Ag1 at the i-th round or modifying the proposal G^{i−1}_2 made by Ag2 at the (i − 1)-th round. S^i_c and S^i_cn are obtained as well.
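Putting the steps together, the responding agent's behaviour can be summarised by the following schematic sketch (ours; the helper methods on `agent` are hypothetical placeholders for the answer-set, critique-set, and negotiation-set machinery defined above):

```python
def respond(agent, g_in, i):
    """Ag2's reaction to Ag1's proposal g_in at round i (steps 1-4)."""
    if g_in == "accept":                                  # step 1
        return "success"
    if g_in == "reject":                                  # step 2
        agent.failed.add(agent.last_proposal)
        return next_counter_proposal(agent, i)
    if agent.has_answer_set_satisfying(g_in):             # step 3
        return "accept"
    violated = agent.violated_constraints(g_in)
    if violated:                                          # step 4(a)
        agent.critique_set |= violated
        return "reject"
    return next_counter_proposal(agent, i)                # step 4(b)

def next_counter_proposal(agent, i):
    # Preference order: conditional, neighborhood, conditional neighborhood.
    for s in (agent.S_c(i), agent.S_n(i), agent.S_cn(i)):
        mu = [p for p in s - agent.failed                 # the function mu
              if agent.satisfies_opponent_critiques(p)]
        if mu:
            return mu[0]
    return "failure"
```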
In the above protocol, an agent produces S^i_c first, then S^i_n, and finally S^i_cn. This strategy seeks conditions which satisfy the given proposal, prior to neighborhood proposals which change the original one. Another strategy, which prefers neighborhood proposals to conditional ones, is also considered. Conditional neighborhood proposals are to be considered in the last place, since they differ from the original one to the maximal extent. The above protocol produces the candidate proposals in S^i_x for each x ∈ {c, n, cn} at once. We can consider a variant of the protocol in which each proposal in S^i_x is constructed one by one (see Example 3.3).
The above protocol is repeatedly applied to each one of
the two negotiating agents until a negotiation ends in
success/failure. Formally, the above negotiation protocol has
the following properties.
Theorem 3.2. Let Ag1 and Ag2 be two agents having abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively.
1. If ⟨P1, H1⟩ and ⟨P2, H2⟩ are function-free (i.e., both Pi and Hi contain no function symbol), any negotiation will terminate.
2. If a negotiation terminates with agreement on a proposal G, both ⟨P1, H1⟩ and ⟨P2, H2⟩ have belief sets satisfying G.
Proof. 1. When an abductive program is function-free, abducibles and negotiation sets are both finite. Moreover, if a proposal is once rejected, it is not proposed again by the function μ. Thus, negotiation will terminate in finite steps.
2. When a proposal G is made by Ag1, ⟨P1, H1⟩ has a belief set satisfying G. If the agent Ag2 accepts the proposal G, it is satisfied by an answer set of P2 which is also a belief set of ⟨P2, H2⟩.
Example 3.3. Suppose a buying-selling situation in the
introduction. A seller agent has the abductive program ⟨Ps, Hs⟩ in which Ps consists of belief Bs and desire Ds:
Bs : pc(b1, 1G, 512M, 80G) ; pc(b2, 1G, 512M, 80G) ←, (1)
dvd-rw ; cd-rw ←, (2)
Ds : normal price(1300) ←
pc(b1, 1G, 512M, 80G), dvd-rw, (3)
normal price(1200) ←
pc(b1, 1G, 512M, 80G), cd-rw, (4)
normal price(1200) ←
pc(b2, 1G, 512M, 80G), dvd-rw, (5)
price(x) ← normal price(x), add point, (6)
price(x ∗ 0.9) ←
normal price(x), pay cash, not add point, (7)
add point ←, (8)
Hs : add point, pay cash.
Here, (1) and (2) represent selection of products. The atom
pc(b1, 1G, 512M, 80G) represents that the seller agent has
a PC of the brand b1 such that CPU is 1GHz, memory is
512MB, and HDD is 80GB. Prices of products are
represented as desire of the seller. The rules (3) - (5) are normal
prices of products. A normal price is a selling price on the
condition that service points are added (6). On the other
hand, a discount price is applied if the paying method is cash
and no service point is added (7). The fact (8) represents
the addition of service points. This service would be
withdrawn in case of discount prices, so add point is specified as
an abducible.
A buyer agent has the abductive program ⟨Pb, Hb⟩ in which Pb consists of belief Bb and desire Db:
Bb : drive ← dvd-rw, (9)
drive ← cd-rw, (10)
price(x) ←, (11)
Db : pc(b1, 1G, 512M, 80G) ←, (12)
dvd-rw ←, (13)
cd-rw ← not dvd-rw, (14)
← pay cash, (15)
← price(x), x > 1200, (16)
Hb : dvd-rw.
Rules (12) - (16) are the buyer's desire. Among them, (15)
and (16) impose constraints for buying a PC. A DVD-RW
is specified as an abducible which is subject to concession.
(1st round) First, the following proposal is given by the buyer agent:
G^1_b : pc(b1, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.
As Ps has no answer set which satisfies G^1_b, the seller agent cannot accept the proposal. The seller takes an action of making a counter-proposal and performs abduction. As a result, the seller finds the minimal explanation (E, F) = ({ pay cash }, { add point }) which explains G^1_bθ1 with θ1 = { x/1170 }. The seller constructs the conditional proposal:
G^1_s : pc(b1, 1G, 512M, 80G), dvd-rw, price(1170), pay cash, not add point
and offers it to the buyer.
(2nd round) The buyer does not accept G^1_s because he/she cannot pay it by cash (15). The buyer then returns the critique G^2_b = reject to the seller, together with the critique set CS^2_b(Pb, G^1_s) = {(15)}. In response to this, the seller tries to make another proposal which satisfies the constraint in this critique set. As G^1_s is stored in FP^2_s and no other conditional proposal satisfying the buyer's requirement exists, the seller produces neighborhood proposals. He/she relaxes G^1_b by dropping x ≤ 1200 in the condition, and produces
pc(b1, 1G, 512M, 80G), dvd-rw, price(x).
As Ps has an answer set which satisfies
G^2_s : pc(b1, 1G, 512M, 80G), dvd-rw, price(1300),
the seller offers G^2_s as a new counter-proposal.
(3rd round) The buyer does not accept G^2_s because he/she cannot pay more than 1200 USD (16). The buyer again returns the critique G^3_b = reject to the seller, together with the critique set CS^3_b(Pb, G^2_s) = CS^2_b(Pb, G^1_s) ∪ {(16)}. The seller then considers another proposal by replacing b1 with a variable w; G^1_b now becomes
pc(w, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.
As Ps has an answer set which satisfies
G^3_s : pc(b2, 1G, 512M, 80G), dvd-rw, price(1200),
the seller offers G^3_s as a new counter-proposal.
(4th round) The buyer does not accept G^3_s because a PC of the brand b2 is out of his/her interest and Pb has no answer set satisfying G^3_s. Then, the buyer makes a concession by changing his/her original goal. The buyer relaxes G^1_b by goal replacement using the rule (9) in Pb, and produces
pc(b1, 1G, 512M, 80G), drive, price(x), x ≤ 1200.
Using (10), the following proposal is produced:
pc(b1, 1G, 512M, 80G), cd-rw, price(x), x ≤ 1200.
As Pb \ { dvd-rw } has a consistent answer set satisfying the above proposal, the buyer proposes the conditional neighborhood proposal
G^4_b : pc(b1, 1G, 512M, 80G), cd-rw, not dvd-rw, price(x), x ≤ 1200
to the seller agent. Since Ps also has an answer set satisfying G^4_b, the seller accepts it and sends the message G^4_s = accept to the buyer. Thus, the negotiation ends in success.
4. COMPUTATION
In this section, we provide methods of computing
proposals in terms of answer sets of programs. We first introduce
some definitions from [15].
Definition 4.1. Given an abductive program ⟨P, H⟩, the set UR of update rules is defined as:
UR = { ‾L ← not L, L ← not ‾L | L ∈ H } ∪ { +L ← L | L ∈ H \ P } ∪ { −L ← not L | L ∈ H ∩ P },
where ‾L, +L, and −L are new atoms uniquely associated with every L ∈ H. The atoms +L and −L are called update atoms.
By the definition, the atom ‾L becomes true iff L is not true. The pair of rules ‾L ← not L and L ← not ‾L specify the situation that an abducible L is true or not. When p(x) ∈ H and p(a) ∈ P but p(t) ∉ P for t ≠ a, the rule +L ← L precisely becomes +p(t) ← p(t) for any t ≠ a. In this case, the rule is shortly written as +p(x) ← p(x), x ≠ a. Generally, the rule becomes +p(x) ← p(x), x ≠ t1, . . . , x ≠ tn for n such instances. The rule +L ← L derives the atom +L if an abducible L which is not in P is to be true. In contrast, the rule −L ← not L derives the atom −L if an abducible L which is in P is not to be true. Thus, update atoms represent the change of truth values of abducibles in a program. That is, +L means the introduction of L, while −L means the deletion of L. When an abducible L contains variables, the associated update atom +L or −L is supposed to have exactly the same variables. In this case, an update atom is semantically identified with its ground instances. The set of all update atoms associated with the abducibles in H is denoted by UH, and UH = UH+ ∪ UH− where UH+ (resp. UH−) is the set of update atoms of the form +L (resp. −L).
Definition 4.2. Given an abductive program ⟨P, H⟩, its update program UP is defined as the program
UP = (P \ H) ∪ UR.
An answer set S of UP is called U-minimal if there is no answer set T of UP such that T ∩ UH ⊂ S ∩ UH.
By the definition, U-minimal answer sets exist whenever
UP has answer sets. Update programs are used for
computing (minimal) explanations of an observation. Given an
observation G as a conjunction of literals and NAF-literals
possibly containing variables, we introduce a new ground
literal O together with the rule O ← G. In this case, O
has an explanation (E, F) iff G has the same explanation.
With this replacement, an observation is assumed to be a
ground literal without loss of generality. In what follows,
E+ = { +L | L ∈ E } and F− = { −L | L ∈ F } for E ⊆ H and F ⊆ H.
Proposition 4.1. ([15]) Let ⟨P, H⟩ be an abductive program, UP its update program, and G a ground literal representing an observation. Then, a pair (E, F) is an explanation of G iff UP ∪ { ← not G } has a consistent answer set S such that E+ = S ∩ UH+ and F− = S ∩ UH−. In particular, (E, F) is a minimal explanation iff S is a U-minimal answer set.
Example 4.1. To explain the observation G = flies(t) in the program P of Example 2.1, first construct the update program UP of P:³
UP : flies(x) ← bird(x), not ab(x),
ab(x) ← broken-wing(x),
bird(t) ←, bird(o) ←,
‾broken-wing(x) ← not broken-wing(x),
broken-wing(x) ← not ‾broken-wing(x),
+broken-wing(x) ← broken-wing(x), x ≠ t,
−broken-wing(t) ← not broken-wing(t).
Next, consider the program UP ∪ { ← not flies(t) }. It has the single U-minimal answer set: S = { bird(t), bird(o), flies(t), flies(o), ‾broken-wing(t), ‾broken-wing(o), −broken-wing(t) }. The unique minimal explanation (E, F) = (∅, {broken-wing(t)}) of G is expressed by the update atom −broken-wing(t) in S ∩ UH−.
³ t represents tweety and o represents opus.
Proposition 4.2. Let ⟨P, H⟩ be an abductive program and G a ground literal representing an observation. If P ∪ { ← not G } has a consistent answer set S, G has the minimal explanation (E, F) = (∅, ∅) and S satisfies G.
Now we provide methods for computing (counter-)proposals.
First, conditional proposals are computed as follows.
input : an abductive program ⟨P, H⟩, a proposal G;
output : a set Sc of proposals.
If G is a ground literal, compute its minimal explanation (E, F) in ⟨P, H⟩ using the update program. Put G, E, not F in Sc. Else if G is a conjunction possibly containing variables, consider the abductive program ⟨P ∪ { O ← G }, H⟩ with a ground literal O. Compute a minimal explanation of O in ⟨P ∪ { O ← G }, H⟩ using its update program. If O has a minimal explanation (E, F) with a substitution θ for variables in G, put Gθ, E, not F in Sc.
Next, neighborhood proposals are computed as follows.
input : an abductive program ⟨P, H⟩, a proposal G;
output : a set Sn of proposals.
% neighborhood proposals by anti-instantiation;
Construct G′ by anti-instantiation. For a ground literal O, if P ∪ { O ← G′ } ∪ { ← not O } has a consistent answer set satisfying G′θ with a substitution θ and G′θ ≠ G, put G′θ in Sn.
% neighborhood proposals by dropping conditions;
Construct G′ by dropping conditions. If G′ is a ground literal and the program P ∪ { ← not G′ } has a consistent answer set, put G′ in Sn. Else if G′ is a conjunction possibly containing variables, do the following. For a ground literal O, if P ∪ { O ← G′ } ∪ { ← not O } has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in Sn.
% neighborhood proposals by goal replacement;
Construct G′ by goal replacement. If G′ is a ground literal and there is a rule H ← B in P such that G′ = Hσ and Bσ ≠ G for some substitution σ, put G′′ = Bσ. If P ∪ { ← not G′′ } has a consistent answer set satisfying G′′θ with a substitution θ, put G′′θ in Sn. Else if G′ is a conjunction possibly containing variables, do the following. For a replaced literal L ∈ G′, if there is a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G′′ = (G′ \ {L}) ∪ Bσ. For a ground literal O, if P ∪ { O ← G′′ } ∪ { ← not O } has a consistent answer set satisfying G′′θ with a substitution θ, put G′′θ in Sn.
Theorem 4.3. The set Sc (resp. Sn) computed above
coincides with the set of conditional proposals (resp.
neighborhood proposals).
Proof. The result for Sc follows from Definition 3.3 and
Proposition 4.1. The result for Sn follows from Definition 3.5
and Proposition 4.2.
Conditional neighborhood proposals are computed by
combining the above two procedures. Those proposals are
computed at each round. Note that the procedure for computing
Sn contains some nondeterministic choices. For instance,
there are generally several candidates of literals to relax in
a proposal. Also, there might be several rules in a program
for the usage of goal replacement. In practice, an agent can
prespecify literals in a proposal for possible relaxation or
rules in a program for the usage of goal replacement.
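A thin driver that filters candidate relaxations through the answer-set test of the above procedure could look as follows (our sketch; `relaxations` is a hypothetical generator of ground relaxed proposals, written as comma-separated ASP body strings, and `o` is the fresh observation literal):

```python
import clingo

def holds(program: str, body: str) -> bool:
    """Check whether P has a consistent answer set satisfying the conjunction."""
    ctl = clingo.Control()
    ctl.add("base", [], program + f"\no :- {body}.\n:- not o.\n")
    ctl.ground([("base", [])])
    return bool(ctl.solve().satisfiable)

def neighborhood_proposals(program: str, relaxations) -> list:
    """Collect the relaxed proposals G' that some answer set of P satisfies."""
    return [g for g in relaxations if holds(program, g)]

# e.g. neighborhood_proposals(university_program,
#          ["assist(math_dept), salary(50000)",
#           "assist(compt_dept), salary(40000)"])
```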
5. RELATED WORK
As there are a number of literature on automated
negotiation, this section focuses on comparison with negotiation
frameworks based on logic and argumentation.
Sadri et al. [14] use abductive logic programming as a
representation language of negotiating agents. Agents negotiate
using common dialogue primitives, called dialogue moves.
Each agent has an abductive logic program in which a
sequence of dialogues are specified by a program, a dialogue
protocol is specified as constraints, and dialogue moves are
specified as abducibles. The behavior of agents is regulated
by an observe-think-act cycle. Once a dialogue move is
uttered by an agent, another agent that observed the utterance
thinks and acts using a proof procedure. Their approach
and ours both employ abductive logic programming as a
platform of agent reasoning, but the use of it is quite
different. First, they use abducibles to specify dialogue primitives
of the form tell(utterer, receiver, subject, identifier, time),
while we use abducibles to specify arbitrary permissible
hypotheses to construct conditional proposals. Second, a
program pre-specifies a plan to carry out in order to achieve a
goal, together with available/missing resources in the
context of resource-exchanging problems. This is in contrast
with our method in which possible counter-proposals are
newly constructed in response to a proposal made by an
agent. Third, they specify a negotiation policy inside a
program (as integrity constraints), while we give a protocol
independent of individual agents. They provide an
operational model that completely specifies the behavior of agents
in terms of agent cycle. We do not provide such a complete
specification of the behavior of agents. Our primary interest
is to mechanize construction of proposals.
Bracciali and Torroni [2] formulate abductive agents that
have knowledge in abductive logic programs. To explain
an observation, two agents communicate by exchanging
integrity constraints. In the process of communication, an
agent can revise its own integrity constraints according to
the information provided by the other agent. A set IC′ of integrity constraints relaxes a set IC (or IC tightens IC′) if any observation that can be proved with respect to IC can also be proved with respect to IC′. For instance, IC′ : ← a, b, c relaxes IC : ← a, b. Thus, they use relaxation
for weakening the constraints in an abductive logic program.
In contrast, we use relaxation for weakening proposals and
three different relaxation methods, anti-instantiation,
dropping conditions, and goal replacement, are considered. Their
goal is to explain an observation by revising integrity
constraints of an agent through communication, while we use
integrity constraints for communication to explain critiques
and help other agents in making counter-proposals.
Meyer et al. [11] introduce a logical framework for
negotiating agents. They introduce two different modes of
negotiation: concession and adaptation. They provide rational
postulates to characterize negotiated outcomes between two
agents, and describe methods for constructing outcomes.
They provide logical conditions for negotiated outcomes to
satisfy, but they do not describe a process of negotiation nor
negotiation protocols. Moreover, they represent agents by
classical propositional theories, which is different from our
abductive logic programming framework.
Foo et al. [5] model one-to-one negotiation as a one-time
encounter between two extended logic programs. An agent
offers an answer set of its program, and their mutual deal is
regarded as a trade on their answer sets. Starting from the
initial agreement set S∩T for an answer set S of an agent and
an answer set T of another agent, each agent extends this
set to reflect its own demand while keeping consistency with
demand of the other agent. Their algorithm returns new
programs having answer sets which are consistent with each
other and keep the agreement set. The work is extended to
repeated encounters in [3]. In their framework, two agents
exchange answer sets to produce a common belief set, which
is different from our framework of exchanging proposals.
There are a number of proposals for negotiation based
on argumentation. An advantage of argumentation-based
negotiation is that it constructs a proposal with arguments
supporting the proposal [1]. The existence of arguments is
useful to convince other agents of reasons why an agent offers
(counter-)proposals or returns critiques. Parsons et al. [13]
develop a logic of argumentation-based negotiation among
BDI agents. In one-to-one negotiation, an agent A generates
a proposal together with its arguments, and passes it to
another agent B. The proposal is evaluated by B which
attempts to build arguments against it. If it conflicts with
B"s interest, B informs A of its objection by sending back
its attacking argument. In response to this, A tries to find
an alternative way of achieving its original objective, or a
way of persuading B to drop its objection. If either type of
argument can be found, A will submit it to B. If B finds no
reason to reject the new proposal, it will be accepted and
the negotiation ends in success. Otherwise, the process is
iterated. In this negotiation processes, the agent A never
changes its original objective, so that negotiation ends in
failure if A fails to find an alternative way of achieving the
original objective. In our framework, when a proposal is
rejected by another agent, an agent can weaken or change
its objective by abduction and relaxation. Our framework
does not have a mechanism of argumentation, but reasons
for critiques can be informed by responding critique sets.
Kakas and Moraitis [10] propose a negotiation protocol
which integrates abduction within an argumentation
framework. A proposal contains an offer corresponding to the
negotiation object, together with supporting information
representing conditions under which this offer is made.
Supporting information is computed by abduction and is used
for constructing conditional arguments during the process
of negotiation. In their negotiation protocol, when an agent
cannot satisfy its own goal, the agent considers the other
agent"s goal and searches for conditions under which the
goal is acceptable. Our present approach differs from theirs
in the following points. First, they use abduction to seek
conditions to support arguments, while we use abduction
to seek conditions for proposals to accept. Second, in their
negotiation protocol, counter-proposals are chosen among
candidates based on preference knowledge of an agent at
meta-level, which represents policy under which an agent
uses its object-level decision rules according to situations.
In our framework, counter-proposals are newly constructed
using abduction and relaxation. The method of
construction is independent of particular negotiation protocols. As
[2, 10, 14], abduction or abductive logic programming used
in negotiation is mostly based on normal abduction. In
contrast, our approach is based on extended abduction which
can not only introduce hypotheses but remove them from a
program. This is another important difference.
Relaxation and neighborhood query answering are devised
to make databases cooperative with their users [4, 6]. In this
sense, those techniques have the spirit similar to cooperative
problem solving in multi-agent systems. As far as the
authors know, however, there is no study which applies those
techniques to agent negotiation.
6. CONCLUSION
In this paper we proposed a logical framework for
negotiating agents. To construct proposals in the process of
negotiation, we combined the techniques of extended abduction
and relaxation. It was shown that these two operations serve as general inference rules for producing proposals. We
developed a negotiation protocol between two agents based
on exchange of proposals and critiques, and provided
procedures for computing proposals in abductive logic
programming. This enables us to realize automated negotiation on
top of the existing answer set solvers. The present
framework does not have a mechanism of selecting an optimal
(counter-)proposal among different alternatives. To
compare and evaluate proposals, an agent must have preference
knowledge of candidate proposals. Further elaboration to
maximize the utility of agents is left for future study.
7. REFERENCES
[1] L. Amgoud, S. Parsons, and N. Maudet. Arguments,
dialogue, and negotiation. In: Proc. ECAI-00,
pp. 338-342, IOS Press, 2000.
[2] A. Bracciali and P. Torroni. A new framework for
knowledge revision of abductive agents through their
interaction. In: Proc. CLIMA-IV, Computational Logic
in Multi-Agent Systems, LNAI 3259, pp. 159-177, 2004.
[3] W. Chen, M. Zhang, and N. Foo. Repeated negotiation
of logic programs. In: Proc. 7th Workshop on
Nonmonotonic Reasoning, Action and Change, 2006.
[4] W. W. Chu, Q. Chen, and R.-C. Lee. Cooperative
query answering via type abstraction hierarchy. In:
Cooperating Knowledge Based Systems, S. M. Deen ed.,
pp. 271-290, Springer, 1990.
[5] N. Foo, T. Meyer, Y. Zhang, and D. Zhang.
Negotiating logic programs. In: Proc. 6th Workshop on
Nonmonotonic Reasoning, Action and Change, 2005.
[6] T. Gaasterland, P. Godfrey, and J. Minker. Relaxation
as a platform for cooperative answering. Journal of
Intelligence Information Systems 1(3/4):293-321, 1992.
[7] M. Gelfond and V. Lifschitz. Classical negation in logic
programs and disjunctive databases. New Generation
Computing 9:365-385, 1991.
[8] K. Inoue and C. Sakama. Abductive framework for
nonmonotonic theory change. In: Proc. IJCAI-95,
pp. 204-210, Morgan Kaufmann.
[9] A. C. Kakas, R. A. Kowalski, and F. Toni, The role of
abduction in logic programming. In: Handbook of Logic
in AI and Logic Programming, D. M. Gabbay, et al.
(eds), vol. 5, pp. 235-324, Oxford University Press, 1998.
[10] A. C. Kakas and P. Moraitis. Adaptive agent
negotiation via argumentation. In: Proc. AAMAS-06,
pp. 384-391, ACM Press.
[11] T. Meyer, N. Foo, R. Kwok, and D. Zhang. Logical
foundation of negotiation: outcome, concession and
adaptation. In: Proc. AAAI-04, pp. 293-298, MIT Press.
[12] R. S. Michalski. A theory and methodology of
inductive learning. In: Machine Learning: An Artificial
Intelligence Approach, R. S. Michalski, et al. (eds),
pp. 83-134, Morgan Kaufmann, 1983.
[13] S. Parsons, C. Sierra and N. Jennings. Agents that
reason and negotiate by arguing. Journal of Logic and
Computation, 8(3):261-292, 1998.
[14] F. Sadri, F. Toni, and P. Torroni, An abductive logic
programming architecture for negotiating agents. In:
Proc. 8th European Conf. on Logics in AI, LNAI 2424,
pp. 419-431, Springer, 2002.
[15] C. Sakama and K. Inoue. An abductive framework for
computing knowledge base updates. Theory and Practice
of Logic Programming 3(6):671-715, 2003.
| relaxation;logic program;anti-instantiation;abductive program;abductive framework;dropping condition;specific meta-knowledge;inductive generalization;one-to-one negotiation;minimal explanation;conditional proposal;integrity constraint;negotiation;extend abduction;automated negotiation;multi-agent system;alternative proposal |
train_I-77 | The LOGIC Negotiation Model | Successful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment, (LOGIC). We introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness). The intimacy is a pair of matrices that evaluate both an agent"s contribution to the relationship and its opponent"s contribution each from an information view and from a utilitarian view across the five LOGIC dimensions. The balance is the difference between these matrices. A relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future. The negotiation strategy maintains a set of Options that are in-line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy. | 1. INTRODUCTION
In this paper we propose a new negotiation model to deal
with long term relationships that are founded on successive
negotiation encounters. The model is grounded on results
from business and psychological studies [1, 16, 9], and
acknowledges that negotiation is an information exchange
process as well as a utility exchange process [15, 14]. We
believe that if agents are to succeed in real application domains
they have to reconcile both views: informational and
gametheoretical. Our aim is to model trading scenarios where
agents represent their human principals, and thus we want
their behaviour to be comprehensible by humans and to
respect usual human negotiation procedures, whilst being
consistent with, and somehow extending, game theoretical and
information theoretical results. In this sense, agents are not
just utility maximisers, but aim at building long lasting
relationships with progressing levels of intimacy that determine
what balance in information and resource sharing is
acceptable to them. These two concepts, intimacy and balance are
key in the model, and enable us to understand competitive
and co-operative game theory as two particular theories of
agent relationships (i.e. at different intimacy levels). These
two theories are too specific and distinct to describe how
a (business) relationship might grow because interactions
have some aspects of these two extremes on a continuum in
which, for example, agents reveal increasing amounts of
private information as their intimacy grows. We do not follow the "Co-Opetition" approach [4], where co-operation and competition depend on the issue under negotiation; instead we believe that the willingness to co-operate/compete affects all aspects of the negotiation process. Negotiation strategies
can naturally be seen as procedures that select tactics used
to attain a successful deal and to reach a target intimacy
level. It is common in human settings to use tactics that
compensate for unbalances in one dimension of a
negotiation with unbalances in another dimension. In this sense,
humans aim at a general sense of fairness in an interaction.
In Section 2 we outline the aspects of human negotiation
modelling that we cover in this work. Then, in Section 3
we introduce the negotiation language. Section 4 explains
in outline the architecture and the concepts of intimacy and
balance, and how they influence the negotiation. Section 5
contains a description of the different metrics used in the
agent model including intimacy. Finally, Section 6 outlines
how strategies and tactics use the LOGIC framework,
intimacy and balance.
2. HUMAN NEGOTIATION
Before a negotiation starts human negotiators prepare the
dialogic exchanges that can be made along the five LOGIC
dimensions [7]:
• Legitimacy. What information is relevant to the
negotiation process? What are the persuasive arguments
about the fairness of the options?
1030
978-81-904262-7-5 (RPS) c 2007 IFAAMAS
• Options. What are the possible agreements we can
accept?
• Goals. What are the underlying things we need or care
about? What are our goals?
• Independence. What will we do if the negotiation fails?
What alternatives have we got?
• Commitment. What outstanding commitments do we
have?
Negotiation dialogues, in this context, exchange
dialogical moves, i.e. messages, with the intention of getting
information about the opponent or giving away information
about us along these five dimensions: request for
information, propose options, inform about interests, issue promises,
appeal to standards . . . A key part of any negotiation process
is to build a model of our opponent(s) along these
dimensions. All utterances agents make during a negotiation give
away information about their current LOGIC model, that
is, about their legitimacy, options, goals, independence, and
commitments. Also, several utterances can have a
utilitarian interpretation in the sense that an agent can associate
a preferential gain to them. For instance, an offer may
inform our negotiation opponent about our willingness to sign
a contract in the terms expressed in the offer, and at the
same time the opponent can compute what is its associated
expected utilitarian gain. These two views, information-based and utility-based, are central to the model proposed in this paper.
2.1 Intimacy and Balance in relationships
There is evidence from psychological studies that humans
seek a balance in their negotiation relationships. The
classical view [1] is that people perceive resource allocations as
being distributively fair (i.e. well balanced) if they are
proportional to inputs or contributions (i.e. equitable).
However, more recent studies [16, 17] show that humans follow
a richer set of norms of distributive justice depending on
their intimacy level: equity, equality, and need. Equity
being the allocation proportional to the effort (e.g. the profit
of a company goes to the stock holders proportional to their
investment), equality being the allocation in equal amounts
(e.g. two friends eat the same amount of a cake cooked by
one of them), and need being the allocation proportional to
the need for the resource (e.g. in case of food scarcity, a
mother gives all food to her baby). For instance, if we are in
a purely economic setting (low intimacy) we might request
equity for the Options dimension but could accept equality
in the Goals dimension.
The perception of a relation being in balance (i.e. fair)
depends strongly on the nature of the social relationships
between individuals (i.e. the intimacy level). In purely
economical relationships (e.g., business), equity is perceived as
more fair; in relations where joint action or fostering of social
relationships are the goal (e.g. friends), equality is perceived
as more fair; and in situations where personal development
or personal welfare are the goal (e.g. family), allocations are
usually based on need.
We believe that the perception of balance in dialogues (in
negotiation or otherwise) is grounded on social relationships,
and that every dimension of an interaction between humans
can be correlated to the social closeness, or intimacy,
between the parties involved. According to the previous
studies, the more intimacy across the five LOGIC dimensions the
more the need norm is used, and the less intimacy the more
the equity norm is used. This might be part of our social
evolution. There is ample evidence that when human
societies evolved from a hunter-gatherer structure¹ to a shelter-based one² the probability of survival increased when food was scarce.
In this context, we can clearly see that, for instance,
families exchange not only goods but also information and
knowledge based on need, and that few families would consider
their relationships as being unbalanced, and thus unfair,
when there is a strong asymmetry in the exchanges (a mother
explaining everything to her children, or buying toys, does
not expect reciprocity). In the case of partners there is some
evidence [3] that the allocations of goods and burdens (i.e.
positive and negative utilities) are perceived as fair, or in
balance, based on equity for burdens and equality for goods.
See Table 1 for some examples of desired balances along the
LOGIC dimensions.
The perceived balance in a negotiation dialogue allows
negotiators to infer information about their opponent, about
its LOGIC stance, and to compare their relationships with
all negotiators. For instance, if we perceive that every time
we request information it is provided, and that no significant
questions are returned, or no complaints about not
receiving information are given, then that probably means that
our opponent perceives our social relationship to be very
close. Alternatively, we can detect what issues are causing
a burden to our opponent by observing an imbalance in the
information or utilitarian senses on that issue.
3. COMMUNICATION MODEL
3.1 Ontology
In order to define a language to structure agent dialogues we
need an ontology that includes a (minimum) repertoire of
elements: a set of concepts (e.g. quantity, quality, material)
organised in a is-a hierarchy (e.g. platypus is a mammal,
Australian-dollar is a currency), and a set of relations over
these concepts (e.g. price(beer,AUD)).³
We model
ontologies following an algebraic approach [8] as:
An ontology is a tuple O = (C, R, ≤, σ) where:
1. C is a finite set of concept symbols (including basic
data types);
2. R is a finite set of relation symbols;
3. ≤ is a reflexive, transitive and anti-symmetric relation
on C (a partial order)
4. σ : R → C+
is the function assigning to each relation
symbol its arity
¹ In its purest form, individuals in these societies collect food and consume it when and where it is found. This is a pure equity sharing of the resources, the gain is proportional to the effort.
² In these societies there are family units, around a shelter, that represent the basic food sharing structure. Usually, food is accumulated at the shelter for future use. Then the food intake depends more on the need of the members.
³ Usually, a set of axioms defined over the concepts and relations is also required. We will omit this here.
Element        A new trading partner   my butcher   my boss    my partner   my children
Legitimacy     equity                  equity       equity     equality     need
Options        equity                  equity       equity     mixedᵃ       need
Goals          equity                  need         equity     need         need
Independence   equity                  equity       equality   need         need
Commitment     equity                  equity       equity     mixed        need

ᵃ equity on burden, equality on good

Table 1: Some desired balances (sense of fairness) examples depending on the relationship.
where ≤ is the traditional is-a hierarchy. To simplify the computation of probability distributions we assume that there is a number of disjoint is-a trees covering different ontological spaces (e.g. a tree for types of fabric, a tree for shapes of clothing, and so on). R contains relations between the concepts in the hierarchy; these are needed to define 'objects' (e.g. deals) that are defined as tuples of issues.
The semantic distance between concepts within an
ontology depends on how far away they are in the structure
defined by the ≤ relation. Semantic distance plays a
fundamental role in strategies for information-based agency. How
signed contracts, Commit(·), about objects in a particular
semantic region, and their execution, Done(·), affect our
decision making process about signing future contracts in
nearby semantic regions is crucial to modelling the common
sense that human beings apply in managing trading
relationships. A measure [10] bases the semantic similarity
between two concepts on the path length induced by ≤ (more
distance in the ≤ graph means less semantic similarity), and
the depth of the subsumer concept (common ancestor) in the
shortest path between the two concepts (the deeper in the
hierarchy, the closer the meaning of the concepts). Semantic
similarity is then defined as:
Sim(c, c′) = e^{−κ₁l} · (e^{κ₂h} − e^{−κ₂h}) / (e^{κ₂h} + e^{−κ₂h})

where l is the length (i.e. number of hops) of the shortest path between the concepts, h is the depth of the deepest concept subsuming both concepts, and κ₁ and κ₂ are parameters scaling the contributions of the shortest path length and the depth respectively.
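Note that the second factor is simply tanh(κ₂h). As a concrete illustration, here is a minimal Python sketch of this measure over a toy is-a tree; the tree, the concept names and the parameter values are assumptions made for illustration only, not part of the model.

```python
import math

# Toy is-a hierarchy, child -> parent (an illustrative assumption).
PARENT = {"beer": "drink", "wine": "drink", "drink": "product",
          "cheese": "food", "food": "product"}

def ancestors(c):
    """Path from concept c up to the root of its is-a tree, inclusive."""
    path = [c]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def sim(c1, c2, kappa1=0.5, kappa2=0.5):
    """Sim(c, c') = e^(-kappa1 * l) * tanh(kappa2 * h)."""
    a1, a2 = ancestors(c1), ancestors(c2)
    subsumer = next(c for c in a1 if c in a2)    # deepest common subsumer
    l = a1.index(subsumer) + a2.index(subsumer)  # hops on the shortest path
    h = len(ancestors(subsumer)) - 1             # depth of the subsumer
    return math.exp(-kappa1 * l) * math.tanh(kappa2 * h)

print(sim("beer", "wine"))    # siblings under 'drink': some similarity
print(sim("beer", "cheese"))  # only the root in common: similarity 0
```

With the root as the only common subsumer, h = 0 and the measure vanishes, which matches the intuition that a deeper subsumer means closer meaning.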
3.2 Language
The shape of the language that α uses to represent the
information received and the content of its dialogues depends on
two fundamental notions. First, when agents interact within
an overarching institution they explicitly or implicitly accept
the norms that will constrain their behaviour, and accept
the established sanctions and penalties whenever norms are
violated. Second, the dialogues in which α engages are built
around two fundamental actions: (i) passing information,
and (ii) exchanging proposals and contracts. A contract
δ = (a, b) between agents α and β is a pair where a and b
represent the actions that agents α and β are responsible
for respectively. Contracts signed by agents and
information passed by agents, are similar to norms in the sense that
they oblige agents to behave in a particular way, so as to
satisfy the conditions of the contract, or to make the world
consistent with the information passed. Contracts and information can thus be thought of as normative statements that restrict an agent's behaviour.
Norms, contracts, and information have an obvious
temporal dimension. Thus, an agent has to abide by a norm
while it is inside an institution, a contract has a validity
period, and a piece of information is true only during an
interval in time. The set of norms affecting the behaviour of
an agent defines the context that the agent has to take into
account.
α's communication language has two fundamental primitives: Commit(α, β, ϕ) to represent, in ϕ, the world that α aims at bringing about and that β has the right to verify, complain about or claim compensation for any deviations from, and Done(μ) to represent the event that a certain action μ⁴ has taken place. In this way, norms, contracts, and information chunks will be represented as instances of Commit(·) where α and β can be individual agents or institutions. C is:

μ ::= illoc(α, β, ϕ, t) | μ; μ | Let context In μ End
ϕ ::= term | Done(μ) | Commit(α, β, ϕ) | ϕ ∧ ϕ | ϕ ∨ ϕ | ¬ϕ | ∀v.ϕv | ∃v.ϕv
context ::= ϕ | id = ϕ | prolog clause | context; context
where ϕv is a formula with free variable v, illoc is any appropriate set of illocutionary particles, ';' means sequencing, and context represents either previous agreements, previous illocutions, the ontological working context (that is, a projection of the ontological trees that represent the focus of the conversation), or code that aligns the ontological differences between the speakers needed to interpret an action a. Representing an ontology as a set of predicates in Prolog is simple. The set term contains instances of the ontology concepts and relations.⁵
For example, we can represent the following offer: If you spend a total of more than €100 in my shop during October then I will give you a 10% discount on all goods in November, as:

Offer(α, β, spent(β, α, October, X) ∧ X ≥ €100 →
    ∀y. Done(Inform(ξ, α, pay(β, α, y), November)) →
        Commit(α, β, discount(y, 10%)))

ξ is an institution agent that reports the payment.
⁴ Without loss of generality we will assume that all actions are dialogical.
⁵ We assume the convention that C(c) means that c is an instance of concept C and r(c1, . . . , cn) implicitly determines that ci is an instance of the concept in the i-th position of the relation r.
Figure 1: The LOGIC agent architecture
4. AGENT ARCHITECTURE
A multiagent system {α, β1, . . . , βn, ξ, θ1, . . . , θt} contains an agent α that interacts with other argumentation agents, βi, information providing agents, θj, and an institutional agent, ξ, that represents the institution where we assume the interactions happen [2]. The institutional agent reports promptly and honestly on what actually occurs after an agent signs a contract, or makes some other form of commitment. In Section 4.1 this enables us to measure the difference between an utterance and a subsequent observation. The communication language C introduced in Section 3.2 enables us both to structure the dialogues and to structure the processing of the information gathered by agents. Agents have a probabilistic first-order internal language L used to represent a world model, M^t. A generic information-based architecture is described in detail in [15].
The LOGIC agent architecture is shown in Figure 1. Agent α acts in response to a need that is expressed in terms of the ontology. A need may be exogenous such as a need to trade profitably and may be triggered by another agent offering to trade, or endogenous such as α deciding that it owns more wine than it requires. Needs trigger α's goal/plan proactive reasoning, while other messages are dealt with by α's reactive reasoning.⁶ Each plan prepares for the negotiation by assembling the contents of a 'LOGIC briefcase' that the agent 'carries' into the negotiation⁷. The relationship strategy determines which agent to negotiate with for a given need; it uses risk management analysis to preserve a strategic set of trading relationships for each mission-critical need - this is not detailed here. For each trading relationship this strategy generates a relationship target that is expressed in the LOGIC framework as a desired level of intimacy to be achieved in the long term.

⁶ Each of α's plans and reactions contain constructors for an initial world model M^t. M^t is then maintained from percepts received using update functions that transform percepts into constraints on M^t - for details, see [14, 15].
⁷ Empirical evidence shows that in human negotiation, better outcomes are achieved by skewing the opening Options in favour of the proposer. We are unaware of any empirical investigation of this hypothesis for autonomous agents in real trading scenarios.

Each negotiation consists of a dialogue, Ψ^t, between two agents with agent α contributing utterance μ and the partner β contributing μ′ using the language described in
Section 3.2. Each dialogue, Ψ^t, is evaluated using the LOGIC framework in terms of the value of Ψ^t to both α and β - see Section 5.2. The negotiation strategy then determines the current set of Options {δi}, and then the tactics, guided by the negotiation target, decide which, if any, of these Options to put forward and wrap them in argumentation dialogue - see Section 6. We now describe two of the distributions in M^t that support offer exchange.
P^t(acc(α, β, χ, δ)) estimates the probability that α should accept proposal δ in satisfaction of her need χ, where δ = (a, b) is a pair of commitments, a for α and b for β. α will accept δ if: P^t(acc(α, β, χ, δ)) > c, for level of certainty c. This estimate is compounded from subjective and objective views of acceptability. The subjective estimate takes account of: the extent to which the enactment of δ will satisfy α's need χ, how much δ is 'worth' to α, and the extent to which α believes that she will be in a position to execute her commitment a [14, 15]. Sα(β, a) is a random variable denoting α's estimate of β's subjective valuation of a over some finite, numerical evaluation space. The objective estimate captures whether δ is acceptable on the open market, and variable Uα(b) denotes α's open-market valuation of the enactment of commitment b, again taken over some finite numerical valuation space. We also consider needs: the variable Tα(β, a) denotes α's estimate of the strength of β's motivating need for the enactment of commitment a over a valuation space. Then for δ = (a, b):

P^t(acc(α, β, χ, δ)) = P^t((Tα(β, a) / Tα(α, b))^h × (Sα(α, b) / Sα(β, a))^g × (Uα(b) / Uα(a)) ≥ s)   (1)
where g ∈ [0, 1] is α's greed, h ∈ [0, 1] is α's degree of altruism, and s ≈ 1 is derived from the stance⁸ described in Section 6. The parameters g and h are independent. We can imagine a relationship that begins with g = 1 and h = 0. Then as the agents share increasing amounts of their information about their open market valuations g gradually reduces to 0, and then as they share increasing amounts of information about their needs h increases to 1. The basis for the acceptance criterion has thus developed from equity to equality, and then to need.

⁸ If α chooses to inflate her opening Options then this is achieved in Section 6 by increasing the value of s. If s ≫ 1 then a deal may not be possible. This illustrates the well-known inefficiency of bilateral bargaining established analytically by Myerson and Satterthwaite in 1983.
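To make the acceptance criterion concrete, the following Python sketch evaluates Equation 1 for point estimates of the T, S and U valuations; treating these random variables as scalars, and all the numbers used, are illustrative assumptions.

```python
def accept(T_beta_a, T_alpha_b, S_alpha_b, S_beta_a,
           U_b, U_a, g, h, s=1.0):
    """Acceptance test of Eqn. 1 with point estimates (illustrative).

    g: greed in [0,1], h: altruism in [0,1], s: stance-derived threshold.
    """
    need_ratio = (T_beta_a / T_alpha_b) ** h   # need component
    subj_ratio = (S_alpha_b / S_beta_a) ** g   # subjective (equity) component
    obj_ratio = U_b / U_a                      # open-market (objective) component
    return need_ratio * subj_ratio * obj_ratio >= s

# A new relationship: pure equity (g = 1, h = 0).
print(accept(T_beta_a=0.9, T_alpha_b=0.3, S_alpha_b=1.2, S_beta_a=1.0,
             U_b=1.1, U_a=1.0, g=1.0, h=0.0))
# A mature relationship: need-based acceptance (g = 0, h = 1).
print(accept(T_beta_a=0.9, T_alpha_b=0.3, S_alpha_b=1.2, S_beta_a=1.0,
             U_b=1.1, U_a=1.0, g=0.0, h=1.0))
```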
P^t(acc(β, α, δ)) estimates the probability that β would accept δ, by observing β's responses. For example, if β sends the message Offer(δ1) then α derives the constraint: {P^t(acc(β, α, δ1)) = 1} on the distribution P^t(acc(β, α, δ)), and if this is a counter offer to a former offer of α's, δ0, then: {P^t(acc(β, α, δ0)) = 0}. In the not-atypical special case of multi-issue bargaining where the agents' preferences over the individual issues only are known and are complementary to each other's, maximum entropy reasoning can be applied to estimate the probability that any multi-issue δ will be acceptable to β by enumerating the possible worlds that represent β's limit of acceptability [6].
4.1 Updating the World Model M^t
α's world model consists of probability distributions that represent its uncertainty in the world state. α is interested
in the degree to which an utterance accurately describes
what will subsequently be observed. All observations about
the world are received as utterances from an all-truthful
institution agent ξ. For example, if β communicates the goal
I am hungry and the subsequent negotiation terminates
with β purchasing a book from α (by ξ advising α that a
certain amount of money has been credited to α's account) then α may conclude that the goal that β chose to satisfy was something other than hunger. So, α's world model contains probability distributions that represent its uncertain expectations of what will be observed on the basis of utterances received.
We represent the relationship between utterance, ϕ, and subsequent observation, ϕ′, by P^t(ϕ′|ϕ) ∈ M^t, where ϕ and ϕ′ may be ontological categories in the interest of computational feasibility. For example, if ϕ is I will deliver a bucket of fish to you tomorrow then the distribution P(ϕ′|ϕ) need not be over all possible things that β might do, but could be over ontological categories that summarise β's possible actions.
In the absence of in-coming utterances, the conditional probabilities, P^t(ϕ′|ϕ), should tend to ignorance as represented by a decay limit distribution D(ϕ′|ϕ). α may have background knowledge concerning D(ϕ′|ϕ) as t → ∞, otherwise α may assume that it has maximum entropy whilst being consistent with the data. In general, given a distribution, P^t(Xi), and a decay limit distribution D(Xi), P^t(Xi) decays by:

P^{t+1}(Xi) = Δi(D(Xi), P^t(Xi))   (2)

where Δi is the decay function for the Xi satisfying the property that lim_{t→∞} P^t(Xi) = D(Xi). For example, Δi could be linear: P^{t+1}(Xi) = (1 − νi) × D(Xi) + νi × P^t(Xi), where νi < 1 is the decay rate for the i-th distribution. Either the decay function or the decay limit distribution could also be a function of time: Δ^t_i and D^t(Xi).
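A minimal sketch of the linear decay case of Equation 2, with discrete distributions represented as lists; the distributions and the rate ν used below are illustrative.

```python
def linear_decay(P_t, D, nu=0.9):
    """One step of Eqn. 2 with a linear decay function:
    P^{t+1} = (1 - nu) * D + nu * P^t, drifting toward the limit D."""
    return [(1 - nu) * d + nu * p for p, d in zip(P_t, D)]

P = [0.7, 0.2, 0.1]          # current P^t(X_i)
D = [1 / 3, 1 / 3, 1 / 3]    # maximum-entropy decay limit D(X_i)
for _ in range(5):           # with no utterances, P tends toward D
    P = linear_decay(P, D)
print(P)
```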
Suppose that α receives an utterance μ = illoc(α, β, ϕ, t) from agent β at time t. Suppose that α attaches an epistemic belief R^t(α, β, μ) to μ - this probability takes account of α's level of personal caution. We model the update of P^t(ϕ′|ϕ) in two cases: one for observations given ϕ, the second for observations given φ in the semantic neighbourhood of ϕ.
4.2 Update of P^t(ϕ′|ϕ) given ϕ

First, if ϕk is observed then α may set P^{t+1}(ϕk|ϕ) to some value d where {ϕ1, ϕ2, . . . , ϕm} is the set of all possible observations. We estimate the complete posterior distribution P^{t+1}(ϕ′|ϕ) by applying the principle of minimum relative entropy⁹ as follows. Let p(μ) be the distribution:

p(μ) = arg min_x Σ_j x_j log (x_j / P^t(ϕ′|ϕ)_j)

that satisfies the constraint p(μ)_k = d. Then let q(μ) be the distribution:

q(μ) = R^t(α, β, μ) × p(μ) + (1 − R^t(α, β, μ)) × P^t(ϕ′|ϕ)

and then let:

r(μ) = q(μ) if q(μ) is more interesting than P^t(ϕ′|ϕ), and r(μ) = P^t(ϕ′|ϕ) otherwise.

A general measure of whether q(μ) is more interesting than P^t(ϕ′|ϕ) is: K(q(μ) ‖ D(ϕ′|ϕ)) > K(P^t(ϕ′|ϕ) ‖ D(ϕ′|ϕ)), where K(x ‖ y) = Σ_j x_j ln (x_j / y_j) is the Kullback-Leibler distance between two probability distributions x and y [11].

Finally, incorporating Eqn. 2 we obtain the method for updating a distribution P^t(ϕ′|ϕ) on receipt of a message μ:

P^{t+1}(ϕ′|ϕ) = Δi(D(ϕ′|ϕ), r(μ))   (3)

This procedure deals with integrity decay, and with two probabilities: first, the probability z in the utterance μ, and second the belief R^t(α, β, μ) that α attached to μ.

⁹ Given a probability distribution q, the minimum relative entropy distribution p = (p1, . . . , pI) subject to a set of J linear constraints g = {gj(p) = aj · p − cj = 0}, j = 1, . . . , J (that must include the constraint Σ_i p_i − 1 = 0) is: p = arg min_r Σ_j r_j log (r_j / q_j). This may be calculated by introducing Lagrange multipliers λ: L(p, λ) = Σ_j p_j log (p_j / q_j) + λ · g. Minimising L, {∂L/∂λj = gj(p) = 0}, j = 1, . . . , J is the set of given constraints g, and a solution to ∂L/∂pi = 0, i = 1, . . . , I leads eventually to p. Entropy-based inference is a form of Bayesian inference that is convenient when the data is sparse [5] and encapsulates common-sense reasoning [12].
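For the single constraint p(μ)_k = d (plus normalisation), the minimum relative entropy posterior has a simple closed form: the remaining mass is spread over the other points in proportion to the prior. The sketch below chains p(μ) → q(μ) → r(μ) → Eqn. 3 under that assumption; all the numbers are illustrative.

```python
import math

def kl(x, y):
    """Kullback-Leibler distance K(x || y)."""
    return sum(xi * math.log(xi / yi) for xi, yi in zip(x, y) if xi > 0)

def mre_update(P, D, k, d, R, nu=0.9):
    """Update P^t(phi'|phi) on receipt of mu, following Eqn. 3 (sketch).

    P: prior; D: decay limit; k, d: observed point and its posterior value;
    R: epistemic belief R^t(alpha, beta, mu); nu: linear decay rate.
    """
    # p(mu): minimum relative entropy posterior for the constraint p_k = d;
    # the remaining mass stays proportional to the prior.
    rest = sum(p for i, p in enumerate(P) if i != k)
    p = [d if i == k else (1 - d) * P[i] / rest for i in range(len(P))]
    # q(mu): mix p(mu) with the prior according to the belief in mu.
    q = [R * pi + (1 - R) * Pi for pi, Pi in zip(p, P)]
    # r(mu): keep q(mu) only if it is 'more interesting' than the prior,
    # i.e. further from the decay limit in Kullback-Leibler terms.
    r = q if kl(q, D) > kl(P, D) else P
    # Eqn. 3: apply the (linear) decay operator toward D.
    return [(1 - nu) * di + nu * ri for ri, di in zip(r, D)]

P = [0.5, 0.3, 0.2]          # prior P^t(phi'|phi)
D = [1 / 3, 1 / 3, 1 / 3]    # decay limit distribution
print(mre_update(P, D, k=0, d=0.8, R=0.7))
```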
4.3 Update of P^t(φ′|φ) given ϕ

The sim method: Given as above μ = illoc(α, β, ϕ, t) and the observation ϕk we define the vector t by

ti = P^t(φi|φ) + (1 − |Sim(ϕk, ϕ) − Sim(φi, φ)|) · Sim(ϕk, φ)

with {φ1, φ2, . . . , φp} the set of all possible observations in the context of φ and i = 1, . . . , p. t is not a probability distribution. The multiplying factor Sim(ϕk, φ) limits the variation of probability to those formulae whose ontological context is not too far away from the observation. The posterior P^{t+1}(φ′|φ) is obtained with Equation 3 with r(μ) defined to be the normalisation of t.
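A sketch of this neighbourhood update; the similarity function is stubbed with a lookup table, which is purely illustrative.

```python
def sim_update(P, phi_list, phi, phi_k, sim):
    """Spread the effect of observing phi_k (after phi was stated) over
    semantically nearby formulae, then renormalise (the sim method)."""
    t = [P[i] + (1 - abs(sim(phi_k, phi) - sim(f, phi))) * sim(phi_k, phi)
         for i, f in enumerate(phi_list)]
    total = sum(t)
    return [ti / total for ti in t]   # r(mu): normalisation of t

# Illustrative similarity stub over three formulae and one statement.
S = {("a", "x"): 0.9, ("b", "x"): 0.6, ("c", "x"): 0.1}
sim = lambda f, g: S.get((f, g), 1.0 if f == g else 0.0)
print(sim_update([0.4, 0.4, 0.2], ["a", "b", "c"], "x", "a", sim))
```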
The valuation method: For a given φk,

w^{exp}(φk) = Σ_{j=1}^{m} P^t(φj|φk) · w(φj)

is α's expectation of the value of what will be observed given that β has stated that φk will be observed, for some measure w. Now suppose that, as before, α observes ϕk after agent β has stated ϕ. α revises the prior estimate of the expected valuation w^{exp}(φk) in the light of the observation ϕk to:

(w^{rev}(φk) | (ϕk|ϕ)) = g(w^{exp}(φk), Sim(φk, ϕ), w(φk), w(ϕ), wi(ϕk))

for some function g - the idea being, for example, that if the execution, ϕk, of the commitment, ϕ, to supply cheese was devalued then α's expectation of the value of a commitment, φ, to supply wine should decrease. We estimate the posterior by applying the principle of minimum relative entropy as for Equation 3, where the distribution p(μ) = p(φ′|φ) satisfies the constraint:

Σ_{j=1}^{p} p(ϕ′,ϕ)_j · wi(φj) = g(w^{exp}(φk), Sim(φk, ϕ), w(φk), w(ϕ), wi(ϕk))
5. SUMMARY MEASURES

A dialogue, Ψ^t, between agents α and β is a sequence of inter-related utterances in context. A relationship, Ψ^{*t}, is a sequence of dialogues. We first measure the confidence that an agent has for another by observing, for each utterance, the difference between what is said (the utterance) and what
subsequently occurs (the observation). Second we evaluate
each dialogue as it progresses in terms of the LOGIC
framework - this evaluation employs the confidence measures.
Finally we define the intimacy of a relationship as an
aggregation of the value of its component dialogues.
5.1 Confidence
Confidence measures generalise what are commonly called
trust, reliability and reputation measures into a single
computational framework that spans the LOGIC categories. In
Section 5.2 confidence measures are applied to valuing
fulfilment of promises in the Legitimacy category - we formerly
called this honour [14], to the execution of commitments
- we formerly called this trust [13], and to valuing
dialogues in the Goals category - we formerly called this
reliability [14].
Ideal observations. Consider a distribution of observations that represent α's ideal in the sense that it is the best that α could reasonably expect to observe. This distribution will be a function of α's context with β denoted by e, and is P^t_I(ϕ′|ϕ, e). Here we measure the relative entropy between this ideal distribution, P^t_I(ϕ′|ϕ, e), and the distribution of expected observations, P^t(ϕ′|ϕ). That is:

C(α, β, ϕ) = 1 − Σ_{ϕ′} P^t_I(ϕ′|ϕ, e) log (P^t_I(ϕ′|ϕ, e) / P^t(ϕ′|ϕ))   (4)

where the 1 is an arbitrarily chosen constant being the maximum value that this measure may have. This equation measures confidence for a single statement ϕ. It makes sense to aggregate these values over a class of statements, say over those ϕ that are in the ontological context o, that is ϕ ≤ o:

C(α, β, o) = 1 − (Σ_{ϕ: ϕ≤o} P^t_β(ϕ) [1 − C(α, β, ϕ)]) / (Σ_{ϕ: ϕ≤o} P^t_β(ϕ))

where P^t_β(ϕ) is a probability distribution over the space of statements that the next statement β will make to α is ϕ. Similarly, for an overall estimate of β's confidence in α:

C(α, β) = 1 − Σ_ϕ P^t_β(ϕ) [1 − C(α, β, ϕ)]
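A sketch of the ideal-observation confidence of Eqn. 4 and its aggregation over an ontological context; the distributions used are illustrative stand-ins.

```python
import math

def confidence(P_ideal, P_expected):
    """Eqn. 4: 1 minus the relative entropy between ideal and expected."""
    return 1 - sum(pi * math.log(pi / pe)
                   for pi, pe in zip(P_ideal, P_expected) if pi > 0)

def confidence_in_context(stmts, P_beta, C_phi):
    """Aggregate C(alpha, beta, phi) over statements phi <= o, weighted by
    the probability P_beta(phi) that beta's next statement is phi."""
    num = sum(P_beta[s] * (1 - C_phi[s]) for s in stmts)
    den = sum(P_beta[s] for s in stmts)
    return 1 - num / den

# Expectations that match the ideal exactly give maximal confidence (1.0).
print(confidence([0.8, 0.2], [0.8, 0.2]))
# Expectations worse than the ideal reduce confidence below 1.
print(confidence([0.8, 0.2], [0.5, 0.5]))
```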
Preferred observations. The previous measure requires that an ideal distribution, P^t_I(ϕ′|ϕ, e), has to be specified for each ϕ. Here we measure the extent to which the observation ϕ′ is preferable to the original statement ϕ. Given a predicate Prefer(c1, c2, e) meaning that α prefers c1 to c2 in environment e, then if ϕ ≤ o:

C(α, β, ϕ) = Σ_{ϕ′} P^t(Prefer(ϕ′, ϕ, o)) P^t(ϕ′|ϕ)

and:

C(α, β, o) = (Σ_{ϕ: ϕ≤o} P^t_β(ϕ) C(α, β, ϕ)) / (Σ_{ϕ: ϕ≤o} P^t_β(ϕ))
Certainty in observation. Here we measure the consistency in expected acceptable observations, or the lack of expected uncertainty in those possible observations that are better than the original statement. If ϕ ≤ o let:

Φ⁺(ϕ, o, κ) = {ϕ′ | P^t(Prefer(ϕ′, ϕ, o)) > κ}

for some constant κ, and:

C(α, β, ϕ) = 1 + (1 / B*) · Σ_{ϕ′ ∈ Φ⁺(ϕ,o,κ)} P^t_+(ϕ′|ϕ) log P^t_+(ϕ′|ϕ)

where P^t_+(ϕ′|ϕ) is the normalisation of P^t(ϕ′|ϕ) for ϕ′ ∈ Φ⁺(ϕ, o, κ), and

B* = 1 if |Φ⁺(ϕ, o, κ)| = 1, and B* = log |Φ⁺(ϕ, o, κ)| otherwise.

As above we aggregate this measure for observations in a particular context o, and measure confidence as before.
Computational Note. The various measures given above involve extensive calculations. For example, Eqn. 4 contains Σ_{ϕ′} that sums over all possible observations ϕ′. We obtain a more computationally friendly measure by appealing to the structure of the ontology described in Section 3.2, and the right-hand side of Eqn. 4 may be approximated to:

1 − Σ_{ϕ′: Sim(ϕ′,ϕ) ≥ η} P^t_{η,I}(ϕ′|ϕ, e) log (P^t_{η,I}(ϕ′|ϕ, e) / P^t_η(ϕ′|ϕ))

where P^t_{η,I}(ϕ′|ϕ, e) is the normalisation of P^t_I(ϕ′|ϕ, e) for Sim(ϕ′, ϕ) ≥ η, and similarly for P^t_η(ϕ′|ϕ). The extent of this calculation is controlled by the parameter η. An even tighter restriction may be obtained with: Sim(ϕ′, ϕ) ≥ η and ϕ′ ≤ ψ for some ψ.
5.2 Valuing negotiation dialogues

Suppose that a negotiation commences at time s, and by time t a string of utterances, Φ^t = μ1, . . . , μn, has been exchanged between agent α and agent β. This negotiation dialogue is evaluated by α in the context of α's world model at time s, M^s, and the environment e that includes utterances that may have been received from other agents in the system including the information sources {θi}. Let Ψ^t = (Φ^t, M^s, e), then α estimates the value of this dialogue to itself in the context of M^s and e as a 2 × 5 array Vα(Ψ^t) where:

Vx(Ψ^t) = ( I^L_x(Ψ^t)  I^O_x(Ψ^t)  I^G_x(Ψ^t)  I^I_x(Ψ^t)  I^C_x(Ψ^t) )
          ( U^L_x(Ψ^t)  U^O_x(Ψ^t)  U^G_x(Ψ^t)  U^I_x(Ψ^t)  U^C_x(Ψ^t) )

where the I(·) and U(·) functions are information-based and utility-based measures respectively as we now describe. α estimates the value of this dialogue to β as Vβ(Ψ^t) by assuming that β's reasoning apparatus mirrors its own.
In general terms, the information-based valuations measure the reduction in uncertainty, or information gain, that the dialogue gives to each agent; they are expressed in terms of decrease in entropy, which can always be calculated. The utility-based valuations measure utility gain and are expressed in terms of some suitable utility evaluation function U(·) that can be difficult to define. This is one reason why the utilitarian approach has no natural extension to the management of argumentation that is achieved here by our information-based approach. For example, if α receives the utterance Today is Tuesday then this may be translated into a constraint on a single distribution, and the resulting decrease in entropy is the information gain. Attaching a utilitarian measure to this utterance may not be so simple.
We use the term 2 × 5 array loosely to describe Vα in that the elements of the array are lists of measures that will be determined by the agent's requirements. Table 2 shows a sample measure for each of the ten categories; in it the dialogue commences at time s and terminates at time t. In that Table, U(·) is a suitable utility evaluation function, needs(β, χ) means agent β has the need χ, cho(β, χ, γ) means agent β satisfies need χ by choosing to negotiate
with agent γ, N is the set of needs chosen from the ontology at some suitable level of abstraction, T^t is the set of offers on the table at time t, com(β, γ, b) means agent β has an outstanding commitment with agent γ to execute the commitment b where b is defined in the ontology at some suitable level of abstraction, B is the number of such commitments, and there are n + 1 agents in the system.
5.3 Intimacy and Balance

The balance in a negotiation dialogue, Ψ^t, is defined as: Bαβ(Ψ^t) = Vα(Ψ^t) ⊖ Vβ(Ψ^t) for an element-by-element difference operator ⊖ that respects the structure of V(Ψ^t). The intimacy between agents α and β, I^{*t}_{αβ}, is the pattern of the two 2 × 5 arrays V^{*t}_α and V^{*t}_β that are computed by an update function as each negotiation round terminates, I^{*t}_{αβ} = (V^{*t}_α, V^{*t}_β). If Ψ^t terminates at time t:

V^{*t+1}_x = ν × Vx(Ψ^t) + (1 − ν) × V^{*t}_x   (5)

where ν is the learning rate, and x = α, β. Additionally, V^{*t}_x continually decays by: V^{*t+1}_x = τ × V^{*t}_x + (1 − τ) × Dx, where x = α, β; τ is the decay rate, and Dx is a 2 × 5 array being the decay limit distribution for the value to agent x of the intimacy of the relationship in the absence of any interaction. Dx is the reputation of agent x. The relationship balance between agents α and β is: B^{*t}_{αβ} = V^{*t}_α ⊖ V^{*t}_β. In particular, the intimacy determines values for the parameters g and h in Equation 1. As a simple example, if both I^O_α(Ψ^{*t}) and I^O_β(Ψ^{*t}) increase then g decreases, and as the remaining eight information-based LOGIC components increase, h increases.
The notion of balance may be applied to pairs of
utterances by treating them as degenerate dialogues. In simple
multi-issue bargaining the equitable information revelation
strategy generalises the tit-for-tat strategy in single-issue
bargaining, and extends to a tit-for-tat argumentation
strategy by applying the same principle across the LOGIC
framework.
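A sketch of this bookkeeping, treating the 2 × 5 arrays as nested lists: Equation 5 blends the value of a just-terminated dialogue into the intimacy, which otherwise decays toward the reputation. The rates and values used below are illustrative.

```python
def update_intimacy(V_star, V_dialogue, nu=0.3):
    """Eqn. 5: blend the value of the just-terminated dialogue into the
    long-term intimacy array V*."""
    return [[nu * v + (1 - nu) * vs for v, vs in zip(rv, rs)]
            for rv, rs in zip(V_dialogue, V_star)]

def decay_intimacy(V_star, D, tau=0.95):
    """Between interactions V* decays toward D, the agent's reputation."""
    return [[tau * vs + (1 - tau) * d for vs, d in zip(rs, rd)]
            for rs, rd in zip(V_star, D)]

V_star = [[0.2] * 5, [0.1] * 5]   # current intimacy (I row, U row)
V_psi = [[0.6] * 5, [0.4] * 5]    # value of the dialogue just ended
D = [[0.0] * 5, [0.0] * 5]        # reputation / decay limit
V_star = update_intimacy(V_star, V_psi)
V_star = decay_intimacy(V_star, D)
print(V_star)
```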
6. STRATEGIES AND TACTICS

Each negotiation has to achieve two goals. First it may be intended to achieve some contractual outcome. Second it will aim to contribute to the growth, or decline, of the relationship intimacy.

We now describe in greater detail the contents of the Negotiation box in Figure 1. The negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships. The required variation of behaviour is normally described as varying the negotiation stance that informally varies from friendly guy to tough guy. The stance is shown in Figure 1; it injects bounded random noise into the process, where the bound tightens as intimacy increases. The stance, S^t_{αβ}, is a 2 × 5 matrix of randomly chosen multipliers, each ≈ 1, that perturbs α's actions. The value in the (x, y) position in the matrix, where x = I, U and y = L, O, G, I, C, is chosen at random from [1 / l(I^{*t}_{αβ}, x, y), l(I^{*t}_{αβ}, x, y)] where l(I^{*t}_{αβ}, x, y) is the bound, and I^{*t}_{αβ} is the intimacy.
The negotiation strategy is concerned with maintaining a working set of Options. If the set of options is empty then α will quit the negotiation. α perturbs the acceptance machinery (see Section 4) by deriving s from the S^t_{αβ} matrix such as the value at the (I, O) position. In line with the comment in Footnote 7, in the early stages of the negotiation α may decide to inflate her opening Options. This is achieved by increasing the value of s in Equation 1. The following strategy uses the machinery described in Section 4. Fix h, g, s and c, set the Options to the empty set, let D^t_s = {δ | P^t(acc(α, β, χ, δ)) > c}, then:

• repeat the following as many times as desired: add δ = arg max_x {P^t(acc(β, α, x)) | x ∈ D^t_s} to Options, remove {y ∈ D^t_s | Sim(y, δ) < k} for some k from D^t_s

By using P^t(acc(β, α, δ)) this strategy reacts to β's history of Propose and Reject utterances.
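A sketch of this Options-building loop; the candidate deals, acceptance probabilities and similarity function are illustrative stubs, and we also drop δ itself from the pool so the loop does not re-select it, a detail left implicit in the text.

```python
def build_options(candidates, p_acc_self, p_acc_other, sim, c=0.5, k=0.4,
                  rounds=2):
    """Greedy Options construction (illustrative): start from the deals
    alpha would accept (D^t_s), then repeatedly add the deal beta is most
    likely to accept and prune the pool as in the strategy above."""
    pool = [d for d in candidates if p_acc_self(d) > c]   # D^t_s
    options = []
    for _ in range(rounds):
        if not pool:
            break
        delta = max(pool, key=p_acc_other)                # arg max over pool
        options.append(delta)
        # Prune deals with Sim(y, delta) < k, and delta itself.
        pool = [y for y in pool if sim(y, delta) >= k and y != delta]
    return options

# Illustrative stubs: deals are labels, probabilities and similarity fixed.
deals = ["d1", "d2", "d3"]
p_self = {"d1": 0.9, "d2": 0.7, "d3": 0.4}.get
p_other = {"d1": 0.3, "d2": 0.8, "d3": 0.6}.get
sim = lambda a, b: 1.0 if a == b else 0.5
print(build_options(deals, p_self, p_other, sim))
```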
Negotiation tactics are concerned with selecting some Options and wrapping them in argumentation. Prior interactions with agent β will have produced an intimacy pattern expressed in the form of (V^{*t}_α, V^{*t}_β). Suppose that the relationship target is (T^{*t}_α, T^{*t}_β). Following from Equation 5, α will want to achieve a negotiation target, Nβ(Ψ^t), such that: ν · Nβ(Ψ^t) + (1 − ν) · V^{*t}_β is a bit on the T^{*t}_β side of V^{*t}_β:

Nβ(Ψ^t) = ((ν − κ) / ν) V^{*t}_β ⊕ (κ / ν) T^{*t}_β   (6)

for small κ ∈ [0, ν] that represents α's desired rate of development for her relationship with β. Nβ(Ψ^t) is a 2 × 5 matrix containing variations in the LOGIC dimensions that α would like to reveal to β during Ψ^t (e.g. I'll pass a bit more information on options than usual, I'll be stronger in concessions on options, etc.). It is reasonable to expect β to progress towards her target at the same rate and Nα(Ψ^t) is calculated by replacing β by α in Equation 6. Nα(Ψ^t) is what α hopes to receive from β during Ψ^t. This gives a negotiation balance target of: Nα(Ψ^t) ⊖ Nβ(Ψ^t) that can be used as the foundation for reactive tactics by striving to maintain this balance across the LOGIC dimensions.
A cautious tactic could use the balance to bound the response μ′ to each utterance μ from β by the constraint: Vα(μ′) ⊖ Vβ(μ) ≈ S^t_{αβ} ⊗ (Nα(Ψ^t) ⊖ Nβ(Ψ^t)), where ⊗ is element-by-element matrix multiplication, and S^t_{αβ} is the stance. A less neurotic tactic could attempt to achieve the target negotiation balance over the anticipated complete dialogue. If a balance bound requires negative information revelation in one LOGIC category then α will contribute nothing to it, and will leave this to the natural decay to the reputation D as described above.
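A sketch of the target computation of Equation 6 and the cautious per-utterance bound, with ⊕, ⊖ and ⊗ taken element-by-element over nested lists, consistent with the text; the rates and arrays used are illustrative.

```python
def negotiation_target(V_star, T_star, nu=0.3, kappa=0.1):
    """Eqn. 6: N = ((nu - kappa)/nu) V* (+) (kappa/nu) T*, element-wise."""
    a, b = (nu - kappa) / nu, kappa / nu
    return [[a * v + b * t for v, t in zip(rv, rt)]
            for rv, rt in zip(V_star, T_star)]

def cautious_bound(N_alpha, N_beta, stance):
    """Per-utterance balance bound: stance (x) (N_alpha (-) N_beta)."""
    return [[s * (na - nb) for s, na, nb in zip(rs, ra, rb)]
            for rs, ra, rb in zip(stance, N_alpha, N_beta)]

V_star_b = [[0.2] * 5, [0.1] * 5]   # current intimacy (beta's side)
T_star_b = [[0.8] * 5, [0.6] * 5]   # relationship target
N_b = negotiation_target(V_star_b, T_star_b)
print(N_b)  # one step from V* toward T*
```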
7. DISCUSSION

In this paper we have introduced a novel approach to negotiation that uses information- and game-theoretical measures grounded on business and psychological studies. It introduces the concepts of intimacy and balance as key elements in understanding what a negotiation strategy and tactic are. Negotiation is understood as a dialogue that affects five basic dimensions: Legitimacy, Options, Goals, Independence, and Commitment. Each dialogical move produces a change in a 2 × 5 matrix that evaluates the dialogue along five information-based measures and five utility-based measures. The current balance and intimacy levels and the desired, or target, levels are used by the tactics to determine what to say next. We are currently exploring the use of this model as an extension of a currently widespread eProcurement software commercialised by iSOCO, a spin-off company of the laboratory of one of the authors.
I^L_α(Ψ^t) = Σ_{ϕ∈Ψ^t} (C^t(α, β, ϕ) − C^s(α, β, ϕ))
U^L_α(Ψ^t) = Σ_{ϕ∈Ψ^t} Σ_{ϕ′} P^t_β(ϕ′|ϕ) × Uα(ϕ′)

I^O_α(Ψ^t) = (Σ_{δ∈T^t} H^s(acc(β, α, δ)) − Σ_{δ∈T^t} H^t(acc(β, α, δ))) / |T^t|
U^O_α(Ψ^t) = Σ_{δ∈T^t} P^t(acc(β, α, δ)) × Σ_{δ′} P^t(δ′|δ) Uα(δ′)

I^G_α(Ψ^t) = Σ_{χ∈N} (H^s(needs(β, χ)) − H^t(needs(β, χ))) / |N|
U^G_α(Ψ^t) = Σ_{χ∈N} P^t(needs(β, χ)) × E^t(Uα(needs(β, χ)))

I^I_α(Ψ^t) = Σ_{i=1}^{n} Σ_{χ∈N} (H^s(cho(β, χ, βi)) − H^t(cho(β, χ, βi))) / (n × |N|)
U^I_α(Ψ^t) = Σ_{i=1}^{n} Σ_{χ∈N} (U^t(cho(β, χ, βi)) − U^s(cho(β, χ, βi)))

I^C_α(Ψ^t) = Σ_{i=1}^{n} Σ_{b∈B} (H^s(com(β, βi, b)) − H^t(com(β, βi, b))) / (n × |B|)
U^C_α(Ψ^t) = Σ_{i=1}^{n} Σ_{b∈B} (U^t(com(β, βi, b)) − U^s(com(β, βi, b)))

Table 2: Sample measures for each category in Vα(Ψ^t). (Similarly for Vβ(Ψ^t).)
Acknowledgements Carles Sierra is partially supported
by the OpenKnowledge European STREP project and by
the Spanish IEA Project.
8. REFERENCES
[1] Adams, J. S. Inequity in social exchange. In Advances
in experimental social psychology, L. Berkowitz, Ed.,
vol. 2. New York: Academic Press, 1965.
[2] Arcos, J. L., Esteva, M., Noriega, P.,
Rodríguez, J. A., and Sierra, C. Environment
engineering for multiagent systems. Journal on
Engineering Applications of Artificial Intelligence 18
(2005).
[3] Bazerman, M. H., Loewenstein, G. F., and
White, S. B. Reversal of preference in allocation
decisions: judging an alternative versus choosing
among alternatives. Administrative Science Quarterly,
37 (1992), 220-240.
[4] Brandenburger, A., and Nalebuff, B.
Co-Opetition: A Revolution Mindset That Combines
Competition and Cooperation. Doubleday, New York,
1996.
[5] Cheeseman, P., and Stutz, J. Bayesian Inference
and Maximum Entropy Methods in Science and
Engineering. American Institute of Physics, Melville,
NY, USA, 2004, ch. On The Relationship between
Bayesian and Maximum Entropy Inference, pp.
445-461.
[6] Debenham, J. Bargaining with information. In
Proceedings Third International Conference on
Autonomous Agents and Multi Agent Systems
AAMAS-2004 (July 2004), N. Jennings, C. Sierra,
L. Sonenberg, and M. Tambe, Eds., ACM Press, New
York, pp. 664 - 671.
[7] Fisher, R., Ury, W., and Patton, B. Getting to
Yes: Negotiating agreements without giving in.
Penguin Books, 1995.
[8] Kalfoglou, Y., and Schorlemmer, M. IF-Map:
An ontology-mapping method based on
information-flow theory. In Journal on Data
Semantics I, S. Spaccapietra, S. March, and
K. Aberer, Eds., vol. 2800 of Lecture Notes in
Computer Science. Springer-Verlag: Heidelberg,
Germany, 2003, pp. 98-127.
[9] Lewicki, R. J., Saunders, D. M., and Minton,
J. W. Essentials of Negotiation. McGraw Hill, 2001.
[10] Li, Y., Bandar, Z. A., and McLean, D. An
approach for measuring semantic similarity between
words using multiple information sources. IEEE
Transactions on Knowledge and Data Engineering 15,
4 (July / August 2003), 871 - 882.
[11] MacKay, D. Information Theory, Inference and
Learning Algorithms. Cambridge University Press,
2003.
[12] Paris, J. Common sense and maximum entropy.
Synthese 117, 1 (1999), 75 - 93.
[13] Sierra, C., and Debenham, J. An
information-based model for trust. In Proceedings
Fourth International Conference on Autonomous
Agents and Multi Agent Systems AAMAS-2005
(Utrecht, The Netherlands, July 2005), F. Dignum,
V. Dignum, S. Koenig, S. Kraus, M. Singh, and
M. Wooldridge, Eds., ACM Press, New York, pp. 497
- 504.
[14] Sierra, C., and Debenham, J. Trust and honour in
information-based agency. In Proceedings Fifth
International Conference on Autonomous Agents and
Multi Agent Systems AAMAS-2006 (Hakodate, Japan,
May 2006), P. Stone and G. Weiss, Eds., ACM Press,
New York, pp. 1225 - 1232.
[15] Sierra, C., and Debenham, J. Information-based
agency. In Proceedings of Twentieth International
Joint Conference on Artificial Intelligence IJCAI-07
(Hyderabad, India, January 2007), pp. 1513-1518.
[16] Sondak, H., Neale, M. A., and Pinkley, R. The
negotiated allocations of benefits and burdens: The
impact of outcome valence, contribution, and
relationship. Organizational Behaviour and Human
Decision Processes, 3 (December 1995), 249-260.
[17] Valley, K. L., Neale, M. A., and Mannix, E. A.
Friends, lovers, colleagues, strangers: The effects of
relationships on the process and outcome of
negotiations. In Research in Negotiation in
Organizations, R. Bies, R. Lewicki, and B. Sheppard,
Eds., vol. 5. JAI Press, 1995, pp. 65-94.
train_J-33 | Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions | We investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. We analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory. Network flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms. Experimental trials help distinguish tractable problem classes for proposed solution techniques. | 1. BACKGROUND
A multiattribute auction is a market-based mechanism where
goods are described by vectors of features, or attributes [3, 5, 8,
19]. Such mechanisms provide traders with the ability to negotiate
over a multidimensional space of potential deals, delaying
commitment to specific configurations until the most promising candidates
are identified. For example, in a multiattribute auction for
computers, the good may be defined by attributes such as processor speed,
memory, and hard disk capacity. Agents have varying preferences
(or costs) associated with the possible configurations. For example,
a buyer may be willing to purchase a computer with a 2 GHz
processor, 500 MB of memory, and a 50 GB hard disk for a price no
greater than $500, or the same computer with 1GB of memory for
a price no greater than $600.
Existing research in multiattribute auctions has focused
primarily on one-sided mechanisms, which automate the process whereby
a single agent negotiates with multiple potential trading partners
[8, 7, 19, 5, 23, 22]. Models of procurement typically assume
the buyer has a value function, v, ranging over the possible
configurations, X, and that each seller i can similarly be associated
with a cost function ci over this domain. The role of the auction is
to elicit these functions (possibly approximate or partial versions),
and identify the surplus-maximizing deal. In this case, such an
outcome would be arg maxi,x v(x) − ci(x). This problem can be
translated into the more familiar auction for a single good without
attributes by computing a score for each attribute vector based on
the seller valuation function, and have buyers bid scores. Analogs
of the classic first- and second-price auctions correspond to
firstand second-score auctions [8, 7].
In the absence of a published buyer scoring function, agents on
both sides may provide partial specifications of the deals they are
willing to engage. Research on such auctions has, for example,
produced iterative mechanisms for eliciting cost functions
incrementally [19]. Other efforts focus on the optimization problem facing
the bid taker, for example considering side constraints on the
combination of trades comprising an overall deal [4]. Side constraints
have also been analyzed in the context of combinatorial auctions
[6, 20].
Our emphasis is on two-sided multiattribute auctions, where
multiple buyers and sellers submit bids, and the objective is to construct
a set of deals maximizing overall surplus. Previous research on
such auctions includes works by Fink et al. [12] and Gong [14],
both of which consider a matching problem for continuous double
auctions (CDAs), where deals are struck whenever a pair of
compatible bids is identified.
In a call market, in contrast, bids accumulate until designated
times (e.g., on a periodic or scheduled basis) at which the auction
clears by determining a comprehensive match over the entire set
of bids. Because the optimization is performed over an aggregated
scope, call markets often enjoy liquidity and efficiency advantages
over CDAs [10].1
Clearing a multiattribute CDA is much like clearing a one-sided
multiattribute auction. Because nothing happens between bids, the
problem is to match a given new bid (say, an offer to buy) with the
existing bids on the other (sell) side. Multiattribute call markets are
potentially much more complex. Constructing an optimal overall
matching may require consideration of many different
combina1
In the interim between clears, call markets may also disseminate
price quotes providing summary information about the state of the
auction [24]. Such price quotes are often computed based on
hypothetical clears, and so the clearing algorithm may be invoked more
frequently than actual market clearing operations.
110
tions of trades, among the various potential trading-partner
pairings. The problem can be complicated by restrictions on overall
assignments, as expressed in side constraints [16].
The goal of the present work is to develop a general framework
for multiattribute call markets, to enable investigation of design
issues and possibilities. In particular, we use the framework to
explore tradeoffs between expressive power of agent bids and
computational properties of auction clearing. We conduct our
exploration independent of any consideration of strategic issues bearing
on mechanism design. As with analogous studies of combinatorial
auctions [18], we intend that tradeoffs quantified in this work can
be combined with incentive factors within a comprehensive overall
approach to multiattribute auction design.
We provide the formal semantics of multiattribute offers in our
framework in the next section. We abstract, where appropriate,
from the specific language used to express offers, characterizing
expressiveness semantically in terms of what deals may be offered.
This enables us to identify some general conditions under which
the problem of multilateral matching can be decomposed into
bilateral matching problems. We then develop a family of network
flow problems that capture corresponding classes of multiattribute
call market optimizations. Experimental trials provide preliminary
confirmation that the network formulations provide useful structure
for implementing clearing algorithms.
2. MULTIATTRIBUTE OFFERS
2.1 Basic Definitions
The distinguishing feature of a multiattribute auction is that the
goods are defined by vectors of attributes, x = (x1, . . . , xm),
xj ∈ Xj . A configuration is a particular attribute vector, x ∈
X =
Qm
j=1 Xj . The outcome of the auction is a set of bilateral
trades. Trade t takes the form t = (x, q, b, s, π), signifying that
agent b buys q > 0 units of configuration x from seller s, for
payment π > 0. For convenience, we use the notation xt to denote the
configuration associated with trade t (and similarly for other
elements of t). For a set of trades T, we denote by Ti that subset of T
involving agent i (i.e., b = i or s = i). Let T denote the set of all
possible trades.
A bid expresses an agent"s willingness to participate in trades.
We specify the semantics of a bid in terms of offer sets. Let OT
i ⊆
Ti denote agent i"s trade offer set. Intuitively, this represents the
trades in which i is willing to participate. However, since the
outcome of the auction is a set of trades, several of which may involve
agent i, we must in general consider willingness to engage in trade
combinations. Accordingly, we introduce the combination offer set
of agent i, OC
i ⊆ 2Ti .
2.2 Specifying Offer Sets
A fully expressive bid language would allow specification of
arbitrary combination offer sets. We instead consider a more limited
class which, while restrictive, still captures most forms of
multiattribute bidding proposed in the literature. Our bids directly specify
part of the agent"s trade offer set, and include further directives
controlling how this can be extended to the full trade and combination
offer sets.
For example, one way to specify a trade (buy) offer set would
be to describe a set of configurations and quantities, along with the
maximal payment one would exchange for each (x, q) specified.
This description could be by enumeration, or any available means
of defining such a mapping.
An explicit set of trades in the offer set generally entails inclusion
of many more implicit trades. We assume payment monotonicity,
which means that agents always prefer more money. That is, for
π > π > 0,
(x, q, i, s, π) ∈ OT
i ⇒ (x, q, i, s, π ) ∈ OT
i ,
(x, q, b, i, π ) ∈ OT
i ⇒ (x, q, b, i, π) ∈ OT
i .
We also assume free disposal, which dictates that for all i, q >
q > 0,
(x, q , i, s, π) ∈ OT
i ⇒ (x, q, i, s, π) ∈ OT
i ,
(x, q, b, i, π) ∈ OT
i ⇒ (x, q , b, i, π) ∈ OT
i .
Note that the conditions for agents in the role of buyers and sellers
are analogous. Henceforth, for expository simplicity, we present
all definitions with respect to buyers only, leaving the definition
for sellers as understood. Allowing agents" bids to comprise offers
from both buyer and seller perspectives is also straightforward.
An assertion that offers are divisible entails further implicit
members in the trade offer set.
DEFINITION 1 (DIVISIBLE OFFER). Agent i"s offer is
divisible down to q iff
∀q < q < q. (x, q, i, s, π) ∈ OT
i ⇒ (x, q , i, s, q
q
π) ∈ OT
i .
We employ the shorthand divisible to mean divisible down to 0.
The definition above specifies arbitrary divisibility. It would
likewise be possible to define divisibility with respect to integers, or to
any given finite granularity. Note that when offers are divisible, it
suffices to specify one offer corresponding to the maximal quantity
one is willing to trade for any given configuration, trading partner,
and per-unit payment (called the price).
At the extreme of indivisibility are all-or-none offers.
DEFINITION 2 (AON OFFER). Agent i"s offer is all-or-none
(AON) iff
(x, q, i, s, π) ∈ OT
i ∧ (x, q , i, s, π ) ∈ OT
i ⇒ [q = q ∨ π = π ].
In many cases, the agent will be indifferent with respect to
different trading partners. In that event, it may omit the partner element
from trades directly specified in its offer set, and simply assert that
its offer is anonymous.
DEFINITION 3 (ANONYMITY). Agent i"s offer is anonymous
iff ∀s, s , b, b . (x, q, i, s, π) ∈ OT
i ⇔ (x, q, i, s , π) ∈ OT
i ∧
(x, q, b, i, π) ∈ OT
i ⇔ (x, q, b , i, π) ∈ OT
i .
Because omitting trading partner qualifications simplifies the
exposition, we generally assume in the following that all offers are
anonymous unless explicitly specified otherwise. Extending to the
non-anonymous case is conceptually straightforward. We employ
the wild-card symbol ∗ in place of an agent identifier to indicate
that any agent is acceptable.
To specify a trade offer set, a bidder directly specifies a set of
willing trades, along with any regularity conditions (e.g.,
divisibility, anonymity) that implicitly extend the set. The full trade offer
set is then defined by the closure of this direct set with respect to
payment monotonicity, free disposal, and any applicable
divisibility assumptions.
We next consider the specification of combination offer sets.
Without loss of generality, we restrict each trade set T ∈ OC
i to
include at most one trade for any combination of configuration and
trading partner (multiple such trades are equivalent to one net trade
aggregating the quantities and payments). The key question is to
what extent the agent is willing to aggregate deals across
configurations or trading partners. One possibility is disallowing any
aggregation.
111
DEFINITION 4 (NO AGGREGATION). The no-aggregation
combinations are given by ONA
i = {∅} ∪ {{t} | t ∈ OT
i }. Agent i"s
offer exhibits non-aggregation iff OC
i = ONA
i .
We require in general that OC
i ⊇ ONA
i .
A more flexible policy is to allow aggregation across trading
partners, keeping configuration constant.
DEFINITION 5 (PARTNER AGGREGATION). Suppose a
particular trade is offered in the same context (set of additional trades,
T) with two different sellers, s and s . That is,
{(x, q, i, s, π)} ∪ T ∈ OC
i ∧ {(x, q, i, s , π)} ∪ T ∈ OC
i .
Agent i"s offer allows seller aggregation iff in all such cases,
{(x, q , i, s, π ), (x, q − q , i, s , π − π )} ∪ T ∈ OC
i .
In other words, we may create new trade offer combinations by
splitting the common trade (quantity and payment, not necessarily
proportionately) between the two sellers.
In some cases, it might be reasonable to form combinations by
aggregating different configurations.
DEFINITION 6 (CONFIGURATION AGGREGATION). Suppose
agent i offers, in the same context, the same quantity of two (not
necessarily different) configurations, x and x . That is,
{(x, q, i, ∗, π)} ∪ T ∈ OC
i ∧ {(x , q, i, ∗, π )} ∪ T ∈ OC
i .
Agent i"s offer allows configuration aggregation iff in all such cases
(and analogously when it is a seller),
{(x, q , i, ∗,
q
q
π), (x , q − q , i, ∗,
q − q
q
π )} ∪ T ∈ OC
i .
Note that combination offer sets can accommodate offerings of
configuration bundles. However, classes of bundles formed by
partner or configuration aggregation are highly regular, covering only a
specific type of bundle formed by splitting a desired quantity across
configurations. This is quite restrictive compared to the general
combinatorial case.
2.3 Willingness to Pay
An agent"s offer trade set implicitly defines the agent"s
willingness to pay for any given configuration and quantity. We assume
anonymity to avoid conditioning our definitions on trading partner.
DEFINITION 7 (WILLINGNESS TO PAY). Agent i"s willingness
to pay for quantity q of configuration x is given by
ˆuB
i (x, q) = max π s.t. (x, q, i, ∗, π) ∈ OT
i .
We use the symbol ˆu to recognize that willingness to pay can be
viewed as a proxy for the agent"s utility function, measured in
monetary units. The superscript B distinguishes the buyer"s
willingnessto-pay function from, a seller"s willingness to accept, ˆuS
i (x, q),
defined as the minimum payment seller i will accept for q units of
configuration x. We omit the superscript where the distinction is
inessential or clear from context.
DEFINITION 8 (TRADE QUANTITY BOUNDS). Agent i"s
minimum trade quantity for configuration x is given by
qi(x) = min q s.t. ∃π. (x, q, i, ∗, π) ∈ OT
i .
The agent"s maximum trade quantity for x is
¯qi(x) = max q s.t.
∃π. (x, q, i, ∗, π) ∈ OT
i ∧ ¬∃q < q. (x, q , i, ∗, π) ∈ OT
i .
When the agent has no offers involving x, we take qi(x) = ¯qi(x) =
0.
It is useful to define a special case where all configurations are
offered in the same quantity range.
DEFINITION 9 (CONFIGURATION PARITY). Agent i"s offers
exhibit configuration parity iff
qi(x) > 0 ∧ qi(x ) > 0 ⇒ qi(x) = qi(x ) ∧ ¯qi(x) = ¯qi(x ).
Under configuration parity we drop the arguments from trade
quantity bounds, yielding the constants ¯q and q which apply to all offers.
DEFINITION 10 (LINEAR PRICING). Agent i"s offers exhibit
linear pricing iff for all qi(x) ≤ q ≤ ¯qi(x),
ˆui(x, q) =
q
¯qi(x)
ˆui(x, ¯qi(x)).
Note that linear pricing assumes divisibility down to qi(x). Given
linear pricing, we can define the unit willingness to pay, ˆui(x) =
ˆui(x, ¯qi(x))/¯qi(x), and take ˆui(x, q) = qˆui(x) for all qi(x) ≤
q ≤ ¯qi(x).
In general, an agent"s willingness to pay may depend on a context
of other trades the agent is engaging in.
DEFINITION 11 (WILLINGNESS TO PAY IN CONTEXT). Agent
i"s willingness to pay for quantity q of configuration x in the
context of other trades T is given by
ˆuB
i (x, q; T) = max π s.t. {(x, q, i, s, π)} ∪ Ti ∈ OC
i .
LEMMA 1. If OC
i is either non aggregating, or exhibits linear
pricing, then
ˆuB
i (x, q; T) = ˆuB
i (x, q).
3. MULTIATTRIBUTE ALLOCATION
DEFINITION 12 (TRADE SURPLUS). The surplus of trade t =
(x, q, b, s, π) is given by
σ(t) = ˆuB
b (x, q) − ˆuS
s (x, q).
Note that the trade surplus does not depend on the payment, which
is simply a transfer from buyer to seller.
DEFINITION 13 (TRADE UNIT SURPLUS). The unit surplus
of trade t = (x, q, b, s, π) is given by σ1
(t) = σ(t)/q.
Under linear pricing, we can equivalently write σ1
(t) = ˆuB
b (x) −
ˆuS
s (x).
DEFINITION 14 (SURPLUS OF A TRADE IN CONTEXT). The
surplus of trade t = (x, q, b, s, π) in the context of other trades T,
σ(t; T), is given by
ˆuB
b (x, q; T) − ˆuS
s (x, q; T).
DEFINITION 15 (GMAP). The Global Multiattribute
Allocation Problem (GMAP) is to find the set of acceptable trades
maximizing total surplus,
max
T ∈2T
X
t∈T
σ(t; T \ {t}) s.t. ∀i. Ti ∈ OC
i .
DEFINITION 16 (MMP). The Multiattribute Matching
Problem (MMP) is to find a best trade for a given pair of traders,
MMP(b, s) = arg max
t∈OT
b
∩OT
s
σ(t).
If OT
b ∩ OT
s is empty, we say that MMP has no solution.
112
Proofs of all the following results are provided in an extended
version of this paper available from the authors.
THEOREM 2. Suppose all agents" offers exhibit no aggregation
(Definition 4). Then the solution to GMAP consists of a set of
trades, each of which is a solution to MMP for its specified pair
of traders.
THEOREM 3. Suppose that each agent"s offer set satisfies one
of the following (not necessarily the same) sets of conditions.
1. No aggregation and configuration parity (Definitions 4 and 9).
2. Divisibility, linear pricing, and configuration parity
(Definitions 1, 10, and 9), with combination offer set defined as the
minimal set consistent with configuration aggregation
(Definition 6).2
Then the solution to GMAP consists of a set of trades, each of
which employs a configuration that solves MMP for its specified
pair of traders.
Let MMPd
(b, s) denote a modified version of MMP, where OT
b
and OT
s are extended to assume divisibility (i.e., the offer sets are
taken to be their closures under Definition 1). Then we can extend
Theorem 3 to allow aggregating agents to maintain AON or
minquantity offers as follows.
THEOREM 4. Suppose offer sets as in Theorem 3, except that
agents i satisfying configuration aggregation need be divisible only
down to qi, rather than down to 0. Then the solution to GMAP
consists of a set of trades, each of which employs the same
configuration as a solution to MMPd
for its specified pair of traders.
THEOREM 5. Suppose agents b and s exhibit configuration
parity, divisibility, and linear pricing, and there exists configuration x
such that ˆub(x) − ˆus(x) > 0. Then t ∈ MMPd
(b, s) iff
xt = arg max
x
{ˆub(x) − ˆus(x)}
qt = min(¯qb, ¯qs).
(1)
The preceding results signify that under certain conditions, we
can divide the global optimization problem into two parts: first find
a bilateral trade that maximizes unit surplus for each pair of traders
(or total surplus in the non-aggregation case), and then use the
results to find a globally optimal set of trades. In the following two
sections we investigate each of these subproblems.
4. UTILITY REPRESENTATION AND MMP
We turn next to consider the problem of finding a best deal
between pairs of traders. The complexity of MMP depends pivotally
on the representation by bids of offer sets, an issue we have
postponed to this point.
Note that issues of utility representation and MMP apply to a
broad class of multiattribute mechanisms, beyond the multiattribute
call markets we emphasize. For example, the complexity results
contained in this section apply equally to the bidding problem faced
by sellers in reverse auctions, given a published buyer scoring
function.
The simplest representation of an offer set is a direct
enumeration of configurations and associated quantities and payments. This
approach treats the configurations as atomic entities, making no use
2
That is, for such an agent i, OC
i is the closure under configuration
aggregation of ONA
i .
of attribute structure. A common and inexpensive enhancement is
to enable a trader to express sets of configurations, by specifying
subsets of the domains of component attributes. Associating a
single quantity and payment with a set of configurations expresses
indifference among them; hence we refer to such a set as an
indifference range.3
Indifference ranges include the case of attributes
with a natural ordering, in which a bid specifies a minimum or
maximum acceptable attribute level. The use of indifference ranges
can be convenient for MMP. The compatibility of two indifference
ranges is simply found by testing set intersection for each attribute,
as demonstrated by the decision-tree algorithm of Fink et al. [12].
Alternatively, bidders may specify willingness-to-pay functions
ˆu in terms of compact functional forms. Enumeration based
representations, even when enhanced with indifference ranges, are
ultimately limited by the exponential size of attribute space.
Functional forms may avoid this explosion, but only if ˆu reflects
structure among the attributes. Moreover, even given a compact
specification of ˆu, we gain computational benefits only if we can
perform the matching without expanding the ˆu values of an
exponential number of configuration points.
4.1 Additive Forms
One particularly useful multiattribute representation is known as
the additive scoring function. Though this form is widely used in
practice and in the academic literature, it is important to stress the
assumptions behind it. The theory of multiattribute representation
is best developed in the context where ˆu is interpreted as a
utility function representing an underlying preference order [17]. We
present the premises of additive utility theory in this section, and
discuss some generalizations in the next.
DEFINITION 17. A set of attributes Y ⊂ X is preferentially
independent (PI) of its complement Z = X \ Y if the conditional
preference order over Y given a fixed level Z^0 of Z is the same
regardless of the choice of Z^0.
In other words, the preference order over the projection of X on
the attributes in Y is the same for any instantiation of the attributes
in Z.
DEFINITION 18. X = {x1, . . . , xm} is mutually preferentially
independent (MPI) if any subset of X is preferentially independent
of its complement.
THEOREM 6 ([9]). A preference order over a set of attributes
X has an additive utility function representation

    u(x_1, \ldots, x_m) = \sum_{i=1}^{m} u_i(x_i)

iff X is mutually preferentially independent.
A utility function over outcomes including money is quasi-linear
if the function can be represented as a function over non-monetary
attributes plus payments, π. Interpreting ˆu as a utility function over
non-monetary attributes is tantamount to assuming quasi-linearity.
Even when quasi-linearity is assumed, however, MPI over
non-monetary attributes is not sufficient for the quasi-linear utility
function to be additive. For this, we also need that each of the pairs
(π, X_i), for any attribute X_i, be PI of the rest of the attributes.
³These should not be mistaken for indifference curves, which
express dependency between the attributes. Indifference curves can
be expressed by the more elaborate utility representations discussed
below.
This (by MAUT) in turn implies that the set of attributes including
money is MPI and the utility function can be represented as

    u(x_1, \ldots, x_m, \pi) = \sum_{i=1}^{m} u_i(x_i) + \pi.

Given that form, a willingness-to-pay function reflecting u can be
represented additively, as

    \hat{u}(x) = \sum_{i=1}^{m} u_i(x_i).
In many cases the additivity assumption provides practically
crucial simplification of offer set elicitation. In addition to
compactness, additivity dramatically simplifies MMP. If both sides provide
additive ˆu representations, the globally optimal match reduces to
finding the optimal match separately for each attribute.
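As an illustration of this decomposition, the following sketch (our own, with invented per-attribute value tables) computes the optimal match and unit surplus attribute by attribute:

```python
def additive_mmp(buyer_u, seller_u):
    """buyer_u and seller_u are lists of dicts, one per attribute, mapping each
    attribute level to its additive term u_i(x_i). Returns the surplus-maximizing
    configuration and the resulting unit surplus."""
    config, surplus = [], 0.0
    for ub, us in zip(buyer_u, seller_u):
        # Per-attribute maximization of the difference ub_i(x) - us_i(x).
        x = max(ub, key=lambda lvl: ub[lvl] - us[lvl])
        config.append(x)
        surplus += ub[x] - us[x]
    return tuple(config), surplus

buyer = [{"red": 5, "blue": 3}, {1: 0, 2: 4}]
seller = [{"red": 4, "blue": 1}, {1: 1, 2: 2}]
print(additive_mmp(buyer, seller))  # (('blue', 2), 4.0)
```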
A common scenario in procurement has the buyer define an
additive scoring function, while suppliers submit enumerated offer
points or indifference ranges. This model is still very amenable to
MMP: for each element in a supplier's enumerated set, we optimize
each attribute by finding the point in the supplier's allowable range
that is most preferred by the buyer.
A special type of scoring (more particularly, cost) function was
defined by Bichler and Kalagnanam [4] and called a configurable
offer. This idea is geared towards procurement auctions: assuming
suppliers are usually comfortable with expressing their preferences
in terms of cost that is quasi-linear in every attribute, they can
specify a price for a base offer, and additional cost for every change in
a specific attribute level. This model is essentially a pricing out
approach [17]. For this case, MMP can still be optimized on a
per-attribute basis. A similar idea has been applied to one-sided
iterative mechanisms [19], in which sellers refine prices on a
per-attribute basis at each iteration.
4.2 Multiattribute Utility Theory
Under MPI, the tradeoffs between the attributes in each subset
cannot be affected by the value of other attributes. For example,
when buying a PC, a weaker CPU may increase the importance of
the RAM compared to, say, the type of keyboard. Such
relationships cannot be expressed under an additive model.
Multiattribute utility theory (MAUT) develops various compact
representations of utility functions that are based on weaker
structural assumptions [17, 2]. There are several challenges in adapting
these techniques to multiattribute bidding. First, as noted above,
the theory is developed for utility functions, which may behave
differently from willingness-to-pay functions. Second, computational
efficiency of matching has not been an explicit goal of most work
in the area. Third, adapting such representations to iterative
mechanisms may be more challenging.
One representation that employs somewhat weaker assumptions
than additivity, yet retains the summation structure is the
generalized additive (GA) decomposition:

    u(x) = \sum_{j=1}^{J} f_j(x^j), \quad x^j \in X^j, \tag{2}

where the X^j are potentially overlapping sets of attributes, together
exhausting the space X.
A key point from our perspective is that the complexity of the
matching is similar to the complexity of optimizing a single
function, since the sum function is in the form (2) as well. Recent work
by Gonzales and Perny [15] provides an elicitation process for GA
decomposable preferences under certainty, as well as an
optimization algorithm for the GA decomposed function. The complexity of
exact optimization is exponential in the induced width of the graph.
However, to become operational for multiattribute bidding this
decomposition must be detectable and verifiable by statements over
preferences with respect to price outcomes. We are exploring this
topic in ongoing work [11].
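The closure of GA forms under subtraction, which underlies the complexity claim above, is easy to see concretely. In the sketch below (our own representation, not an API from the cited work), a GA function is a list of (attribute-subset, table) factors, and the matching objective ˆu_b − ˆu_s is obtained by concatenating factor lists:

```python
def ga_value(factors, x):
    """Evaluate a GA-decomposed function: factors is a list of (attrs, table)
    pairs, where table maps a tuple of levels of attrs to f_j(x^j)."""
    return sum(table[tuple(x[a] for a in attrs)] for attrs, table in factors)

def matching_objective(buyer_factors, seller_factors):
    """The difference of two GA functions is again in GA form (2): union the
    factor sets, negating the seller's tables. No expansion over the full
    configuration space is needed."""
    negated = [(attrs, {k: -v for k, v in tbl.items()})
               for attrs, tbl in seller_factors]
    return buyer_factors + negated
```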
5. SOLVING GMAP UNDER ALLOCATION
CONSTRAINTS
Theorems 2, 3, and 4 establish conditions under which GMAP
solutions must comprise elements from constituent MMP solutions.
In Sections 5.1 and 5.2, we show how to compute these GMAP
solutions, given the MMP solutions, under these conditions. In these
settings, traders that aggregate partners also aggregate
configurations; hence we refer to them simply as aggregating or
non-aggregating. Section 5.3 suggests a means to relax the linear
pricing restriction employed in these constructions. Section 5.4
provides strategies for allowing traders to aggregate partners and
restrict configuration aggregation at the same time.
5.1 Notation and Graphical Representation
Our clearing algorithms are based on network flow formulations
of the underlying optimization problem [1]. The network model is
based on a bipartite graph, in which nodes on the left side represent
buyers, and nodes on the right represent sellers. We denote the sets
of buyers and sellers by B and S, respectively.
We define two graph families, one for the case of non-aggregating
traders (called single-unit), and the other for the case of
aggregating traders (called multi-unit).⁴
For both types, a single directed
arc is placed from a buyer i ∈ B to a seller j ∈ S if and only if
MMP(i, j) is nonempty. We denote by T(i) the set of potential
trading partners of trader i (i.e., the nodes connected to buyer or
seller i in the bipartite graph).
In the single-unit case, we define the weight of an arc (i, j) as
w_ij = σ(MMP(i, j)). Note that free disposal lets a buy offer
receive a larger quantity than desired (and similarly for sell offers).
For the multi-unit case, the weights are w_ij = σ¹(MMP(i, j)),
and we associate the quantity q̄_i with the node for trader i. We also
use the notation q_ij in the mathematical formulations to denote
partial fulfillment of q^t for t = MMP(i, j).
5.2 Handling Indivisibility and Aggregation
Constraints
Under the restrictions of Theorems 2, 3, or 4, and when the
solution to MMP is given, GMAP exhibits strong similarity to the
problem of clearing double auctions with assignment constraints
[16]. A match in our bipartite representation corresponds to a
potential trade in which assignment constraints are satisfied. Network
flow formulations have been shown to model this problem under
the assumption of indivisibility and aggregation for all traders. The
novelty in this part of our work is the use of generalized network
flow formulations for more complex cases where aggregation and
divisibility may be controlled by traders.
Initially we examine the simple case of no aggregation
(Theorem 2). Observe that the optimal allocation is simply the solution
to the well-known weighted assignment problem [1] on the
single-unit bipartite graph described above. The set of matches that
maximizes the total weight of arcs corresponds to the set of trades that
maximizes total surplus. Note that any form of (in)divisibility can
⁴In the next section, we introduce a hybrid form of graph
accommodating mixes of the two trader categories.
also be accommodated in this model via the constituent MMP
subproblems.
The next formulation solves the case in which all traders fall
under case 2 of Theorem 3; that is, all traders are aggregating and
divisible, and exhibit linear pricing. This case can be represented
using the following linear program, corresponding to our multi-unit
graph:
    \max \sum_{i \in B, j \in S} w_{ij} q_{ij}
    \text{s.t.} \sum_{i \in T(j)} q_{ij} \le \bar{q}_j, \quad j \in S
                \sum_{j \in T(i)} q_{ij} \le \bar{q}_i, \quad i \in B
                q_{ij} \ge 0, \quad j \in S,\ i \in B
Recall that the qij variables in the solution represent the number
of units that buyer i procures from seller j. This formulation is
known as the network transportation problem with inequality
constraints, for which efficient algorithms are available [1]. It is a
well known property of the transportation problem (and flow
problems on pure networks in general) that given integer input values,
the optimal solution is guaranteed to be integer as well. Figure 1
demonstrates the transformation of a set of bids to a transportation
problem instance.
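For illustration, the following sketch (hypothetical data; any LP solver would serve) assembles this transportation problem for scipy. Integrality of the optimum then comes for free from the pure-network structure noted above:

```python
import numpy as np
from scipy.optimize import linprog

w = np.array([[3.0, 1.0],        # w[i][j]: unit surplus of MMP(i, j)
              [2.0, 4.0]])
q_buy, q_sell = np.array([10, 6]), np.array([8, 7])   # quantities q̄_i, q̄_j

m, n = w.shape
c = -w.flatten()                 # linprog minimizes, so negate the surplus
A = np.zeros((m + n, m * n))
for i in range(m):
    A[i, i * n:(i + 1) * n] = 1  # sum_j q_ij <= q̄_i
for j in range(n):
    A[m + j, j::n] = 1           # sum_i q_ij <= q̄_j
b = np.concatenate([q_buy, q_sell])

res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(res.x.reshape(m, n), -res.fun)   # integral optimal q_ij, total surplus
```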
Figure 1: Multi-unit matching with two boolean attributes.
(a) Bids, with offers to buy in the left column and offers to
sell at right. q@p indicates an offer to trade q units at price p
per unit. Configurations are described in terms of constraints
on attribute values. (b) Corresponding multi-unit assignment
model. W represents arc weights (unit surplus), s represents
source (exogenous) flow, and t represents sink quantity.
The problem becomes significantly harder when aggregation is
given as an option to bidders, requiring various enhancements to
the basic multi-unit bipartite graph described above. In general,
we consider traders that are either aggregating or not, with either
divisible or AON offers.
Initially we examine a special case, which at the same time
demonstrates the hardness of the problem but still carries computational
advantages. We designate one side (e.g., buyers) as restrictive (AON
and non-aggregating), and the other side (sellers) as unrestrictive
(divisible and aggregating). This problem can be represented using
the following integer programming formulation:
    \max \sum_{i \in B, j \in S} w_{ij} q_{ij}
    \text{s.t.} \sum_{i \in T(j)} \bar{q}_i q_{ij} \le \bar{q}_j, \quad j \in S
                \sum_{j \in T(i)} q_{ij} \le 1, \quad i \in B
                q_{ij} \in \{0, 1\}, \quad j \in S,\ i \in B
    \tag{3}
This formulation is a restriction of the generalized assignment
problem (GAP) [13]. Although GAP is known to be NP-hard, it can
be solved relatively efficiently by exact or approximate algorithms.
GAP is more general than the formulation above as it allows
buy-side quantities (q̄_i above) to be different for each potential seller.
That this formulation is NP-hard as well (even the case of a single
seller corresponds to the knapsack problem), illustrates the drastic
increase in complexity when traders with different constraints are
admitted to the same problem instance.
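A direct encoding of formulation (3) with an off-the-shelf MIP solver is straightforward; the sketch below uses PuLP on a small invented instance (all data hypothetical, for illustration only):

```python
import pulp

buyers = {"b1": 40, "b2": 55}                 # AON quantities q̄_i
sellers = {"s1": 60, "s2": 50}                # seller capacities q̄_j
w = {("b1", "s1"): 12, ("b1", "s2"): 9,       # unit surplus w_ij for pairs
     ("b2", "s1"): 7,  ("b2", "s2"): 11}      # with nonempty MMP(i, j)

prob = pulp.LpProblem("gap_restriction", pulp.LpMaximize)
q = pulp.LpVariable.dicts("q", w.keys(), cat="Binary")    # q_ij in {0, 1}

prob += pulp.lpSum(w[i, j] * q[i, j] for (i, j) in w)     # objective of (3)
for j in sellers:                                         # seller capacity
    prob += (pulp.lpSum(buyers[i] * q[i, j]
                        for i in buyers if (i, j) in w) <= sellers[j])
for i in buyers:                                          # one seller per buyer
    prob += pulp.lpSum(q[i, j] for j in sellers if (i, j) in w) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: v.value() for k, v in q.items()})
```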
Other than the special case above, we found no advantage in
limiting AON constraints when traders may specify aggregation
constraints. Therefore, the next generalization allows any combination
of the two boolean constraints, that is, any trader chooses among
four bid types:
NI Bid AON and not aggregating.
AD Bid allows aggregation and divisibility.
AI Bid AON, allows aggregation (quantity can be aggregated across
configurations, as long as it sums to the whole amount).
ND No aggregation, divisibility (one trade, but smaller quantities
are acceptable).
To formulate an integer programming representation for the
problem, we introduce the following variables. Boolean (0/1) variables
ri and rj indicate whether buyer i and seller j participate in the
solution (used for AON traders). Another indicator variable, yij ,
applied to non-aggregating buyer i and seller j, is one iff i trades
with j. For aggregating traders, yij is not constrained.
    \max \sum_{i \in B, j \in S} w_{ij} q_{ij}    \tag{4a}
    \text{s.t.} \sum_{j \in T(i)} q_{ij} = \bar{q}_i r_i, \quad i \in AI_b    \tag{4b}
                \sum_{j \in T(i)} q_{ij} \le \bar{q}_i r_i, \quad i \in AD_b    \tag{4c}
                \sum_{i \in T(j)} q_{ij} = \bar{q}_j r_j, \quad j \in AI_s    \tag{4d}
                \sum_{i \in T(j)} q_{ij} \le \bar{q}_j r_j, \quad j \in AD_s    \tag{4e}
                q_{ij} \le \bar{q}_i y_{ij}, \quad i \in ND_b,\ j \in T(i)    \tag{4f}
                q_{ij} \le \bar{q}_j y_{ij}, \quad j \in NI_s,\ i \in T(j)    \tag{4g}
                \sum_{j \in T(i)} y_{ij} \le r_i, \quad i \in NI_b \cup ND_b    \tag{4h}
                \sum_{i \in T(j)} y_{ij} \le r_j, \quad j \in NI_s \cup ND_s    \tag{4i}
                q_{ij} \text{ integer}    \tag{4j}
                y_{ij}, r_j, r_i \in \{0, 1\}    \tag{4k}
Figure 2: Generalized network flow model. B1 is a buyer in
AD, B2 ∈ NI, B3 ∈ AI, B4 ∈ ND. V1 is a seller in ND,
V2 ∈ AI, V4 ∈ AD. The g values represent arc gains.
Problem (4) has additional structure as a generalized min-cost
flow problem with integral flow.⁵ A generalized flow network is
A generalized flow network is
a network in which each arc may have a gain factor, in addition
to the pure network parameters (which are flow limits and costs).
Flow in an arc is then multiplied by its gain factor, so that the flow
that enters the end node of an arc equals the flow that entered from
its start node, multiplied by the gain factor of the arc. The network
model can in turn be translated into an IP formulation that captures
such structure.
The generalized min-cost flow problem is well-studied and has
a multitude of efficient algorithms [1]. The faster algorithms are
polynomial in the number of arcs and the logarithm of the maximal
gain, that is, performance is not strongly polynomial but is
polynomial in the size of the input. The main benefit of this graphical
formulation to our matching problem is that it provides a very
efficient linear relaxation. Integer programming algorithms such as
branch-and-bound use solutions to the linear relaxation instance to
bound the optimal integer solution. Since network flow algorithms
are much faster than arbitrary linear programs (generalized network
flow simplex algorithms have been shown to run in practice only 2
or 3 times slower than pure network min-cost flow [1]), we expect
a branch-and-bound solver for the matching problem to show
improved performance when taking advantage of network flow
modeling.
The network flow formulation is depicted in Figure 2.
Non-restrictive traders are treated as in Figure 1. For a non-aggregating
buyer, a single unit from the source will saturate up to one of the
y_ij for all j, and be multiplied by q̄_i. If i ∈ ND, the end node of
y_ij will function as a sink that may drain up to q̄_i of the entering
flow. For i ∈ NI we use an indicator (0/1) arc r_i, on which the
flow is multiplied by q̄_i. Trader i trades the full quantity iff r_i = 1.
At the seller side, the end node of a q_ij arc functions as a source
for sellers j ∈ ND, in order to let the flow through the y_ij arcs be 0
or q̄_j. The flow is then multiplied by 1/q̄_j, so 0/1 flows enter an end
node which can drain either 1 or 0 units. For sellers j ∈ NI, arcs r_j
ensure AON similarly to the arcs r_i for buyers.
Having established this framework, we are ready to accommodate
more flexible versions of side constraints.
⁵Constraint (4j) could be omitted (yielding computational savings)
if non-integer quantities are allowed. Here and henceforth we
assume the harder problem, where divisibility is with respect to
integers.
The first
generalization is to replace the boolean AON constraint with divisibility down
to q, the minimal quantity. In our network flow instance we simply
need to turn the node of the constrained trader i (e.g., the node B3
in Figure 2) into a sink that can drain up to q̄_i − q_i units of flow. The
integer program (4) can also be easily changed to accommodate
this extension.
Using gains, we can also apply batch size constraints. If a trader
specifies a batch size β, we change the gain on the r arcs to β, and
set the available flow at its origin to the maximal number of batches,
q̄_i/β.
5.3 Nonlinear Pricing
A key assumption in handling aggregation up to this point is
linear pricing, which enables us to limit attention to a single unit
price. Divisibility without linear pricing allows expression of
concave willingness-to-pay functions, corresponding to convex
preference relations. Bidders may often wish to express non-convex offer
sets, for example, due to fixed costs or switching costs in
production settings [21].
We consider nonlinear pricing in the form of enumerated
payment schedules; that is, defining values ˆu(x, q) for a select set of
quantities q. For the indivisible case, these points are distinguished
in the offer set by satisfying the following:

    \exists \pi.\ (x, q, i, *, \pi) \in O^T_i \wedge \neg\exists q' < q.\ (x, q', i, *, \pi) \in O^T_i.

(cf. Definition 8, which defines the maximum quantity, q̄, as the
largest of these.) For the divisible case, the distinguished quantities
are those where the unit price changes, which can be formalized
similarly.
To handle nonlinear pricing, we augment the network to include
flow possibilities corresponding to each of the enumerated
quantities, plus additional structure to enforce exclusivity among them.
In other words, the network treats the offer for a given quantity as
in Section 5.2, and embeds this in an XOR relation to ensure that
each trader picks only one of these quantities. Since for each such
quantity choice we can apply Theorem 3 or 4, the solution we get
is in fact the solution to GMAP.
The network representation of the XOR relation (which can be
embedded into the network of Figure 2) is depicted in Figure 3. For
a trader i with K XOR quantity points, we define dummy variables
z^k_i, k = 1, . . . , K. Since we consider trades between every pair
of quantity points we also have q^k_{ij}, k = 1, . . . , K. For buyer
i ∈ AI with XOR points at quantities q̄^k_i, we replace (4b) with the
following constraints:

    \sum_{j \in T(i)} q^k_{ij} = \bar{q}^k_i z^k_i, \quad k = 1, \ldots, K
    \sum_{k=1}^{K} z^k_i = r_i
    z^k_i \in \{0, 1\}, \quad k = 1, \ldots, K
    \tag{5}
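In an integer-programming encoding, the XOR semantics amount to a handful of linear rows. A PuLP fragment for a single AI buyer with hypothetical XOR points (continuing the style of the earlier sketch; names and data are our own) might read:

```python
import pulp

K, qbar = 3, {1: 6, 2: 3, 3: 5}        # XOR quantity points q̄_i^k of buyer i
sellers = ["s1", "s2"]

prob = pulp.LpProblem("xor_points", pulp.LpMaximize)
r = pulp.LpVariable("r_i", cat="Binary")
z = pulp.LpVariable.dicts("z", range(1, K + 1), cat="Binary")
q = pulp.LpVariable.dicts(
    "q", [(k, j) for k in range(1, K + 1) for j in sellers],
    lowBound=0, cat="Integer")

for k in range(1, K + 1):              # trade exactly q̄_i^k units iff z_i^k = 1
    prob += pulp.lpSum(q[k, j] for j in sellers) == qbar[k] * z[k]
prob += pulp.lpSum(z[k] for k in range(1, K + 1)) == r   # pick one point iff r = 1
```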
5.4 Homogeneity Constraints
The model (4) handles constraints over the aggregation of
quantities from different trading partners. When aggregation is allowed,
the formulation permits trades involving arbitrary combinations of
configurations. A homogeneity constraint [4] restricts such
combinations, by requiring that configurations aggregated in an overall
deal must agree on some or all attributes.
Figure 3: Extending the network flow model to express an XOR
over quantities. B2 has 3 XOR points for 6, 3, or 5 units.
In the presence of homogeneity constraints, we can no longer
apply the convenient separation of GMAP into MMP plus global
bipartite optimization, as the solution to GMAP may include trades
not part of any MMP solution. For example, let buyer b specify
an offer for maximum quantity 10 of various acceptable
configurations, with a homogeneity constraint over the attribute color.
This means b is willing to aggregate deals over different trading
partners and configurations, as long as all are the same color. If
seller s can provide 5 blue units or 5 green units, and seller s' can
provide only 5 green units, we may prefer that b and s trade on
green units, even if the local surplus of a blue trade is greater.
Let {x1, . . . , xH} be attributes that some trader constrains to
be homogeneous. To preserve the network flow framework, we
need to consider, for each trader, every point in the product domain
of these homogeneous attributes. Thus, for every assignment ˆx
to the homogeneous attributes, we compute MMP(b, s) under the
constraint that configurations are consistent with ˆx. We apply the
same approach as in Section 5.3: solve the global optimization,
such that the alternative ˆx assignments for each trader are combined
under XOR semantics, thus enforcing homogeneity constraints.
The size of this network is exponential in the number of
homogeneous attributes, since we need a node for each point in the product
domain of all the homogeneous attributes of each trader.⁶ Hence
this solution method will only be tractable in applications where the
traders can be limited to a small number of homogeneous attributes.
It is important to note that the graph needs to include a node only
for each point that potentially matches a point of the other side. It
is therefore possible to make the problem tractable by limiting one
of the sides to a less expressive bidding language, and by that limit
the set of potential matches. For example, if sellers submit bounded
sets of XOR points, we only need to consider the points in the
combined set offered by the sellers, and the reduction to network flow
is polynomial regardless of the number of homogeneous attributes.
If such simplifications do not apply, it may be preferable to solve
the global problem directly as a single optimization problem. We
provide the formulation for the special case of divisibility (with
respect to integers) and configuration parity. Let i index buyers, j
sellers, and h ∈ {1, . . . , H} the homogeneous attributes. Variable
x^h_{ij} ∈ X_h represents the value of attribute X_h in the trade
between buyer i and seller j. Integer variable q_ij represents the
quantity of the trade (zero for no trade) between i and j.
⁶If traders differ on which attributes they express such constraints,
we can limit consideration to the relevant alternatives. The
complexity will still be exponential, but in the maximum number of
homogeneous attributes for any pair of traders.
    \max \sum_{i \in B, j \in S} \left[ \hat{u}^B_i(x_{ij}, q_{ij}) - \hat{u}^S_j(x_{ij}, q_{ij}) \right]
    \text{s.t.} \sum_{j \in S} q_{ij} \le \bar{q}_i, \quad i \in B
                \sum_{i \in B} q_{ij} \le \bar{q}_j, \quad j \in S
                x^h_{1j} = x^h_{2j} = \cdots = x^h_{|B|j}, \quad j \in S,\ h \in \{1, \ldots, H\}
                x^h_{i1} = x^h_{i2} = \cdots = x^h_{i|S|}, \quad i \in B,\ h \in \{1, \ldots, H\}
    \tag{6}
Table 1 summarizes the mapping we presented from allocation
constraints to the complexity of solving GMAP. Configuration
parity is assumed for all cases but the first.
6. EXPERIMENTAL RESULTS
We approach the experimental aspect of this work with two
objectives. First, we seek a general idea of the sizes and types of
clearing problems that can be solved under given time constraints. We
also look to compare the performance of a straightforward integer
program as in (4) with an integer program that is based on the
network formulations developed here. Since we used CPLEX, a
commercial optimization tool, the second objective could be achieved
to the extent that CPLEX can take advantage of network structure
present in a model.
We found that in addition to the problem size (in terms of number
of traders), the number of aggregating traders plays a crucial role in
determining complexity. When most of the traders are aggregating,
problems of larger sizes can be solved quickly. For example, our
IP model solved instances with 600 buyers and 500 sellers, where
90% of them are aggregating, in less than two minutes. When the
aggregating ratio was reduced to 80% for the same data, solution
time was just under five minutes.
These results motivated us to develop a new network model.
Rather than treat non-aggregating traders as a special case, the new
model takes advantage of the single-unit nature of non-aggregating
trades (treating the aggregating traders as a special case). This new
model outperformed our other models on most problem instances,
exceptions being those where aggregating traders constitute a vast
majority (at least 80%).
This new model (Figure 4) has a single node for each
non-aggregating trader, with a single-unit arc designating a match to another
non-aggregating trader. An aggregating trader has a node for each
potential match, connected (via y arcs) to a mutual source node.
Unlike the previous model we allow fractional flow for this case,
representing the traded fraction of the buyer's total quantity.⁷
We tested all three models on random data in the form of
bipartite graphs encoding MMP solutions. In our experiments, each
trader has a maximum quantity uniformly distributed over [30, 70],
and minimum quantity uniformly distributed from zero to maximal
quantity. Each buyer/seller pair is selected as matching with
probability 0.75, with matches assigned a surplus uniformly distributed
over [10, 70]. Whereas the size of the problem is defined by the
number of traders on each side, the problem complexity depends
on the product |B| × |S|. The tests depicted in Figures 5-7 are
for the worst case |B| = |S|, with each data point averaged over
six samples. In the figures, the direct IP (4) is designated SW,
our first network model (Figure 2) NW, and our revised network
model (Figure 4) NW 2.
⁷Traded quantity remains integer.
Aggregation    | Hom. attr.  | Divisibility     | Linear pricing         | Technique                | Complexity
---------------|-------------|------------------|------------------------|--------------------------|--------------------
No aggregation | N/A         | Any              | Not required           | Assignment problem       | Polynomial
All aggregate  | None        | Down to 0        | Required               | Transportation problem   | Polynomial
One side       | None        | Aggr. side div.  | Aggr. side             | GAP                      | NP-hard
Optional       | None        | Down to q, batch | Required               | Generalized network flow | NP-hard
Optional       | Bounded     | Down to q, batch | Bounded size schedules | Generalized network flow | NP-hard
Optional       | Not bounded | Down to q, batch | Not required           | Nonlinear optimization   | Depends on ˆu(x, q)

Table 1: Mapping from combinations of allocation constraints to the solution methods of GMAP. "One side" means that one side
aggregates and is divisible, and the other side is restrictive. "Batch" means that traders may submit batch sizes.
Figure 4: Generalized network flow model. B1 is a buyer in
AD, B2 ∈ AI, B3 ∈ NI, B4 ∈ ND. V1 is a seller in AD,
V2 ∈ AI, V4 ∈ ND. The g values represent arc gains, and W
values represent weights.
Figure 5: Average performance of models when 30% of traders
aggregate.
Figure 6: Average performance of models when 50% of traders
aggregate.
Figure 7: Average performance of models when 70% of traders
aggregate.
Figure 8: Performance of models when varying the percentage of
aggregating traders.
Figure 8 shows how the various models are affected by a change
in the percentage of aggregating traders, holding problem size fixed.⁸
⁸All tests were performed on Intel 3.4 GHz processors with 2048
KB cache. Tests that did not complete by the one-hour time limit
were recorded as 4000 seconds.
Due to the integrality constraints, we could not test available
algorithms specialized for network-flow problems on our test
problems. Thus, we cannot fully evaluate the potential gain attributable
to network structure. However, the model we built based on the
insight from the network structure clearly provided a significant
speedup, even without using a special-purpose algorithm. Model
NW 2 provided speedups of a factor of 4-10 over the model SW.
This was consistent throughout the problem sizes, including the
smaller sizes for which the speedup is not visually apparent on the
chart.
7. CONCLUSIONS
The implementation and deployment of market exchanges
requires the development of bidding languages, information feedback
policies, and clearing algorithms that are suitable for the target
domain, while paying heed to the incentive properties of the resulting
mechanisms. For multiattribute exchanges, the space of feasible
such mechanisms is constrained by computational limitations
imposed by the clearing process. The extent to which the space of
feasible mechanisms may be quantified a priori will facilitate the
search for such exchanges in the full mechanism design problem.
In this work, we investigate the space of two-sided multiattribute
auctions, focusing on the relationship between constraints on the
offers traders can express through bids, and the resulting
computational problem of determining an optimal set of trades. We
developed a formal semantic framework for characterizing expressible
offers, and introduced some basic classes of restrictions. Our key
technical results identify sets of conditions under which the
overall matching problem can be separated into first identifying
optimal pairwise trades and subsequently optimizing combinations of
those trades. Based on these results, we developed network flow
models for the overall clearing problem, which facilitate
classification of problem versions by computational complexity, and provide
guidance for developing solution algorithms and relaxing bidding
constraints.
8. ACKNOWLEDGMENTS
This work was supported in part by NSF grant IIS-0205435, and
the STIET program under NSF IGERT grant 0114368. We are
grateful for comments from an anonymous reviewer. Some of the
underlying ideas were developed while the first two authors worked
at TradingDynamics Inc. and Ariba Inc. in 1999-2001 (cf. US
Patent 6,952,682). We thank Yoav Shoham, Kumar Ramaiyer, and
Gopal Sundaram for fruitful discussions about multiattribute
auctions in that time frame.
9. REFERENCES
[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows.
Prentice-Hall, 1993.
[2] F. Bacchus and A. Grove. Graphical models for preference and
utility. In Eleventh Conference on Uncertainty in Artificial
Intelligence, pages 3-10, Montreal, 1995.
[3] M. Bichler. The Future of e-Markets: Multi-Dimensional Market
Mechanisms. Cambridge U. Press, New York, NY, USA, 2001.
[4] M. Bichler and J. Kalagnanam. Configurable offers and winner
determination in multi-attribute auctions. European Journal of
Operational Research, 160:380-394, 2005.
[5] M. Bichler, M. Kaukal, and A. Segev. Multi-attribute auctions for
electronic procurement. In Proceedings of the 1st IBM IAC
Workshop on Internet Based Negotiation Technologies, 1999.
[6] C. Boutilier, T. Sandholm, and R. Shields. Eliciting bid taker
non-price preferences in (combinatorial) auctions. In Nineteenth
Natl. Conf. on Artificial Intelligence, pages 204-211, San Jose, 2004.
[7] F. Branco. The design of multidimensional auctions. RAND Journal
of Economics, 28(1):63-81, 1997.
[8] Y.-K. Che. Design competition through multidimensional auctions.
RAND Journal of Economics, 24(4):668-680, 1993.
[9] G. Debreu. Topological methods in cardinal utility theory. In
K. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Methods
in the Social Sciences. Stanford University Press, 1959.
[10] N. Economides and R. A. Schwartz. Electronic call market trading.
Journal of Portfolio Management, 21(3), 1995.
[11] Y. Engel and M. P. Wellman. Multiattribute utility representation for
willingness-to-pay functions. Tech. report, Univ. of Michigan, 2006.
[12] E. Fink, J. Johnson, and J. Hu. Exchange market for complex goods:
Theory and experiments. Netnomics, 6(1):21-42, 2004.
[13] M. L. Fisher, R. Jaikumar, and L. N. Van Wassenhove. A multiplier
adjustment method for the generalized assignment problem.
Management Science, 32(9):1095-1103, 1986.
[14] J. Gong. Exchanges for complex commodities: Search for optimal
matches. Master's thesis, University of South Florida, 2002.
[15] C. Gonzales and P. Perny. GAI networks for decision making under
certainty. In IJCAI-05 workshop on preferences, Edinburgh, 2005.
[16] J. R. Kalagnanam, A. J. Davenport, and H. S. Lee. Computational
aspects of clearing continuous call double auctions with assignment
constraints and indivisible demand. Electronic Commerce Research,
1(3):221-238, 2001.
[17] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives:
Preferences and Value Tradeoffs. Wiley, 1976.
[18] N. Nisan. Bidding and allocation in combinatorial auctions. In
Second ACM Conference on Electronic Commerce, pages 1-12,
Minneapolis, MN, 2000.
[19] D. C. Parkes and J. Kalagnanam. Models for iterative multiattribute
procurement auctions. Management Science, 51:435-451, 2005.
[20] T. Sandholm and S. Suri. Side constraints and non-price attributes in
markets. In IJCAI-01 Workshop on Distributed Constraint
Reasoning, Seattle, 2001.
[21] L. J. Schvartzman and M. P. Wellman. Market-based allocation with
indivisible bids. In AAMAS-05 Workshop on Agent-Mediated
Electronic Commerce, Utrecht, 2005.
[22] J. Shachat and J. T. Swarthout. Procurement auctions for
differentiated goods. Technical Report 0310004, Economics
Working Paper Archive at WUSTL, Oct. 2003.
[23] A. V. Sunderam and D. C. Parkes. Preference elicitation in proxied
multiattribute auctions. In Fourth ACM Conference on Electronic
Commerce, pages 214-215, San Diego, 2003.
[24] P. R. Wurman, M. P. Wellman, and W. E. Walsh. A parametrization
of the auction design space. Games and Economic Behavior,
35:304-338, 2001.
| bid;constraint;one-sided mechanism;partial specification;multiattribute auction;combinatorial auction;auction;global allocation;seller valuation function;preference;continuous double auction;multiattribute utility theory;semantic framework |
train_J-34 | (In)Stability Properties of Limit Order Dynamics | We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to butterfly effects - the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of absolute traders (who determine their prices independent of the current order book state) or relative traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks. | 1. INTRODUCTION
In recent years there has been an explosive increase in
the automation of modern equity markets. This increase
has taken place both in the exchanges, which are
increasingly computerized and offer sophisticated interfaces for
order placement and management, and in the trading
activity itself, which is ever more frequently undertaken by
software. The so-called Electronic Communication Networks (or
ECNs) that dominate trading in NASDAQ stocks are a
common example of the automation of the exchanges. On the
trading side, computer programs now are entrusted not only
with the careful execution of large block trades for clients
(sometimes referred to on Wall Street as program trading),
but with the autonomous selection of stocks, direction (long
or short) and volumes to trade for profit (commonly referred
to as statistical arbitrage).
The vast majority of equity trading is done via the
standard limit order market mechanism. In this mechanism,
continuous trading takes place via the arrival of limit
orders specifying whether the party wishes to buy or sell, the
volume desired, and the price offered. Arriving limit orders
that are entirely or partially executable with the best offers
on the other side are executed immediately, with any
volume not immediately executable being placed in a queue
(or book) ordered by price on the appropriate side (buy or
sell). (A detailed description of the limit order mechanism is
given in Section 3.) While traders have always been able to
view the prices at the top of the buy and sell books (known
as the bid and ask), a relatively recent development in
certain exchanges is the real-time revelation of the entire order
book - the complete distribution of orders, prices and
volumes on both sides of the exchange. With this revelation
has come the opportunity - and increasingly, the need - for
needfor modeling and exploiting limit order data and
dynamics. It is fair to say that market microstructure, as this area
is generally known, is a topic commanding great interest
both in the real markets and in the academic finance
literature. The opportunities and needs span the range from
the optimized execution of large trades to the creation of
stand-alone proprietary strategies that attempt to profit
from high-frequency microstructure signals.
In this paper we investigate a previously unexplored but
fundamental aspect of limit order microstructure: the
stability properties of the dynamics. Specifically, we are
interested in the following natural question: To what extent
are simple models of limit order markets either susceptible
or immune to butterfly effects - that is, the infliction of
large changes in important activity statistics (such as the
number of shares traded or the average price per share) by
only minor perturbations of the order sequence?
To examine this question, we consider two stylized but
natural models of the limit order arrival process. In the
absolute price model, buyers and sellers arrive with limit order
prices that are determined independently of the current state
of the market (as represented by the order books), though
they may depend on all manner of exogenous information
or shocks, such as time, news events, announcements from
the company whose shares are being traded, private signals
or state of the individual traders, etc. This process models
traditional fundamentals-based trading, in which market
participants each have some inherent but possibly varying
valuation for the good that in turn determines their limit
price.
In contrast, in the relative price model, traders express
their limit order prices relative to the best price offered in
their respective book (buy or sell). Thus, a buyer would
encode their limit order price as an offset ∆ (which may be
positive, negative, or zero) from the current bid pb, which is
then translated to the limit price pb +∆. Again, in addition
to now depending on the state of the order books, prices
may also depend on all manner of exogenous information.
The relative price model can be viewed as modeling traders
who, in addition to perhaps incorporating fundamental
external information on the stock, may also position their
orders strategically relative to the other orders on their side
of the book. A common example of such strategic
behavior is known as penny-jumping on Wall Street, in which
a trader who has an interest in buying shares quickly, but
still at a discount to placing a market order, will
deliberately position their order just above the current bid. More
generally, the entire area of modern execution optimization
[9, 10, 8] has come to rely heavily on the careful positioning
of limit orders relative to the current order book state. Note
that such positioning may depend on more complex features
of the order books than just the current bid and ask, but
the relative model is a natural and simplified starting point.
We remark that an alternate view of the two models is that
all traders behave in a relative manner, but with absolute
traders able to act only on a considerably slower time scale
than the faster relative traders.
How do these two models differ? Clearly, given any fixed
sequence of arriving limit order prices, we can choose to
express these prices either as their original (absolute) values,
or we can run the order book dynamical process and
transform each order into a relative difference with the top of
its book, and obtain identical results. The differences arise
when we consider the stability question introduced above.
Intuitively, in the absolute model a small perturbation in
the arriving limit price sequence should have limited (but
still some) effects on the subsequent evolution of the order
books, since prices are determined independently. For the
relative model this intuition is less clear. It seems possible
that a small perturbation could (for example) slightly
modify the current bid, which in turn could slightly modify the
price of the next arriving order, which could then slightly
modify the price of the subsequent order, and so on, leading
to an amplifying sequence of events.
Our main results demonstrate that these two models do
indeed have dramatically different stability properties. We
first show that for any fixed sequence of prices in the
absolute model, the modification of a single order has a bounded
and extremely limited impact on the subsequent evolution
of the books. In particular, we define a natural notion of
distance between order books and show that small
modifications can result in only constant distance to the original
books for all subsequent time steps. We then show that this
implies that for almost any standard statistic of market
activity - the executed volume, the average execution
price, and many others - the statistic can be influenced
only infinitesimally by small perturbations.
In contrast, we show that the relative model enjoys no
such stability properties. After giving specific (worst-case)
relative price sequences in which small perturbations
generate large changes in basic statistics (for example, altering the
number of shares traded by a factor of two), we proceed to
demonstrate that the difference in stability properties of the
two models is more than merely theoretical. Using
extensive INET (a major ECN for NASDAQ stocks) limit order
data and order book reconstruction code, we investigate the
empirical stability properties when the data is interpreted
as containing either absolute prices, relative prices, or
mixtures of the two. The theoretical predictions of stability and
instability are strongly borne out by the subsequent
experiments.
In addition to stability being of fundamental interest in
any important dynamical system, we believe that the
results described here provide food for thought on the
topics of market impact and the backtesting of quantitative
trading strategies (the attempt to determine hypothetical
past performance using historical data). They suggest that
one"s confidence that trading quietly and in small
volumes will have minimal market impact is linked to an
implicit belief in an absolute price model. Our results and the
fact that in the real markets there is a large and increasing
amount of relative behavior such as penny-jumping would
seem to cast doubts on such beliefs. Similarly, in a purely or
largely relative-price world, backtesting even low-frequency,
low-volume strategies could result in historical estimates of
performance that are not only unrelated to future
performance (the usual concern), but are not even accurate
measures of a hypothetical past.
The outline of the paper follows. In Section 2 we briefly
review the large literature on market microstructure. In
Section 3 we describe the limit order mechanism and our
formal models. Section 4 presents our most important
theoretical results, the 1-Modification Theorem for the absolute
price model. This theorem is applied in Section 5 to derive a
number of strong stability properties in the absolute model.
Section 6 presents specific examples establishing the
worstcase instability of the relative model. Section 7 contains
the simulation studies that largely confirm our theoretical
findings on INET market data.
2. RELATED WORK
As was mentioned in the Introduction, market
microstructure is an important and timely topic both in academic
finance and on Wall Street, and consequently has a large and
varied recent literature. Here we have space only to
summarize the main themes of this literature and to provide
pointers to further readings. To our knowledge the stability
properties of detailed limit order microstructure dynamics
have not been previously considered. (However, see Farmer
and Joshi [6] for an example and survey of other price
dynamic stability studies.)
On the more theoretical side, there is a rich line of work
examining what might be considered the game-theoretic
properties of limit order markets. These works model traders
and market-makers (who provide liquidity by offering both
buy and sell quotes, and profit on the difference) by utility
functions incorporating tolerance for risks of price
movement, large positions and other factors, and examine the
resulting equilibrium prices and behaviors. Common
findings predict negative price impacts for large trades, and price
effects for large inventory holdings by market-makers. An
excellent and comprehensive survey of results in this area
can be found in [2].
There is a similarly large body of empirical work on
microstructure. Major themes include the measurement of
price impacts, statistical properties of limit order books, and
attempts to establish the informational value of order books
[4]. A good overview of the empirical work can be found in
[7]. Of particular note for our interests is [3], which
empirically studies the distribution of arriving limit order prices in
several prominent markets. This work takes a view of
arriving prices analogous to our relative model, and establishes
a power-law form for the resulting distributions.
There is also a small but growing number of works
examining market microstructure topics from a computer
science perspective, including some focused on the use of
microstructure in algorithms for optimized trade execution.
Kakade et al. [9] introduced limit order dynamics in
competitive analysis for one-way and volume-weighted average
price (VWAP) trading. Some recent papers have applied
reinforcement learning methods to trade execution using order
book properties as state variables [1, 5, 10].
3. MICROSTRUCTURE PRELIMINARIES
The following expository background material is adapted
from [9]. The market mechanism we examine in this paper
is driven by the simple and standard concept of a limit
order. Suppose we wish to purchase 1000 shares of Microsoft
(MSFT) stock. In a limit order, we specify not only the
desired volume (1000 shares), but also the desired price.
Suppose that MSFT is currently trading at roughly $24.07
a share (see Figure 1, which shows an actual snapshot of an
MSFT order book on INET), but we are only willing to buy
the 1000 shares at $24.04 a share or lower. We can choose to
submit a limit order with this specification, and our order
will be placed in a queue called the buy order book, which
is ordered by price, with the highest offered unexecuted buy
price at the top (often referred to as the bid). If there are
multiple limit orders at the same price, they are ordered
by time of arrival (with older orders higher in the book).
In the example provided by Figure 1, our order would be
placed immediately after the extant order for 5,503 shares
at $24.04; though we offer the same price, this order has
arrived before ours. Similarly, a sell order book for sell limit
orders is maintained, this time with the lowest sell price
offered (often referred to as the ask) at its top.
Thus, the order books are sorted from the most
competitive limit orders at the top (high buy prices and low sell
prices) down to less competitive limit orders. The bid and
ask prices together are sometimes referred to as the inside
market, and the difference between them as the spread. By
definition, the order books always consist exclusively of
unexecuted orders - they are queues of orders hopefully
waiting for the price to move in their direction.
Figure 1: Sample INET order books for MSFT.
How then do orders get (partially) executed? If a buy
(sell, respectively) limit order comes in above the ask
(below the bid, respectively) price, then the order is matched
with orders on the opposing books until either the incoming
order"s volume is filled, or no further matching is possible,
in which case the remaining incoming volume is placed in
the books.
For instance, suppose in the example of Figure 1 a buy
order for 2000 shares arrived with a limit price of $24.08.
This order would be partially filled by the two 500-share
sell orders at $24.069 in the sell books, the 500-share sell
order at $24.07, and the 200-share sell order at $24.08, for
a total of 1700 shares executed. The remaining 300 shares
of the incoming buy order would become the new bid of
the buy book at $24.08. It is important to note that the
prices of executions are the prices specified in the limit orders
already in the books, not the prices of the incoming order
that is immediately executed. Thus in this example, the
1700 executed shares would be at different prices. Note that
this also means that in a pure limit order exchange such as
INET, market orders can be simulated by limit orders
with extreme price values. In exchanges such as INET, any
order can be withdrawn or canceled by the party that placed
it any time prior to execution.
Every limit order arrives atomically and instantaneously
- there is a strict temporal sequence in which orders arrive,
and two orders can never arrive simultaneously. This gives
rise to the definition of the last price of the exchange, which
is simply the last price at which the exchange executed an
order. It is this quantity that is usually meant when people
casually refer to the (ticker) price of a stock.
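For concreteness, the following toy matching engine (our own sketch: single-share orders only, no cancellations) implements the price-then-time priority rules just described:

```python
import heapq

class Book:
    """Single-share limit order book. Buys are a max-heap and sells a min-heap
    on price; ties execute oldest-first via an arrival counter."""
    def __init__(self):
        self.buys, self.sells = [], []   # heaps of (signed price, arrival time)
        self.t = 0
        self.executions = []             # prices at which trades executed

    def submit(self, side, price):
        self.t += 1
        if side == "buy":
            if self.sells and self.sells[0][0] <= price:
                ask, _ = heapq.heappop(self.sells)
                self.executions.append(ask)        # executes at the book's price
            else:
                heapq.heappush(self.buys, (-price, self.t))
        else:
            if self.buys and -self.buys[0][0] >= price:
                neg_bid, _ = heapq.heappop(self.buys)
                self.executions.append(-neg_bid)
            else:
                heapq.heappush(self.sells, (price, self.t))
```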
3.1 Formal Definitions
We now provide a formal model for the limit order
process described above. In this model, limit orders arrive in a
temporal sequence, with each order specifying its limit price
and an indication of its type (buy or sell). Like the actual
exchanges, we also allow cancellation of a standing
(unexecuted) order in the books any time prior to its execution.
Without loss of generality we limit attention to a model in
which every order is for a single share; large order volumes
can be represented by 1-share sequences.
Definition 3.1. Let Σ = σ_1, . . . , σ_n be a sequence of limit
orders, where each σ_i has the form ⟨n_i, t_i, v_i⟩. Here n_i is an
order identifier, t_i is the order type (buy, sell, or cancel), and
v_i is the limit order value. In the case that t_i is a cancel, n_i
matches a previously placed order and v_i is ignored.
We have deliberately called vi in the definition above the
limit order value rather than price, since our two models
will differ in their interpretation of vi (as being absolute or
relative). In the absolute model, we do indeed interpret vi
as simply being the price of the limit order. In the
relative model, if the current order book configuration is (A, B)
(where A is the sell and B the buy book), the price of the
order is ask(A) + vi if ti is sell, and bid(B) + vi if ti is buy,
where by ask(X) and bid(X) we denote the price of the
order at the top of the book X. (Note vi can be negative.)
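With the toy Book above, the two models differ only in how an order value v is translated into a limit price. A minimal sketch follows; the default price for an empty book is our own assumption, made only so the function is total:

```python
def limit_price(book, side, v, model):
    """Absolute model: v is the price. Relative model: v is an offset from
    the top of the trader's own book (bid for buys, ask for sells)."""
    if model == "absolute":
        return v
    if side == "buy":
        return (-book.buys[0][0] if book.buys else 0) + v   # bid(B) + v
    return (book.sells[0][0] if book.sells else 0) + v      # ask(A) + v
```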
Our main interest in this paper is the effects that the
modification of a small number of limit orders can have on the
resulting dynamics. For simplicity we consider only
modifications to the limit order values, but our results generalize
to any modification.
Definition 3.2. A k-modification of Σ is a sequence Σ'
such that for exactly k indices i_1, . . . , i_k we have v'_{i_j} ≠ v_{i_j},
t'_{i_j} = t_{i_j}, and n'_{i_j} = n_{i_j}. For every ℓ ≠ i_j, j ∈ {1, . . . , k},
σ'_ℓ = σ_ℓ.
We now define the various quantities whose stability
properties we examine in the absolute and relative models. All of
these are standard quantities of common interest in financial
markets.
• volume(Σ): Number of shares executed (traded) in the
sequence Σ.
• average(Σ): Average execution price.
• close(Σ): Price of the last (closing) execution.
• lastbid(Σ): Bid at the end of the sequence.
• lastask(Σ): Ask at end of the sequence.
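All five quantities can be read off the toy engine directly (a minimal sketch; None marks quantities undefined on empty books or execution-free sequences):

```python
def market_stats(book):
    ex = book.executions
    return {
        "volume":  len(ex),
        "average": sum(ex) / len(ex) if ex else None,
        "close":   ex[-1] if ex else None,
        "lastbid": -book.buys[0][0] if book.buys else None,
        "lastask": book.sells[0][0] if book.sells else None,
    }
```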
4. THE 1-MODIFICATION THEOREM
In this section we provide our most important technical
result. It shows that in the absolute model, the effects that
the modification of a single order has on the resulting
evolution of the order books is extremely limited. We then apply
this result to derive strong stability results for all of the
aforementioned quantities in the absolute model.
Throughout this section, we consider an arbitrary order
sequence Σ in the absolute model, and any 1-modification
Σ' of Σ. At any point (index) i in the two sequences we shall
use (A1, B1) to denote the sell and buy books (respectively)
in Σ, and (A2, B2) to denote the sell and buy books in Σ';
for notational convenience we omit explicitly superscripting
by the current index i. We will shortly establish that at all
times i, (A1, B1) and (A2, B2) are very close.
Although the order books are sorted by price, we will use
(for example) A1 ∪ {a2} = A2 to indicate that A2 contains
an order at some price a2 that is not present in A1, but that
otherwise A1 and A2 are identical; thus deleting the order
at a2 in A2 would render the books the same. Similarly,
B1 ∪ {b2} = B2 ∪ {b1} means B1 contains an order at price
b1 not present in B2, B2 contains an order at price b2 not
present in B1, and that otherwise B1 and B2 are identical.
Using this notation, we now define a set of stable system
states, where each state is composed from the order books
of the original and the modified sequences. Shortly we show
that if we change only one order"s value (price), we remain
in this set for any sequence of limit orders.
Definition 4.1. Let ab be the set of all states (A1, B1)
and (A2, B2) such that A1 = A2 and B1 = B2. Let ¯ab be
the set of states such that A1 ∪ {a2} = A2 ∪ {a1}, where
a1 ≠ a2, and B1 = B2. Let a¯b be the set of states such that
B1 ∪ {b2} = B2 ∪ {b1}, where b1 ≠ b2, and A1 = A2. Let ¯a¯b be
the set of states in which A1 = A2 ∪ {a1} and B1 = B2 ∪ {b1},
or in which A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}. Finally
we define S = ab ∪ ¯ab ∪ a¯b ∪ ¯a¯b as the set of stable states.
Theorem 4.1. (1-Modification Theorem) Consider any
sequence of orders Σ and any 1-modification Σ' of Σ. Then
the order books (A1, B1) and (A2, B2) determined by Σ and
Σ' lie in the set S of stable states at all times.
Figure 2: Diagram representing the set S of stable
states and the possible transitions within it after the
modification.
The idea of the proof of this theorem is contained in
Figure 2, which shows a state transition diagram labeled by the
categories of stable states. This diagram describes all
transitions that can take place after the arrival of the order on
which Σ and Σ differ. The following establishes that
immediately after the arrival of this differing order, the state lies
in S.
Lemma 4.2. If at any time the current books (A1, B1) and
(A2, B2) are in the set ab (and thus identical), then
modifying the price of the next order keeps the state in S.
Proof. Suppose the arriving order is a sell order and we
change it from a1 to a2; assume without loss of generality
that a1 > a2. If neither order is executed immediately, then
we move to state ¯ab; if both of them are executed then we
stay in state ab; and if only a2 is executed then we move to
state ¯a¯b. The analysis of an arriving buy order is similar.
Following the arrival of their only differing order, Σ and
Σ' are identical. We now give a sequence of lemmas showing
Figure 3: The state diagram when starting at state
¯ab. This diagram provides the intuition for Lemma 4.3.
that following the initial difference covered by Lemma 4.2,
the state remains in S forever on the remaining (identical)
sequence. We first show that from state ¯ab we remain in
S regardless the next order. The intuition of this lemma is
demonstrated in Figure 3.
Lemma 4.3. If the current state is in the set ¯ab, then for
any order the state will remain in S.
Proof. We first provide the analysis for the case of an
arriving sell order. Note that in ¯ab the buy books are identical
(B1 = B2). Thus either the arriving sell order is executed
with the same buy order in both buy books, or it is not
executed in both buy books. For the first case, the buy
books remain identical (the bid is executed in both) and the
sell books remain unchanged. For the second case, the buy
books remain unchanged and identical, and the sell books
have the new sell order added to both of them (and thus
still differ by one order).
Next we provide an analysis of the more subtle case where
the arriving item is a buy order. For this case we need to
take care of several different scenarios. The first is when the
top of both sell books (the ask) is identical. Then
regardless of whether the new buy order is executed or not, the
state remains in ¯ab (the analysis is similar to an arriving
sell order).
We are left to deal with the case where ask(A1) and ask(A2)
are different. Here we discuss two subcases: (a) ask(A1) =
a1 and ask(A2) = a2, and (b) ask(A1) = a1 and ask(A2) =
a'. Here a1 and a2 are as in the definition of ¯ab in
Definition 4.1, and a' is some other price. For subcase (a), by our
assumption a1 < a2, either (1) both asks get executed,
the sell books become identical, and we move to state ab;
(2) neither ask is executed and we remain in state ¯ab; or (3)
only ask(A1) = a1 is executed, in which case we move to
state ¯a¯b with A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}, where
b2 is the arriving buy order price. For subcase (b), either
(1) the buy order is executed in neither sell book and we remain in
state ¯ab; or (2) the buy order is executed in both sell books
and we stay in state ¯ab with A1 ∪ {a2} = A2 ∪ {a'}; or (3) only
ask(A1) = a1 is executed and we move to state ¯a¯b.
Lemma 4.4. If the current state is in the set a¯b, then for
any order the state will remain in S.
Lemma 4.5. If the current configuration is in the set ¯a¯b,
then for any order the state will remain in S.
The proofs of these two lemmas are omitted, but are
similar in spirit to that of Lemma 4.3. The next and final lemma
deals with cancellations.
Lemma 4.6. If the current order book state lies in S, then
following the arrival of a cancellation it remains in S.
Proof. When a cancellation order arrives, one of the
following possibilities holds: (1) the order is still in both sets of
books, (2) it is not in either of them and (3) it is only in one
of them. For the first two cases it is easy to see that the
cancellation effect is identical on both sets of books, and thus
the state remains unchanged. For the case when the order
appears only in one set of books, without loss of generality
we assume that the cancellation cancels a buy order at b1.
Rather than removing b1 from the book we can change it to
have price 0, meaning this buy order will never be executed
and is effectively canceled. Now, regardless of the state we were in, b1 is still in only one buy book (but with a different price), and thus we remain in the same state in S.
The proof of Theorem 4.1 follows from the above lemmas.
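To make the 1-Modification Theorem concrete, the following minimal Python sketch replays two order streams through a 1-share, price-time priority book and compares the sets of executed order IDs. The book mechanics, the random stream, and all names here are our own illustrative assumptions, not the authors' reconstruction code; on such absolute-price streams the symmetric difference should stay at most 2, as the volume results of the next section predict.

```python
import heapq
import random

# A 1-share, price-time priority limit order book (illustrative sketch).
class Book:
    def __init__(self):
        self.buys = []    # max-heap via negated prices: (-price, order id)
        self.sells = []   # min-heap: (price, order id)

    def arrive(self, side, price, oid):
        """Process one limit order; return the pair of executed ids, if any."""
        if side == 'buy':
            if self.sells and self.sells[0][0] <= price:
                return oid, heapq.heappop(self.sells)[1]   # trade at the ask
            heapq.heappush(self.buys, (-price, oid))
        else:
            if self.buys and -self.buys[0][0] >= price:
                return oid, heapq.heappop(self.buys)[1]    # trade at the bid
            heapq.heappush(self.sells, (price, oid))
        return None

def executed_ids(seq):
    book, done = Book(), set()
    for oid, (side, price) in enumerate(seq):
        hit = book.arrive(side, price, oid)
        if hit:
            done.update(hit)
    return done

rng = random.Random(0)
orders = [(rng.choice(['buy', 'sell']), rng.randint(90, 110))
          for _ in range(500)]
modified = list(orders)
side, price = modified[17]
modified[17] = (side, price + 3)     # a 1-modification: one price changed
diff = executed_ids(orders) ^ executed_ids(modified)
print(len(diff))                     # Lemma 5.1 below predicts at most 2
```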
5. ABSOLUTE MODEL STABILITY
In this section we apply the 1-Modification Theorem to
show strong stability properties for the absolute model. We
begin with an examination of the executed volume.
Lemma 5.1. Let Σ be any sequence and Σ′ be any 1-modification of Σ. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2.
Proof. By Theorem 4.1 we know that at each stage the
books differ by at most two orders. Now since the union of
the IDs of the executed orders and the order books is always
identical for both sequences, this implies that the executed
orders can differ by at most two.
Corollary 5.2. Let Σ be any sequence and Σ′ be any k-modification of Σ. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2k.
An order sequence Σ′ is a k-extension of Σ if Σ can be obtained by deleting some k orders in Σ′.
Lemma 5.3. Let Σ be any sequence and let Σ′ be any k-extension of Σ. Then the sets of executed orders generated by Σ and Σ′ differ by at most 2k.
This lemma is the key to obtaining our main absolute model volume result below. We use edit(Σ, Σ′) to denote the standard edit distance between the sequences Σ and Σ′ - the minimal number of substitutions, insertions, and deletions of orders needed to change Σ into Σ′.
Theorem 5.4. Let Σ and Σ′ be any absolute model order sequences. If edit(Σ, Σ′) ≤ k, then the sets of executed orders generated by Σ and Σ′ differ by at most 4k. In particular, |volume(Σ) − volume(Σ′)| ≤ 4k.
Proof. We first define the sequence Σ̃, which is the intersection of Σ and Σ′. Since Σ and Σ′ are at most k apart, by at most k insertions we can change Σ̃ into either Σ or Σ′, so by Lemma 5.3 its set of executed orders is at most 2k from each of theirs. Thus the sets of executed orders of Σ and Σ′ are at most 4k apart.
5.1 Spread Bounds
Theorem 5.4 establishes strong stability for executed
volume in the absolute model. We now turn to the quantities
that involve execution prices as opposed to volume alone
- namely, average(Σ), close(Σ), lastbid(Σ) and lastask(Σ).
For these results, unlike executed volume, a condition must
hold on Σ in order for stability to occur. This condition
is expressed in terms of a natural measure of the spread of
the market, or the gap between the buyers and sellers. We
motivate this condition by first showing that without it, by
changing one order, we can change average(Σ) by any
positive value x.
Lemma 5.5. There exists a Σ such that for any x ≥ 0, there is a 1-modification Σ′ of Σ such that average(Σ′) = average(Σ) + x.
Proof. Let Σ be a sequence of alternating sell and buy
orders in which each seller offers p and each buyer p + x,
and the first order is a sell. Then all executions take place
at the ask, which is always p, and thus average(Σ) = p.
Now suppose we modify only the first sell order to be at
price p+1+x. This initial sell order will never be executed,
and now all executions take place at the bid, which is always
p + x.
Similar instability results can be shown to hold for the
other price-based quantities. This motivates the
introduction of a quantity we call the second spread of the order books, defined as the difference between the price of the second-lowest order in the sell book and the price of the second-highest order in the buy book (as opposed to the bid-ask difference, which is commonly called the spread). We note that in a liquid stock,
such as those we examine experimentally in Section 7, the
second spread will typically be quite small and in fact almost
always equal to the spread.
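As a small illustration (with hypothetical books represented as plain lists of resting prices), the spread, the second spread, and the k-spread used later in Section 5.2 can all be computed by the same one-liner:

```python
def k_spread(buys, sells, k):
    # k-th lowest sell price minus k-th highest buy price; k = 1 is the
    # ordinary bid-ask spread, k = 2 is the "second spread" defined above.
    return sorted(sells)[k - 1] - sorted(buys, reverse=True)[k - 1]

print(k_spread([10, 9, 8], [12, 12, 14], 1))   # spread: 12 - 10 = 2
print(k_spread([10, 9, 8], [12, 12, 14], 2))   # second spread: 12 - 9 = 3
```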
In this subsection we consider changes in the sequence
only after an initialization period, and sequences such that
the second spread is always defined after the time we make a
change. We define s2(Σ) to be the maximum second spread
in the sequence Σ following the change.
Theorem 5.6. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ s2(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ s2(Σ)
where s2(Σ) is the maximum second spread in Σ following the 1-modification.
Proof. We provide the proof for the last bid; the proof for the last ask is similar. The proof relies on Theorem 4.1 and considers states in the stable set S. For states ab and ¯ab, the bid is identical. Let bid(X), sb(X), and ask(X) be the bid, the second-highest buy order, and the ask of a sequence X. Now recall that in state a¯b the sell books are identical, and the two buy books are identical except for one order. Thus
bid(Σ) + s2(Σ) ≥ sb(Σ) + s2(Σ) ≥ ask(Σ) = ask(Σ′) ≥ bid(Σ′).
Now it remains to bound bid(Σ′) from below. Here we use the fact that the bid of the modified sequence is at least the second-highest buy order in the original sequence, because the books differ in only one order. Since
bid(Σ′) ≥ sb(Σ) ≥ ask(Σ) − s2(Σ) ≥ bid(Σ) − s2(Σ),
we have that |bid(Σ) − bid(Σ′)| ≤ s2(Σ), as desired.
In state ¯a¯b, for one of the sequences the books contain an additional buy order and an additional sell order. First suppose that the books containing the additional orders belong to the original sequence Σ. If the bid is not the additional order we are done; otherwise we have
bid(Σ) ≤ ask(Σ) ≤ sb(Σ) + s2(Σ) ≤ bid(Σ′) + s2(Σ),
where sb(Σ) ≤ bid(Σ′) since the original buy book has only one additional order.
Now assume that the books with the additional orders belong to the modified sequence Σ′. We have
bid(Σ) + s2(Σ) ≥ ask(Σ) ≥ ask(Σ′) ≥ bid(Σ′),
where we used the fact that ask(Σ) ≥ ask(Σ′) since the modified sequence has an additional sell order. Similarly, bid(Σ) ≤ bid(Σ′) since the modified buy book contains an additional order.
We note that the proof of Theorem 5.6 actually establishes
that the bid and ask of the original and modified sequences
are within s2(Σ) at all times.
Next we provide a technical lemma which relates the (first)
spread of the modified sequence to the second spread of the
original sequence.
Lemma 5.7. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then the spread of Σ′ is bounded by s2(Σ).
Proof. By the 1-Modification Theorem, we know that
the books of the modified sequence and the original sequence
can differ by at most one order in each book (buy and sell).
Therefore, the second-highest buy order in the original
sequence is always at most the bid in the modified sequence,
and the second-lowest sell order in the original sequence is
always at least the ask of the modified sequence.
We are now ready to state a stability result for the average
execution price in the absolute model. It establishes that in
highly liquid markets, where the executed volume is large
and the spread small, the average price is highly stable.
Theorem 5.8. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then
|average(Σ) − average(Σ′)| ≤ 2(pmax + s2(Σ))/volume(Σ) + s2(Σ),
where pmax is the highest execution price in Σ.
Proof. The proof will show that every execution in Σ, besides the execution of the modified order and the last execution, has a matching execution in Σ′ with a price differing by at most s2(Σ); it also uses the fact that pmax + s2(Σ) is a bound on any execution price in Σ′.
Referring to the proof of the 1-Modification Theorem, suppose we are in state ¯a¯b, where one sequence (which can be either Σ or Σ′) has an additional buy order b and an additional sell order a in its books. Without loss of generality we assume that the sequence with the additional orders is Σ. If the next execution does not involve a or b then clearly we have the same execution in both Σ and Σ′. Suppose that it involves a; there are two possibilities. Either a is the modified order, in which case we change the average price difference by (pmax + s2(Σ))/volume(Σ), and this can happen only once; or a was executed earlier in Σ′, and both executions involve an order whose limit price is a. By Lemma 5.7 the spread of both sequences is bounded by s2(Σ), which implies that the price of the execution in Σ′ was at most a + s2(Σ), while the execution in Σ is at price a, and thus the prices differ by at most s2(Σ).
In states ¯ab and a¯b, as long as we have concurrent executions in the two sequences, we know that the prices can differ by at most s2(Σ). If we have an execution in only one sequence, we either match it in state ¯a¯b, or charge it (pmax + s2(Σ))/volume(Σ) if we end at state ¯a¯b.
If we end in state ab, ¯ab, or a¯b, then every execution in states ¯ab or a¯b was matched to an execution in state ¯a¯b. If we end up in state ¯a¯b, we have one execution that is not matched, and thus we charge it (pmax + s2(Σ))/volume(Σ).
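To get a feel for the magnitudes the theorem implies, here is the bound evaluated at some hypothetical liquid-stock numbers (our own purely illustrative choices):

```python
# Hypothetical liquid-stock magnitudes (illustrative only).
p_max, s2, volume = 100.0, 0.01, 10_000
bound = 2 * (p_max + s2) / volume + s2
print(bound)   # ~0.03: a single modification barely moves the average price
```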
We next give a stability result for the closing price. We
first provide a technical lemma regarding the prices of
consecutive executions.
Lemma 5.9. Let Σ be any sequence. Then the prices of
two consecutive executions in Σ differ by at most s2(Σ).
Proof. Suppose the first execution is taken at time t;
its price is bounded below by the current bid and above by
the current ask. Now after this execution the bid is at least
the second highest buy order at time t, if the former bid
was executed and no higher buy orders arrived, and higher
otherwise. Similarly, the ask is at most the second lowest
sell order at time t. Therefore, the next execution price is
at least the second bid at time t and at most the second ask
at time t, which is at most s2(Σ) away from the bid/ask at
time t.
Lemma 5.10. Let Σ be any sequence and let Σ′ be a 1-modification of Σ. If volume(Σ) ≥ 2, then
|close(Σ) − close(Σ′)| ≤ s2(Σ).
Proof. We first deal with the case where the last execution occurs in both sequences simultaneously. By Theorem 5.6, both the ask and the bid of Σ and Σ′ are at most s2(Σ) apart at every time t. Since the price of the last execution is the ask (or bid) of each sequence at time t, we are done.
Next we deal with the case where the last execution among the two sequences occurs only in Σ. In this case, either the previous execution happened simultaneously in both sequences at time t, and thus all three executions are within the second spread of Σ at time t (the first execution in Σ by definition, the execution in Σ′ by arguments identical to those in the former case, and the third by Lemma 5.9); otherwise the previous execution happened only in Σ′ at time t, in which case the two executions are within the spread of Σ at time t (the execution of Σ by the same arguments as before, and the execution in Σ′ must be inside its spread at time t).
If the last execution happens only in Σ′, we know that this execution of Σ′ will be at most s2(Σ′) away from its previous execution, by Lemma 5.9. Together with the fact that an execution that happens in only one sequence must lie inside the spread of the other sequence (as long as the sequences are 1-modifications of each other), the proof is completed.
5.2 Spread Bounds for k-Modifications
As in the case of executed volume, we would like to extend
the absolute model stability results for price-based
quantities to the case where multiple orders are modified. Here our
results are weaker and depend on the k-spread, the distance
between the kth highest buy order and the kth lowest sell
order, instead of the second spread. (Looking ahead to
Section 7, we note that in actual market data for liquid stocks,
this quantity is often very small as well.) We use sk(Σ) to
denote the k-spread. As before, we assume that the k-spread
is always defined after an initialization period.
We first state the following generalization of Lemma 5.7.
Lemma 5.11. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. For ℓ ≥ 1, if sℓ+1(Σ) is always defined after the change, then sℓ(Σ′) ≤ sℓ+1(Σ).
The proof is similar to the proof of Lemma 5.7 and is omitted. A simple application of this lemma is the following: let Σ′ be any sequence which is an ℓ-modification of Σ. Then we have s2(Σ′) ≤ sℓ+2(Σ). Now, using the above lemma and a simple induction, we can obtain the following theorem.
Theorem 5.12. Let Σ be a sequence and let Σ′ be any k-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ ∑_{ℓ=1}^{k} sℓ+1(Σ) ≤ k·sk+1(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ ∑_{ℓ=1}^{k} sℓ+1(Σ) ≤ k·sk+1(Σ)
3. |close(Σ) − close(Σ′)| ≤ ∑_{ℓ=1}^{k} sℓ+1(Σ) ≤ k·sk+1(Σ)
4. |average(Σ) − average(Σ′)| ≤ ∑_{ℓ=1}^{k} [ 2(pmax + sℓ+1(Σ))/volume(Σ) + sℓ+1(Σ) ]
where sℓ(Σ) is the maximum ℓ-spread in Σ following the first modification.
We note that while these bounds depend on deeper
measures of spread for more modifications, we are working in
a 1-share order model. Thus in an actual market, where
single orders contain hundreds or thousands of shares, the
k-spread even for large k might be quite small and close to
the standard 1-spread in liquid stocks.
6. RELATIVE MODEL INSTABILITY
In the relative model the underlying assumption is that
traders try to exploit their knowledge of the books to
strategically place their orders. Thus if a trader wants her buy
order to be executed quickly, she may position it above the
current bid and be the first in the queue; if the trader is
patient and believes that the price trend is going to be
downward she will place orders deeper in the buy book, and so
on.
While in the previous sections we showed stability results
for the absolute model, here we provide simple examples
which show instability in the relative model for the
executed volume, last bid, last ask, average execution price and
the last execution price. In Section 7 we provide many
simulations on actual market data that demonstrate that this
instability is inherent to the relative model, and not due
to artificial constructions. In the relative model we assume
that for every sequence the ask and bid are always defined,
so the books have a non-empty initial configuration.
126
We begin by showing that in the relative model, even a
single modification can double the number of shares
executed.
Theorem 6.1. There is a sequence Σ and a 1-modification Σ′ of Σ such that volume(Σ′) ≥ 2·volume(Σ).
Proof. For concreteness, assume that at the beginning the ask is 10 and the bid is 8. The sequence Σ is composed of n buy orders with ∆ = 0, followed by n sell orders with ∆ = 0, and finally an alternating sequence of buy orders with ∆ = +1 and sell orders with ∆ = −1 of length 2n. Since the books before the alternating sequence contain n + 1 sell orders at 10 and n + 1 buy orders at 8, each buy-sell pair in the alternating part is matched and executed, but none of the initial 2n orders is executed, and thus volume(Σ) = n. Now we change the first buy order to have ∆ = +1. After the first 2n orders there are still no executions; however, the books are different: there are now n + 1 sell orders at 10, n buy orders at 9, and one buy order at 8. Each order in the alternating sequence is now executed against one of the former orders, and we have volume(Σ′) = 2n.
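The construction is easy to check numerically. The sketch below assumes, as in the proof, that relative buy (sell) orders are priced as offsets ∆ to the current bid (ask); the list-based books and function names are our own:

```python
# Books as plain lists of resting prices; buys are offset from the bid,
# sells from the ask, matching the proof's convention (illustrative only).
def volume_relative(orders, bid0=8, ask0=10):
    buys, sells, volume = [bid0], [ask0], 0
    for side, delta in orders:
        if side == 'buy':
            price = max(buys) + delta
            if sells and min(sells) <= price:
                sells.remove(min(sells)); volume += 1   # executes at the ask
            else:
                buys.append(price)
        else:
            price = min(sells) + delta
            if buys and max(buys) >= price:
                buys.remove(max(buys)); volume += 1     # executes at the bid
            else:
                sells.append(price)
    return volume

n = 50
orders = ([('buy', 0)] * n + [('sell', 0)] * n
          + [('buy', +1), ('sell', -1)] * n)
modified = [('buy', +1)] + orders[1:]   # change only the first order's offset
print(volume_relative(orders), volume_relative(modified))   # n versus 2n
```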
The next theorem shows that the spread-based stability results of Section 5.1 do not carry over to the relative model. Before providing the proof, we give its intuition. At the beginning the sell book contains orders at only two prices, which are far apart, with two orders at each price. Several buy orders then arrive; in the original sequence they are not executed, while in the modified sequence they are executed, leaving the sell book with only the orders at the high price. Then many sell orders followed by many buy orders arrive, such that in the original sequence they are executed only at the low price, while in the modified sequence they are executed at the high price.
Theorem 6.2. For any positive numbers s and x, there is a sequence Σ with s2(Σ) = s and a 1-modification Σ′ of Σ such that
• |close(Σ) − close(Σ′)| ≥ x
• |average(Σ) − average(Σ′)| ≥ x
• |lastbid(Σ) − lastbid(Σ′)| ≥ x
• |lastask(Σ) − lastask(Σ′)| ≥ x
Proof. Without loss of generality let us consider sequences
in which all prices are integer-valued, in which case the
smallest possible value for the second spread is 1; we provide
the proof for the case s2(Σ) = 2, but the s2(Σ) = 1 case is
similar.
We consider a sequence Σ such that, after an initialization period, there have been no executions, the buy book has 2 orders at price 10, and the sell book has two orders at price 12 and 2 orders at price 12 + y, where y is a positive integer that will be determined by the analysis. The original sequence Σ then consists of a buy order with ∆ = 0, followed by two buy orders with ∆ = +1, then 2y sell orders with ∆ = 0, and then 2y buy orders with ∆ = +1. We first note that s2(Σ) = 2, there are 2y executions, all at price 12, the last bid is 11, and the last ask is 12. Next we analyze the modified sequence: we change the first buy order from ∆ = 0 to ∆ = +1. Therefore, the next two buy orders with ∆ = +1 are executed, and afterwards the bid is 11 and the ask is 12 + y. Now the 2y sell orders are accumulated at 12 + y, and after the next y buy orders the bid is at 12 + y − 1. Therefore, at the end we have lastbid(Σ′) = 12 + y − 1, lastask(Σ′) = 12 + y, close(Σ′) = 12 + y, and average(Σ′) = (y/(y + 2))(12 + y) + (2/(y + 2))·12. Setting y = x + 2, we obtain the theorem for every property.
We note that while this proof was based on the fact that
there are two consecutive orders in the books which are far
(y) apart, we can provide a slightly more complicated
example in which all orders are close (at most 2 apart), yet still
one change results in large differences.
7. SIMULATION STUDIES
The results presented so far paint a striking contrast
between the absolute and relative price models: while the
absolute model enjoys provably strong stability over any fixed
event sequence, there exist specific sequences demonstrating great instability in the relative model. The worst-case nature of these results raises the question of the extent
to which such differences could actually occur in real
markets. In this section we provide indirect evidence on this
question by presenting simulation results exploiting a rich
source of real-market historical limit order sequence data.
By interpreting arriving limit order prices as either
absolute values, or by transforming them into differences with
the current bid and ask (relative model), we can perform
small modifications on the sequences and examine how
different various outcomes (volume traded, average price, etc.)
would be from what actually occurred in the market. These
simulations provide an empirical counterpart to the theory
we have developed. We emphasize that all such simulations
interpret the actual historical data as falling into either the
absolute or relative model, and are meaningful only within
the confines of such an interpretation. Nevertheless, we feel
they provide valuable empirical insight into the potential
(in)stability properties of modern equity limit order markets, and demonstrate that one's belief or hope in stability
largely relies on an absolute model interpretation. We also
investigate the empirical behavior of mixtures of absolute
and relative prices.
7.1 Data
The historical data used in our simulations is
commercially available limit order data from INET, the previously
mentioned electronic exchange for NASDAQ stocks. Broadly
speaking, this data consists of practically every single event
on INET regarding the trading of an individual
stock - every arriving limit order (price, volume, and sequence ID
number), every execution, and every cancellation of a
standing order - all timestamped in milliseconds. It is data
sufficient to recreate the precise INET order book in a given
stock on a given day and time.
We will report stability properties for three stocks:
Amazon, Nvidia, and Qualcomm (identified in the sequel by their
tickers, AMZN, NVDA and QCOM). These three provide
some range of liquidities (with QCOM having the greatest
and NVDA the least liquidity on INET) and other trading
properties. We note that the qualitative results of our
simulations were similar for several other stocks we examined.
7.2 Methodology
For our simulations we employed order-book
reconstruction code operating on the underlying raw data. The basic
format of each experiment was the following:
1. Run the order book reconstruction code on the
original INET data and compute the quantity of interest
(volume traded, average price, etc.)
2. Make a small modification to a single order, and
recompute the resulting value of the quantity of interest.
In the absolute model case, Step 2 is as simple as
modifying the order in the original data and re-running the order
book reconstruction. For the relative model, we must first
pre-process the raw data and convert its prices to relative
values, then make the modification and re-run the order
book reconstruction on the relative values.
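Schematically, the relative-model preprocessing replaces each raw price by its offset from the quote in force on arrival. A sketch of that step (the helper bid_ask_before, which replays the book and yields quotes, is hypothetical, as is the record layout):

```python
# `events` is a list of (side, price) limit orders in arrival order;
# `bid_ask_before` is a hypothetical helper that replays the book and
# yields the (bid, ask) pair in force when each order arrives.
def to_relative(events, bid_ask_before):
    rel = []
    for (side, price), (bid, ask) in zip(events, bid_ask_before(events)):
        ref = bid if side == 'buy' else ask
        rel.append((side, price - ref))    # keep only the offset Delta
    return rel
```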
The type of modification we examined was extremely small
compared to the volume of orders placed in these stocks:
namely, the deletion of a single randomly chosen order from
the sequence. Although a deletion is not a 1-modification, its edit distance from the original is 1, so we can apply Theorem 5.4. For each trading day examined, this single deleted order was
selected among those arriving between 10 AM and 3 PM, and
the quantities of interest were measured and compared at 3
PM. These times were chosen to include the busiest part of
the trading day but avoid the half hour around the opening
and closing of the official NASDAQ market (9:30 AM and
3:30 PM respectively), which are known to have different
dynamics than the central portion of the day.
We ran the absolute and relative model simulations on both the raw INET data and on a cleaned version of this data. In the cleaned version we removed all limit orders that were canceled in the actual market prior to their execution (along with the cancellations themselves). The reason is that such cancellations may often be the first step in the repositioning of orders - that is, the cancellation of an order followed by the submission of a replacement order at a different price. Not removing canceled orders allows the possibility of modified simulations in which the "same" order¹ is executed twice, which may magnify instability effects.
instability effects. Again, it is clear that neither the raw nor
the cleaned data can perfectly reflect what would have
happened under the deleted orders in the actual market.
However, the results from both the raw and the cleaned data are qualitatively similar. They differ mainly, as
expected, in the executed volume, where the instability
results for the relative model are much more dramatic in the
raw data.
7.3 Results
We begin with summary statistics capturing our overall
stability findings. Each row of the tables below contains a
ticker (e.g. AMZN) followed by either -R (for the uncleaned
or raw data) or -C (for the data with canceled orders
removed). For each of the approximately 250 trading days
in 2003, 1000 trials were run in which a randomly selected
order was deleted from the INET event sequence. For each
quantity of interest (volume executed, average price, closing
price and last bid), we show for both the absolute and relative models the average percentage change in the quantity induced by the deletion.
¹Here "same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.
The results confirm rather strikingly the qualitative
conclusions of the theory we have developed. In virtually every
case (stock, raw or cleaned data, and quantity) the
percentage change induced by a single deletion in the relative
model is many orders of magnitude greater than in the
absolute model, and shows that indeed butterfly effects may
occur in a relative model market. As just one specific
representative example, notice that for QCOM on the cleaned
data, the relative model effect of just a single deletion on the
closing price is in excess of a full percentage point. This is
a variety of market impact entirely separate from the more
traditional and expected kind generated by trading a large
volume of shares.
Stock    Date | volume: Rel / Abs | average: Rel / Abs
AMZN-R   2003 | 15.1%  / 0.04%    | 0.3%  / 0.0002%
AMZN-C   2003 | 0.69%  / 0.087%   | 0.36% / 0.0007%
NVDA-R   2003 | 9.09%  / 0.05%    | 0.17% / 0.0003%
NVDA-C   2003 | 0.73%  / 0.09%    | 0.35% / 0.001%
QCOM-R   2003 | 16.94% / 0.035%   | 0.21% / 0.0002%
QCOM-C   2003 | 0.58%  / 0.06%    | 0.35% / 0.0005%

Stock    Date | close: Rel / Abs  | lastbid: Rel / Abs
AMZN-R   2003 | 0.78% / 0.0001%   | 0.78% / 0.0007%
AMZN-C   2003 | 1.10% / 0.077%    | 1.11% / 0.001%
NVDA-R   2003 | 1.17% / 0.002%    | 1.18% / 0.08%
NVDA-C   2003 | 0.45% / 0.0003%   | 0.45% / 0.0006%
QCOM-R   2003 | 0.58% / 0.0001%   | 0.58% / 0.0004%
QCOM-C   2003 | 1.05% / 0.0006%   | 1.05% / 0.06%
In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the
introduction of greater perturbations of the event sequence in the two
models. Rather than deleting only a single order between
10 AM and 3 PM, in these experiments a growing number
of randomly chosen deletions was performed, and the
percentage change to the average price measured. As suggested
by the theory we have developed, for the absolute model the
change to the average price grows linearly with the number
of deletions and remains very small (note the vastly
different scales of the y-axis in the panels for the absolute and
relative models in the figure). For the relative model, it is
interesting to note that while small numbers of changes have
large effects (often causing average execution price changes
well in excess of 0.1 percent), the effect of larger numbers of changes levels off quite rapidly and consistently.
We conclude with an examination of experiments with a
mixture model. Even if one accepts a world in which traders
behave in either an absolute or relative manner, one would
be likely to claim that the market contains a mixture of both.
We thus ran simulations in which each arriving order in the
INET event streams was treated as an absolute price with
probability α, and as a relative price with probability 1−α.
Representative results for the average execution price in this
mixture model are shown in Figure 5 for AMZN and NVDA.
Perhaps as expected, we see a monotonic decrease in the
percentage change (instability) as the fraction of absolute
traders increases, with most of the reduction already being
realized by the introduction of just a small population of
absolute traders. Thus even in a largely relative-price world, a
small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for the closing price and last bid.
Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials. [Plot panels omitted.]
Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004 (left panel: AMZN-R, February 2004; right panel: NVDA-R, June 2004). Curves represent averages over 1000 trials. [Plot panels omitted.]
For the executed volume in the mixture model, however,
the findings are more curious. In Figure 6, we show how
the percentage change to the executed volume varies with
the absolute trader fraction α, for NVDA data that is both
raw and cleaned of cancellations. We first see that for this
quantity, unlike the others, the difference induced by the
cleaned and uncleaned data is indeed dramatic, as already
suggested by the summary statistics table above. But most
intriguing is the fact that the stability is not monotonically
increasing with α for either the cleaned or uncleaned data - the market with maximum instability is not a pure relative
price market, but occurs at some nonzero value for α. It was
in fact not obvious to us that sequences with this property
could even be artificially constructed, much less that they
would occur as actual market data. We have yet to find a
satisfying explanation for this phenomenon and leave it to
future research.
Figure 6: Percentage change to the executed volume (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). The left panel is for NVDA using the raw data that includes cancellations, while the right panel is on the cleaned data. Each curve corresponds to a single day of trading during June 2004. Curves represent averages over 1000 trials. [Plot panels omitted.]
8. ACKNOWLEDGMENTS
We are grateful to Yuriy Nevmyvaka of Lehman Brothers in New York for the use of his INET order book reconstruction code, and for valuable comments on the work presented
here. Yishay Mansour was supported in part by the IST
Programme of the European Community, under the PASCAL
Network of Excellence, IST-2002-506778, by a grant from
the Israel Science Foundation and an IBM faculty award.
9. REFERENCES
[1] D. Bertsimas and A. Lo. Optimal control of execution
costs. Journal of Financial Markets, 1:1-50, 1998.
[2] B. Biais, L. Glosten, and C. Spatt. Market
microstructure: a survey of microfoundations,
empirical results and policy implications. Journal of
Financial Markets, 8:217-264, 2005.
[3] J.-P. Bouchaud, M. Mezard, and M. Potters.
Statistical properties of stock order books: empirical
results and models. Quantitative Finance, 2:251-256,
2002.
[4] C. Cao, O. Hansch, and X. Wang. The informational
content of an open limit order book, 2004. AFA 2005
Philadelphia Meetings, EFA Maastricht Meetings
Paper No. 4311.
[5] R. Coggins, A. Blazejewski, and M. Aitken. Optimal
trade execution of equities in a limit order market. In
International Conference on Computational
Intelligence for Financial Engineering, pages 371-378,
March 2003.
[6] D. Farmer and S. Joshi. The price dynamics of
common trading strategies. Journal of Economic
Behavior and Organization, 29:149-171, 2002.
[7] J. Hasbrouck. Empirical market microstructure:
Economic and statistical perspectives on the dynamics
of trade in securities markets, 2004. Course notes,
Stern School of Business, New York University.
[8] R. Kissell and M. Glantz. Optimal Trading Strategies.
Amacom, 2003.
[9] S. Kakade, M. Kearns, Y. Mansour, and L. Ortiz.
Competitive algorithms for VWAP and limit order
trading. In Proceedings of the ACM Conference on
Electronic Commerce, pages 189-198, 2004.
[10] Y. Nevmyvaka, Y. Feng, and M. Kearns. Reinforcement
learning for optimized trade execution, 2006. Preprint.
| bid;market microstructure;absolute trader model;computational finance;penny-jumping;standard continuous limit-order mechanism;modern execution optimization;relative trader model;modern equity market;high-frequency microstructure signal;relative price model;quantitative trading strategy;electronic communication network |
train_J-35 | Efficiency and Nash Equilibria in a Scrip System for P2P Networks | A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking. | 1. INTRODUCTION
A common feature of many online distributed systems is
that individuals provide services for each other.
Peer-to-peer (P2P) networks (such as Kazaa [25] or BitTorrent [3])
have proved popular as mechanisms for file sharing, and
applications such as distributed computation and file storage
are on the horizon; systems such as Seti@home [24] provide
computational assistance; systems such as Slashdot [21]
provide content, evaluations, and advice forums in which people
answer each other's questions. Having individuals provide
each other with service typically increases the social welfare:
the individual utilizing the resources of the system derives a
greater benefit from it than the cost to the individual
providing it. However, the cost of providing service can still be
nontrivial. For example, users of Kazaa and BitTorrent may
be charged for bandwidth usage; in addition, in some
filesharing systems, there is the possibility of being sued, which
can be viewed as part of the cost. Thus, in many systems
there is a strong incentive to become a free rider and
benefit from the system without contributing to it. This is not
merely a theoretical problem; studies of the Gnutella [22]
network have shown that almost 70 percent of users share
no files and nearly 50 percent of responses are from the top
1 percent of sharing hosts [1].
Having relatively few users provide most of the service
creates a point of centralization; the disappearance of a small
percentage of users can greatly impair the functionality of
the system. Moreover, current trends seem to be leading
to the elimination of the altruistic users on which these
systems rely. These heavy users are some of the most
expensive customers ISPs have. Thus, as the amount of traffic has
grown, ISPs have begun to seek ways to reduce this traffic.
Some universities have started charging students for
excessive bandwidth usage; others revoke network access for it
[5]. A number of companies have also formed whose service
is to detect excessive bandwidth usage [19].
These trends make developing a system that encourages
a more equal distribution of the work critical for the
continued viability of P2P networks and other distributed online
systems. A significant amount of research has gone into
designing reputation systems to give preferential treatment
to users who are sharing files. Some of the P2P networks
currently in use have implemented versions of these
techniques. However, these approaches tend to fall into one of
two categories: either they are barter-like or reputational.
By barter-like, we mean that each agent bases its decisions
only on information it has derived from its own interactions.
Perhaps the best-known example of a barter-like system is
BitTorrent, where clients downloading a file try to find other
clients with parts they are missing so that they can trade,
thus creating a roughly equal amount of work. Since the
barter is restricted to users currently interested in a
single file, this works well for popular files, but tends to have
problems maintaining availability of less popular ones. An
example of a barter-like system built on top of a more
traditional file-sharing system is the credit system used by eMule
[8]. Each user tracks his history of interactions with other
users and gives priority to those he has downloaded from in
the past. However, in a large system, the probability that
a pair of randomly-chosen users will have interacted before
is quite small, so this interaction history will not be
terribly helpful. Anagnostakis and Greenwald [2] present a more
sophisticated version of this approach, but it still seems to
suffer from similar problems.
A number of attempts have been made at providing
general reputation systems (e.g. [12, 13, 17, 27]). The basic idea
is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. However, these
attempts tend to suffer from practical problems because they
implicitly view users as either good or bad, assume that
the good users will act according to the specified protocol,
and that there are relatively few bad users. Unfortunately,
if there are easy ways to game the system, once this
information becomes widely available, rational users are likely to
make use of it. We cannot count on only a few users being
bad (in the sense of not following the prescribed protocol).
For example, Kazaa uses a measure of the ratio of the
number of uploads to the number of downloads to identify good
and bad users. However, to avoid penalizing new users, they
gave new users an average rating. Users discovered that they
could use this relatively good rating to free ride for a while
and, once it started to get bad, they could delete their stored
information and effectively come back as a new user, thus
circumventing the system (see [2] for a discussion and [11]
for a formal analysis of this "whitewashing"). Thus Kazaa's
reputation system is ineffective.
This is a simple case of a more general vulnerability of
such systems to sybil attacks [6], where a single user
maintains multiple identities and uses them in a coordinated
fashion to get better service than he otherwise would. Recent
work has shown that most common reputation systems are
vulnerable (in the worst case) to such attacks [4]; however,
the degree of this vulnerability is still unclear. The analysis of the practical vulnerabilities, and the existence of systems that are immune to such attacks, remain areas of active research (e.g., [4, 28, 14]).
Simple economic systems based on scrip or money seem
to avoid many of these problems, are easy to implement and
are quite popular (see, e.g., [13, 15, 26]). However, they
have a different set of problems. Perhaps the most common
involve determining the amount of money in the system.
Roughly speaking, if there is too little money in the system
relative to the number of agents, then relatively few users
can afford to make a request. On the other hand, if there is
too much money, then users will not feel the need to
respond to a request; they have enough money already. A
related problem involves handling newcomers. If newcomers
are each given a positive amount of money, then the system
is open to sybil attacks. Perhaps not surprisingly, scrip
systems end up having to deal with standard economic woes
such as inflation, bubbles, and crashes [26].
In this paper, we provide a formal model in which to
analyze scrip systems. We describe a simple scrip system
and show that, under reasonable assumptions, for each fixed
amount of money there is a nontrivial Nash equilibrium
involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k. (Although we refer to our unit of scrip as the dollar, these are not real dollars, nor do we view them as convertible to dollars.)
An interesting
aspect of our analysis is that, in equilibrium, the
distribution of users with each amount of money is the distribution
that maximizes entropy (subject to the money supply
constraint). This allows us to compute the money supply that
maximizes efficiency (social welfare), given the number of
agents. It also leads to a solution for the problem of
dealing with newcomers: we simply assume that new users come
in with no money, and adjust the price of service (which is
equivalent to adjusting the money supply) to maintain the
ratio that maximizes efficiency. While assuming that new
users come in with no money will not work in all settings,
we believe the approach will be widely applicable. In
systems where the goal is to do work, new users can acquire
money by performing work. It should also work in
Kazaalike system where a user can come in with some resources
(e.g., a private collection of MP3s).
The rest of the paper is organized as follows. In Section 2,
we present our formal model and observe that it can be used
to understand the effect of altruists. In Section 3, we
examine what happens in the game under nonstrategic play, if all
agents use the same threshold strategy. We show that, in
this case, the system quickly converges to a situation where
the distribution of money is characterized by maximum
entropy. Using this analysis, we show in Section 4 that, under
minimal assumptions, there is a nontrivial Nash equilibrium
in the game where all agents use some threshold strategy.
Moreover, we show in Section 5 that the analysis leads to
an understanding of how to choose the amount of money
in the system (or, equivalently, the cost to fulfill a request)
so as to maximize efficiency, and also shows how to handle
new users. In Section 6, we discuss the extent to which our
approach can handle sybils and collusion. We conclude in
Section 7.
2. THE MODEL
To begin, we formalize providing service in a P2P network
as a non-cooperative game. Unlike much of the modeling in
this area, our model will model the asymmetric interactions
in a file sharing system in which the matching of players
(those requesting a file with those who have that particular
file) is a key part of the system. This is in contrast with
much previous work which uses random matching in a
prisoner"s dilemma. Such models were studied in the economics
literature [18, 7] and first applied to online reputations in
[11]; an application to P2P is found in [9].
This random-matching model fails to capture some salient
aspects of a number of important settings. When a request
is made, there are typically many people in the network who
can potentially satisfy it (especially in a large P2P network),
but not all can. For example, some people may not have
the time or resources to satisfy the request. The
random-matching process ignores the fact that some people may not
be able to satisfy the request. Presumably, if the person
matched with the requester could not satisfy the match, he
would have to defect. Moreover, it does not capture the fact
that the decision as to whether to volunteer to satisfy
the request should be made before the matching process,
not after. That is, the matching process does not capture
the fact that if someone is unwilling to satisfy the request,
there will doubtless be others who can satisfy it. Finally, the
actions and payoffs in the prisoner's dilemma game do not
obviously correspond to actual choices that can be made.
For example, it is not clear what defection on the part of
the requester means. In our model we try to deal with all
these issues.
Suppose that there are n agents. At each round, an agent
is picked uniformly at random to make a request. Each other
agent is able to satisfy this request with probability β > 0 at
all times, independent of previous behavior. The term β is
intended to capture the probability that an agent is busy, or
does not have the resources to fulfill the request. Assuming
that β is time-independent does not capture the intuition that being unable to fulfill a request at time t may well be correlated with being unable to fulfill it at time t + 1. We
believe that, in large systems, we should be able to drop the
independence assumption, but we leave this for future work.
In any case, those agents that are able to satisfy the request
must choose whether or not to volunteer to satisfy it. If
at least one agent volunteers, the requester gets a benefit
of 1 util (the job is done) and one of volunteers is chosen
at random to fulfill the request. The agent that fulfills the
request pays a cost of α < 1. As is standard in the literature,
we assume that agents discount future payoffs by a factor of
δ per time unit. This captures the intuition that a util now is
worth more than a util tomorrow, and allows us to compute
the total utility derived by an agent in an infinite game.
Lastly, we assume that with more players, requests come more often; thus we take the time between rounds to be 1/n. This captures the fact that the systems we want to model really process many requests in parallel, so we would expect the number of concurrent requests to be proportional to the number of users. (For large n, our model converges to one in which players make requests in real time, with the times between a given player's requests exponentially distributed with mean 1; the times between requests served by a single player are also exponentially distributed.)
Let G(n, δ, α, β) denote this game with n agents, a
discount factor of δ, a cost to satisfy requests of α, and a
probability of being able to satisfy requests of β. When the
latter two parameters are not relevant, we sometimes write
G(n, δ).
We use the following notation throughout the paper:
• p^t denotes the agent chosen in round t.
• B_i^t ∈ {0, 1} denotes whether agent i can satisfy the request in round t. B_i^t = 1 with probability β > 0, and B_i^t is independent of B_i^{t′} for all t′ ≠ t.
• V_i^t ∈ {0, 1} denotes agent i's decision about whether to volunteer in round t; 1 indicates volunteering. V_i^t is determined by agent i's strategy.
• v^t ∈ {j | V_j^t B_j^t = 1} denotes the agent chosen to satisfy the request. This agent is chosen uniformly at random from those who are willing (V_j^t = 1) and able (B_j^t = 1) to satisfy the request.
• u_i^t denotes agent i's utility in round t. A standard agent is one whose utility is determined as discussed in the introduction; namely, the agent gets a utility of 1 for a fulfilled request and utility −α for fulfilling a request. Thus, if i is a standard agent, then
u_i^t =
  1    if i = p^t and ∑_{j≠i} V_j^t B_j^t > 0
  −α   if i = v^t
  0    otherwise.
• U_i = ∑_{t=0}^{∞} δ^{t/n} u_i^t denotes the total utility for agent i. It is the discounted total of agent i's utility in each round. Note that the effective discount factor is δ^{1/n}, since an increase in n leads to a shortening of the time between rounds.
Now that we have a model of making and satisfying requests, we use it to analyze free riding. Take an altruist to be someone who always fulfills requests. Agent i might rationally behave altruistically if agent i's utility function has the following form, for some α′ > 0:
u_i^t =
  1    if i = p^t and ∑_{j≠i} V_j^t B_j^t > 0
  α′   if i = v^t
  0    otherwise.
Thus, rather than suffering a loss of utility when satisfying
a request, an agent derives positive utility from satisfying
it. Such a utility function is a reasonable representation of
the pleasure that some people get from the sense that they
provide the music that everyone is playing. For such
altruistic agents, playing the strategy that sets V t
i = 1 for all t
is dominant. While having a nonstandard utility function
might be one reason that a rational agent might use this
strategy, there are certainly others. For example a naive
user of filesharing software with a good connection might
well follow this strategy. All that matters for the
following discussion is that there are some agents that use this
strategy, for whatever reason.
As we have observed, such users seem to exist in some
large systems. Suppose that our system has a altruists.
Intuitively, if a is moderately large, they will manage to satisfy
most of the requests in the system even if other agents do
no work. Thus, there is little incentive for any other agent
to volunteer, because he is already getting full advantage of
participating in the system. Based on this intuition, it is a
relatively straightforward calculation to determine a value
of a that depends only on α, β, and δ, but not the number
n of players in the system, such that the dominant strategy
for all standard agents i is to never volunteer to satisfy any requests (i.e., V_i^t = 0 for all t).
Proposition 2.1. There exists an a that depends only on α, β, and δ such that, in G(n, δ, α, β) with at least a altruists, never volunteering is a dominant strategy for all standard agents.
Proof. Consider the strategy for a standard player j in the presence of a altruists. Even with no money, player j will get a request satisfied with probability 1 − (1 − β)^a just through the actions of these altruists. Thus, even if j is chosen to make a request in every round, the most additional expected utility he can hope to gain by having money is ∑_{k=1}^{∞} (1 − β)^a δ^k ≤ (1 − β)^a/(1 − δ). If (1 − β)^a/(1 − δ) < α or, equivalently, if a > log_{1−β}(α(1 − δ)), never volunteering is a dominant strategy.
Consider the following reasonable values for our parameters: β = .01 (so that each player can satisfy 1% of the requests), α = .1 (a low but non-negligible cost), δ = .9999/day (which corresponds to a yearly discount factor of approximately 0.95), and an average of 1 request per day per player. Then we only need a > 1145. While this is a large number, it is small relative to the size of a large P2P network.
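The threshold is easy to evaluate; a two-line check with the parameter values above (our own snippet):

```python
import math

# Evaluating the altruist threshold a > log_{1-beta}(alpha * (1 - delta))
# from Proposition 2.1 at the parameter values in the text.
beta, alpha, delta = 0.01, 0.1, 0.9999
a = math.log(alpha * (1 - delta)) / math.log(1 - beta)
print(a)   # about 1145.5, matching the a > 1145 figure above
```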
Current systems all have a pool of users behaving like our
altruists. This means that attempts to add a reputation
system on top of an existing P2P system to influence users
to cooperate will have no effect on rational users. To have
a fair distribution of work, these systems must be
fundamentally redesigned to eliminate the pool of altruistic users.
In some sense, this is not a problem at all. In a system
with altruists, the altruists are presumably happy, as are
the standard agents, who get almost all their requests
satisfied without having to do any work. Indeed, current P2P
network work quite well in terms of distributing content to
people. However, as we said in the introduction, there is
some reason to believe these altruists may not be around
forever. Thus, it is worth looking at what can be done to
make these systems work in their absence. For the rest of
this paper we assume that all agents are standard, and try
to maximize expected utility.
We are interested in equilibria based on a scrip system.
Each time an agent has a request satisfied he must pay the
person who satisfied it some amount. For now, we assume
that the payment is fixed; for simplicity, we take the amount
to be $1. We denote by M the total amount of money in
the system. We assume that M > 0 (otherwise no one will
ever be able to get paid).
In principle, agents are free to adopt a very wide
variety of strategies. They can make decisions based on the
names of other agents or use a strategy that is heavily
history dependent, and mix these strategies freely. To aid our
analysis, we would like to be able to restrict our attention to
a simpler class of strategies. The class of strategies we are
interested in is easy to motivate. The intuitive reason for
wanting to earn money is to cater for the possibility that an
agent will run out before he has a chance to earn more. On
the other hand, a rational agent with plenty of money would
not want to work, because by the time he has managed to
spend all his money, the util will have less value than the
present cost of working. The natural balance between these
two is a threshold strategy. Let Sk be the strategy where
an agent volunteers whenever he has less than k dollars and
not otherwise. Note that S0 is the strategy where the agent
never volunteers. While everyone playing S0 is a Nash
equilibrium (nobody can do better by volunteering if no one else
is willing to), it is an uninteresting one. As we will show
in Section 4, it is sufficient to restrict our attention to this
class of strategies.
We use K_i^t to denote the amount of money agent i has at time t. Clearly K_i^{t+1} = K_i^t unless agent i has a request satisfied, in which case K_i^{t+1} = K_i^t − 1, or agent i fulfills a request, in which case K_i^{t+1} = K_i^t + 1. Formally,
K_i^{t+1} =
  K_i^t − 1   if i = p^t, ∑_{j≠i} V_j^t B_j^t > 0, and K_i^t > 0
  K_i^t + 1   if i = v^t and K_{p^t}^t > 0
  K_i^t       otherwise.
The threshold strategy S_k is the strategy such that
V_i^t =
  1   if K_{p^t}^t > 0 and K_i^t < k
  0   otherwise.
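The round dynamics under threshold strategies are straightforward to simulate. The sketch below is our own code (not the authors'), tracks only the flow of money when everyone plays S_k, and uses parameter choices matching the experiments described in the next section:

```python
import random

# Round dynamics of G(n, delta, alpha, beta) when every agent plays S_k;
# utilities are ignored and only the evolution of money is tracked.
def simulate(n=1000, k=5, m=2, beta=0.01, rounds=3000, seed=0):
    rng = random.Random(seed)
    rich = (n * m) // k                    # extreme start: every agent
    money = [k] * rich + [0] * (n - rich)  # holds either $0 or $k
    for _ in range(rounds):
        p = rng.randrange(n)               # requester p^t
        if money[p] == 0:
            continue                       # broke requesters go unserved
        # i volunteers iff able (prob. beta) and below the threshold k
        vols = [i for i in range(n)
                if i != p and money[i] < k and rng.random() < beta]
        if vols:
            v = rng.choice(vols)           # v^t chosen uniformly
            money[p] -= 1
            money[v] += 1
    return money

wealth = simulate()
dist = [sum(1 for x in wealth if x == j) / len(wealth) for j in range(6)]
print(dist)   # empirical fraction of agents holding $0 ... $5
```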
3. THE GAME UNDER NONSTRATEGIC
PLAY
Before we consider strategic play, we examine what
happens in the system if everyone just plays the same strategy
Sk. Our overall goal is to show that there is some
distribution over money (i.e., the fraction of people with each
amount of money) such that the system converges to this
distribution in a sense to be made precise shortly.
Suppose that everyone plays Sk. For simplicity, assume
that everyone has at most k dollars. We can make this
assumption with essentially no loss of generality, since if
someone has more than k dollars, he will just spend money
until he has at most k dollars. After this point he will never
acquire more than k. Thus, eventually the system will be
in such a state. If M ≥ kn, no agent will ever be willing to
work. Thus, for the purposes of this section we assume that
M < kn.
From the perspective of a single agent, in (stochastic)
equilibrium, the agent is undergoing a random walk.
However, the parameters of this random walk depend on the
random walks of the other agents and it is quite complicated
to solve directly. Thus we consider an alternative analysis
based on the evolution of the system as a whole.
If everyone has at most k dollars, then the amount of money that an agent has is an element of {0, . . . , k}. If there are n agents, then the state of the game can be described by identifying how much money each agent has, so we can represent it by an element of S_{k,n} = {0, . . . , k}^{{1,...,n}}. Since the total amount of money is constant, not all of these states can arise in the game. For example, the state where each player has $0 is impossible to reach in any game with money in the system. Let m_S(s) = ∑_{i∈{1,...,n}} s(i) denote the total amount of money in the game at state s, where s(i) is the number of dollars that agent i has in state s. We want to consider only those states where the total money in the system is M, namely
S_{k,n,M} = {s ∈ S_{k,n} | m_S(s) = M}.
Under the assumption that all agents use strategy S_k, the evolution of the system can be treated as a Markov chain M_{k,n,M} over the state space S_{k,n,M}. It is possible to move from one state to another in a single round if, by choosing a particular agent to make a request and a particular agent to satisfy it, the amounts of money possessed by each agent become those in the second state. Therefore the probability of a transition from a state s to t is 0 unless there exist two agents i and j such that s(i′) = t(i′) for all i′ ∉ {i, j}, t(i) = s(i) + 1, and t(j) = s(j) − 1. In this case the probability of transitioning from s to t is the probability that j is chosen to spend a dollar and has someone willing and able to satisfy his request, (1/n)(1 − (1 − β)^{|{i′ | s(i′) ≠ k}| − I_j}), multiplied by the probability that i is chosen to satisfy the request, 1/(|{i′ | s(i′) ≠ k}| − I_j). Here I_j is 0 if j has k dollars and 1 otherwise (it is just a correction for the fact that j cannot satisfy his own request).
Let ∆^k denote the set of probability distributions on {0, . . . , k}. We can think of an element of ∆^k as describing the fraction of people with each amount of money. This is a useful way of looking at the system, since we typically don't care who has each amount of money, but just the fraction of people that have each amount. As before, not all elements of ∆^k are possible, given our constraint that the total amount of
money is M. Rather than thinking in terms of the total amount of money in the system, it will prove more useful to think in terms of the average amount of money each player has. Of course, the total amount of money in a system with n agents is M iff the average amount that each player has is m = M/n. Let ∆^k_m denote all distributions d ∈ ∆^k such that E(d) = m (i.e., ∑_{j=0}^{k} d(j)·j = m). Given a state s ∈ S_{k,n,M}, let d^s ∈ ∆^k_m denote the distribution of money in s. Our goal is to show that, if n is large, then there is a distribution d* ∈ ∆^k_m such that, with high probability, the Markov chain M_{k,n,M} will almost always be in a state s such that d^s is close to d*. Thus, agents can base their decisions about what strategy to use on the assumption that they will be in such a state.
We can in fact completely characterize the distribution
d∗
. Given a distribution d ∈ ∆k
, let
H(d) = −
X
{j:d(j)=0}
d(j) log(d(j))
denote the entropy of d. If $\Delta$ is a closed convex set of distributions, then it is well known that there is a unique distribution in $\Delta$ at which the entropy function takes its maximum value in $\Delta$. Since $\Delta^k_m$ is easily seen to be a closed convex set of distributions, it follows that there is a unique distribution in $\Delta^k_m$, which we denote $d^*_{k,m}$, whose entropy is greater than that of all other distributions in $\Delta^k_m$. We now show that, for n sufficiently large, the Markov chain $\mathcal{M}_{k,n,M}$ is almost surely in a state s such that $d^s$ is close to $d^*_{k,M/n}$. The
statement is correct under a number of senses of "close." For definiteness, we consider the Euclidean distance. Given $\varepsilon > 0$, let $S_{k,n,m,\varepsilon}$ denote the set of states s in $S_{k,n,mn}$ such that $\sum_{j=0}^{k} |d^s(j) - d^*_{k,m}(j)|^2 < \varepsilon$.
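By the standard Lagrange-multiplier argument, the maximum-entropy distribution on {0, ..., k} with mean m has the exponential form $d^*(j) \propto \lambda^j$ for some $\lambda > 0$, and its mean is increasing in $\lambda$. The following sketch (our own illustrative code, not from the paper) computes $d^*_{k,m}$ by bisection on $\lambda$:

```python
import numpy as np

def max_entropy_dist(k, m, tol=1e-12):
    """Return d* in Delta^k_m: the maximum-entropy distribution on
    {0,...,k} whose mean is m.  d*(j) is proportional to lam**j for
    some lam > 0, found by bisection since the mean increases in lam."""
    assert 0 < m < k, "the mean must lie strictly between 0 and k"
    j = np.arange(k + 1)

    def mean(lam):
        w = lam ** j
        return float((w * j).sum() / w.sum())

    lo, hi = 1e-9, 1e9
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    w = lo ** j
    return w / w.sum()
```

For instance, max_entropy_dist(5, 2) is the distribution against which the simulated states in the experiments below are compared.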
Given a Markov chain $\mathcal{M}$ over a state space $S$ and $S' \subseteq S$, let $X_{t,s,S'}$ be the random variable that denotes that $\mathcal{M}$ is in a state of $S'$ at time t, when started in state s.
Theorem 3.1. For all $\varepsilon > 0$, all k, and all m, there exists $n^*$ such that for all $n > n^*$ and all states $s \in S_{k,n,mn}$, there exists a time $t^*$ (which may depend on k, n, m, and $\varepsilon$) such that for $t > t^*$, we have $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$.
Proof. (Sketch) Suppose that at some time t, $\Pr(X_{t,s,s'})$ is uniform over all $s'$. Then the probability of being in a set of states is just the size of the set divided by the total number of states. A standard technique from statistical mechanics is to show that there is a concentration phenomenon around the maximum-entropy distribution [16]. More precisely, using a straightforward combinatorial argument, it can be shown that the fraction of states not in $S_{k,n,m,\varepsilon}$ is bounded by $p(n)/e^{cn}$, where p is a polynomial. This fraction clearly goes to 0 as n gets large. Thus, for sufficiently large n, $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$ if $\Pr(X_{t,s,s'})$ is uniform.
It is relatively straightforward to show that our Markov chain has a limit distribution $\pi$ over $S_{k,n,mn}$, such that for all $s, s' \in S_{k,n,mn}$, $\lim_{t\to\infty} \Pr(X_{t,s,s'}) = \pi_{s'}$. Let $P_{ij}$ denote the probability of transitioning from state i to state j. It is easily verified by an explicit computation of the transition probabilities that $P_{ij} = P_{ji}$ for all states i and j. It immediately follows from this symmetry that $\pi_s = \pi_{s'}$, so $\pi$ is uniform. After a sufficient amount of time, the distribution will be close enough to $\pi$ that the probabilities are again bounded by a constant, which is sufficient to complete the theorem.
[Figure 1: Distance from maximum-entropy distribution with 1000 agents. (Number of steps vs. Euclidean distance.)]
[Figure 2: Maximum distance from maximum-entropy distribution over $10^6$ timesteps. (Maximum distance vs. number of agents.)]
[Figure 3: Average time to get within .001 of the maximum-entropy distribution. (Time to reach distance .001 vs. number of agents.)]
We performed a number of experiments that show that
the maximum entropy behavior described in Theorem 3.1
arises quickly for quite practical values of n and t. The
first experiment showed that, even if n = 1000, we reach
the maximum-entropy distribution quickly. We averaged 10
runs of the Markov chain for k = 5 where there is enough
money for each agent to have $2 starting from a very extreme
distribution (every agent has either $0 or $5) and considered
the average time needed to come within various distances
of the maximum entropy distribution. As Figure 1 shows,
after 2,000 steps, on average, the Euclidean distance from
the average distribution of money to the maximum-entropy
distribution is .008; after 3,000 steps, the distance is down
to .001. Note that this is really only 3 real time units since
with 1000 players we have 1000 transactions per time unit.
We then considered how close the distribution stays to
the maximum entropy distribution once it has reached it.
To simplify things, we started the system in a state whose distribution was very close to the maximum-entropy distribution and ran it for $10^6$ steps, for various values of n.
As Figure 2 shows, the system does not move far from the
maximum-entropy distribution once it is there. For
example, if n = 5000, the system is never more than distance .001
from the maximum-entropy distribution; if n = 25, 000, it is
never more than .0002 from the maximum-entropy
distribution.
Finally, we considered more carefully how quickly the system converges to the maximum-entropy distribution for various values of n. There are approximately $k^n$ possible states, so the convergence time could in principle be quite large. However, we suspect that the Markov chain that arises here is rapidly mixing, which means that it will converge significantly faster (see [20] for more details about rapid mixing). We believe that the actual time needed is O(n). This behavior is illustrated in Figure 3, which shows that for our example chain (again averaged over 10 runs), after 3n steps, the Euclidean distance between the actual distribution of money in the system and the maximum-entropy distribution is less than .001.
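An experiment in the style of Figures 1-3 can be reproduced by combining the two sketches above; the code below is purely illustrative (certainly not the authors' exact harness) and runs the chain from the extreme initial state, reporting the squared Euclidean distance to the maximum-entropy distribution.

```python
import numpy as np

def distance_to_max_entropy(s, k, m):
    """Squared Euclidean distance between the empirical money
    distribution of state s and the maximum-entropy distribution."""
    d_s = np.bincount(np.asarray(s), minlength=k + 1) / len(s)
    return float(((d_s - max_entropy_dist(k, m)) ** 2).sum())

# n = 1000 agents, k = 5, an average of m = 2 dollars per agent,
# starting from the extreme state where agents hold either $0 or $5.
n, k, m, beta = 1000, 5, 2, 1.0
s = [0] * (3 * n // 5) + [5] * (2 * n // 5)  # mean holdings exactly $2
for step in range(3 * n):                    # roughly 3 "real" time units
    one_round(s, k, beta)
print(distance_to_max_entropy(s, k, m))
```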
4. THE GAME UNDER STRATEGIC PLAY
We have seen that the system is well behaved if the agents
all follow a threshold strategy; we now want to show that
there is a nontrivial Nash equilibrium where they do so (that
is, a Nash equilibrium where all the agents use Sk for some
k > 0.) This is not true in general. If δ is small, then agents
have no incentive to work. Intuitively, if future utility is
sufficiently discounted, then all that matters is the present,
and there is no point in volunteering to work. With small
δ, S0 is the only equilibrium. However, we show that for δ
sufficiently large, there is another equilibrium in threshold
strategies. We do this by first showing that, if every other
agent is playing a threshold strategy, then there is a best
response that is also a threshold strategy (although not
necessarily the same one). We then show that there must be
some (mixed) threshold strategy for which this best response
is the same strategy. It follows that this tuple of threshold
strategies is a Nash equilibrium.
As a first step, we show that, for all k, if everyone other than agent i is playing $S_k$, then there is a threshold strategy $S_{k'}$ that is a best response for agent i. To prove this, we need to assume that the system is close to the steady-state distribution (i.e., the maximum-entropy distribution). However, as long as δ is sufficiently close to 1, we can ignore what happens during the period that the system is not in steady state. (Formally, we need to define the strategies when the system is far from equilibrium; these far-from-equilibrium strategies will not affect the equilibrium behavior when n is large, since deviations from stochastic equilibrium are extremely rare.)
We have thus far considered threshold strategies of the
form Sk, where k is a natural number; this is a discrete set
of strategies. For a later proof, it will be helpful to have
a continuous set of strategies. If γ = k + γ , where k is
a natural number and 0 ≤ γ < 1, let Sγ be the strategy
that performs Sk with probability 1 − γ and Sk+1 with
probability γ. (Note that we are not considering arbitrary
mixed threshold strategies here, but rather just mixing
between adjacent strategies for the sole purpose of making out
strategies continuous in a natural way.) Theorem 3.1
applies to strategies Sγ (the same proof goes through without
change), where γ is an arbitrary nonnegative real number.
Theorem 4.1. Fix a strategy $S_\gamma$ and an agent i. There exists $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$, $n > n^*$, and every agent other than i is playing $S_\gamma$ in game $G(n, \delta)$, then there is an integer $k'$ such that the best response for agent i is $S_{k'}$. Either $k'$ is unique (that is, there is a unique best response that is also a threshold strategy), or there exists an integer $k'$ such that $S_{\gamma'}$ is a best response for agent i for all $\gamma'$ in the interval $[k', k'+1]$ (and these are the only best responses among threshold strategies).
Proof. (Sketch) If δ is sufficiently large, we can ignore what happens before the system converges to the maximum-entropy distribution. If n is sufficiently large, then the strategy played by one agent will not affect the distribution of money significantly. Thus, the probability of i moving from one state (dollar amount) to another depends only on i's strategy (since we can take the probability that i will be chosen to make a request and the probability that i will be chosen to satisfy a request to be constant). Thus, from i's point of view, the system is a Markov decision process (MDP), and i needs to compute the optimal policy (strategy) for this MDP. It follows from standard results [23, Theorem 6.11.6] that there is an optimal policy that is a threshold policy.
The argument that the best response is either unique or there is an interval of best responses follows from a more careful analysis of the value function for the MDP.
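The MDP in this proof sketch is small enough to solve by plain value iteration. The sketch below is a stylized version under our own simplifying assumptions: the agent gains utility u when its request is satisfied, pays effort cost c to provide service, and the per-round probabilities p_req (of making a satisfiable request) and p_earn (of being selected to work, if volunteering) are constants near steady state. None of these parameter names come from the paper.

```python
def best_response_policy(K, p_req, p_earn, u, c, delta, iters=5000):
    """Value iteration for the single-agent MDP sketched above: the
    state is the agent's dollar holdings, 0..K.  Returns, for each
    wealth level, whether volunteering is optimal; by the standard
    results cited above, the result is a threshold policy."""
    def action_values(x, V):
        p_r = p_req if x > 0 else 0.0     # a penniless agent cannot pay
        req = p_r * (u + delta * V[x - 1])
        idle = req + (1.0 - p_r) * delta * V[x]
        if x == K:                        # at the cap, an extra dollar
            return idle, idle             # has nowhere to go: never work
        work = (req + p_earn * (-c + delta * V[x + 1])
                + (1.0 - p_r - p_earn) * delta * V[x])
        return idle, work

    V = [0.0] * (K + 1)
    for _ in range(iters):
        V = [max(action_values(x, V)) for x in range(K + 1)]
    return [work > idle + 1e-12
            for x in range(K + 1)
            for idle, work in [action_values(x, V)]]
```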
We remark that there may be best responses that are not
threshold strategies. All that Theorem 4.1 shows is that,
among best responses, there is at least one that is a threshold
strategy. Since we know that there is a best response that
is a threshold strategy, we can look for a Nash equilibrium
in the space of threshold strategies.
Theorem 4.2. For all M, there exists $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$ and $n > n^*$, there exists a Nash equilibrium in the game $G(n, \delta)$ where all agents play $S_\gamma$ for some $\gamma > 0$.
Proof. It follows easily from the proof of Theorem 4.1 that if br(δ, γ) is the minimal best-response threshold strategy when all the other agents are playing $S_\gamma$ and the discount factor is δ, then, for fixed δ, br(δ, ·) is a step function. It also follows from the theorem that if there are two best responses, then
a mixture of them is also a best response. Therefore, if we
can join the steps by a vertical line, we get a best-response
curve. It is easy to see that everywhere this best-response curve crosses the diagonal y = x defines a Nash
equilibrium where all agents are using the same threshold
strategy. As we have already observed, one such
equilibrium occurs at 0. If there are only $M in the system, we can
restrict to threshold strategies Sk where k ≤ M + 1. Since
no one can have more than $M, all strategies Sk for k > M
are equivalent to SM ; these are just the strategies where
the agent always volunteers in response to requests made by
someone who can pay. Clearly br(δ, SM ) ≤ M for all δ, so
the best response function is at or below the equilibrium at
M. If k ≤ M/n, every player will have at least k dollars and so will be unwilling to work, and the best response is just 0. Consider $k^*$, the smallest k such that k > M/n. It is not hard to show that for $k^*$ there exists a $\delta^*$ such that for all $\delta \geq \delta^*$, $br(\delta, k^*) \geq k^*$. It follows by continuity that if $\delta \geq \delta^*$, there must be some γ such that $br(\delta, \gamma) = \gamma$. This is the desired Nash equilibrium.
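Computationally, this fixed-point argument suggests a direct search: treat br(δ, ·) as a numerical oracle (for instance, wrapping the value-iteration sketch above inside a steady-state computation) and scan for crossings with the diagonal. The sketch below is illustrative, with br an assumed black box.

```python
def threshold_equilibria(br, delta, gamma_max, grid=100):
    """Scan the step function br(delta, .) for fixed points.  Exact
    grid points with br = gamma are pure threshold equilibria; a sign
    change of br(gamma) - gamma between grid points means the step
    function jumps across the diagonal there, corresponding to a
    mixed threshold equilibrium S_gamma (joining steps vertically)."""
    found, prev = [], None
    for i in range(int(gamma_max * grid) + 1):
        gamma = i / grid
        diff = br(delta, gamma) - gamma
        if abs(diff) < 1e-9:
            found.append(gamma)
        elif prev is not None and prev[1] > 0 > diff:
            found.append(0.5 * (prev[0] + gamma))  # crossing in between
        prev = (gamma, diff)
    return found
```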
This argument also shows us that we cannot in general expect fixed points to be unique. If $br(\delta, k^*) = k^*$ and $br(\delta, k^* + 1) > k^* + 1$, then our argument shows there must be a second fixed point. In general there may be multiple fixed points even when $br(\delta, k^*) > k^*$, as illustrated in Figure 4 with n = 1000 and M = 3000.
[Figure 4: The best response function for n = 1000 and M = 3000. (Best response vs. strategy of the rest of the agents.)]
Theorem 4.2 allows us to restrict our design to agents
using threshold strategies with the confidence that there will
be a nontrivial equilibrium. However, it does not rule out
the possibility that there may be other equilibria that do
not involve threshold stratgies. It is even possible (although
it seems unlikely) that some of these equilibria might be
better.
5. SOCIAL WELFARE AND SCALABILITY
Our theorems show that for each value of M and n, for
sufficiently large δ, there is a nontrivial Nash equilibrium
where all the agents use some threshold strategy Sγ(M,n).
From the point of view of the system designer, not all
equilibria are equally good; we want an equilibrium where as few agents as possible have $0 when they get a chance to make a
request (so that they can pay for the request) and relatively
few agents have more than the threshold amount of money
(so that there are always plenty of agents to fulfill the
request). There is a tension between these objectives. It is not
hard to show that as the fraction of agents with $0 increases
in the maximum entropy distribution, the fraction of agents
with the maximum amount of money decreases. Thus, our
goal is to understand what the optimal amount of money
should be in the system, given the number of agents. That
is, we want to know the amount of money M that maximizes
efficiency, i.e., the total expected utility if all the agents use $S_{\gamma(M,n)}$. (If there are multiple equilibria, we take $S_{\gamma(M,n)}$ to be the Nash equilibrium with the highest efficiency for fixed M and n.)
We first observe that the most efficient equilibrium
depends only on the ratio of M to n, not on the actual values
of M and n.
Theorem 5.1. There exists $n^*$ such that for all games $G(n_1, \delta)$ and $G(n_2, \delta)$ where $n_1, n_2 > n^*$, if $M_1/n_1 = M_2/n_2$, then $S_{\gamma(M_1,n_1)} = S_{\gamma(M_2,n_2)}$.
Proof. Fix M/n = r. Theorem 3.1 shows that the
maximum-entropy distribution depends only on k and the
ratio M/n, not on M and n separately. Thus, given r, for
each choice of k, there is a unique maximum-entropy distribution $d_{k,r}$. The best response $br(\delta, k)$ depends only on the distribution $d_{k,r}$, not on M or n. Thus, the Nash equilibrium
depends only on the ratio r. That is, for all choices of M
and n such that n is sufficiently large (so that Theorem 3.1
applies) and M/n = r, the equilibrium strategies are the
same.
In light of Theorem 5.1, the system designer should ensure that there is enough money M in the system so that the ratio M/n is optimal. We are currently exploring
exactly what the optimal ratio is. As our very preliminary
results for β = 1 show in Figure 5, the ratio appears to be
monotone increasing in δ, which matches the intuition that
we should provide more patient agents with the opportunity
to save more money. Additionally, it appears to be relatively
smooth, which suggests that it may have a nice analytic
solution.
[Figure 5: Optimal average amount of money to the nearest .25 for β = 1. (Optimal ratio of M/n vs. discount rate δ.)]
We remark that, in practice, it may be easier for the
designer to vary the price of fulfilling a request rather than
injecting money in the system. This produces the same
effect. For example, changing the cost of fulfilling a request
from $1 to $2 is equivalent to halving the amount of money
that each agent has. Similarly, halving the cost of
fulfilling a request is equivalent to doubling the amount of money
that everyone has. With a fixed amount of money M, there
is an optimal product nc of the number of agents and the
cost c of fulfilling a request.
Theorem 5.1 also tells us how to deal with a dynamic pool
of agents. Our system can handle newcomers relatively
easily: simply allow them to join with no money. This gives
existing agents no incentive to leave and rejoin as
newcomers. We then change the price of fulfilling a request so that
the optimal ratio is maintained. This method has the nice
feature that it can be implemented in a distributed fashion;
if all nodes in the system have a good estimate of n then
they can all adjust prices automatically. (Alternatively, the
number of agents in the system can be posted in a
public place.) Approaches that rely on adjusting the amount
of money may require expensive system-wide computations
(see [26] for an example), and must be carefully tuned to
avoid creating incentives for agents to manipulate the
system by which this is done.
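Concretely, the local price rule is a one-liner: with money supply M fixed and a target ratio r* of average holdings (measured in requests) per agent, each node sets the price from its own estimate of n. A minimal sketch with illustrative names:

```python
def price_per_request(M, n_estimate, r_star):
    """With a fixed money supply M, choose the cost c of a request so
    that average holdings in request units, (M / c) / n, equal the
    optimal ratio r*.  Solving (M / c) / n = r* gives c = M / (n r*)."""
    return M / (n_estimate * r_star)

# Example: with M = $3000 fixed and target ratio r* = 1.5, growth
# from 1000 to 2000 agents halves the price from $2 to $1 per request.
print(price_per_request(3000, 1000, 1.5))   # 2.0
print(price_per_request(3000, 2000, 1.5))   # 1.0
```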
Note that, in principle, the realization that the cost of fulfilling a request can change can affect an agent's strategy. For example, if an agent expects the cost to increase,
then he may want to defer volunteering to fulfill a request.
However, if the number of agents in the system is always
increasing, then the cost always decreases, so there is never
any advantage in waiting.
There may be an advantage in delaying a request, but waiting to make a request is far more costly than waiting to provide service, since we assume the need for a service is often subject to real waiting costs, while the need to supply the service is merely to augment a money supply. (Related issues are discussed in [10].)
We ultimately hope to modify the mechanism so that the
price of a job can be set endogenously within the system
(as in real-world economies), with agents bidding for jobs
rather than there being a fixed cost set externally. However,
we have not yet explored the changes required to implement
this change. Thus, for now, we assume that the cost is set
as a function of the number of agents in the system (and
that there is no possibility for agents to satisfy a request for
less than the official cost or for requesters to offer to pay
more than it).
6. SYBILS AND COLLUSION
In a naive sense, our system is essentially sybil-proof: for an agent to get d dollars, his sybils together still have to perform d units of work.
$0, there is no benefit to creating new agents simply to take
advantage of an initial endowment. Nevertheless, there are
some less direct ways that an agent could take advantage
of sybils. First, by having more identities he will have a
greater probability of getting chosen to make a request. It
is easy to see that this will lead to the agent having higher
total utility. However, this is just an artifact of our model.
To make our system simple to analyze, we have assumed
that request opportunities came uniformly at random. In
practice, requests are made to satisfy a desire. Our model
implicitly assumed that all agents are equally likely to have
a desire at any particular time. Having sybils should not
increase the need to have a request satisfied. Indeed, it would
be reasonable to assume that sybils do not make requests at
all.
Second, having sybils makes it more likely that one of the
sybils will be chosen to fulfill a request. This can allow a
user to increase his utility by setting a lower threshold; that
is, to use a strategy Sk where k is smaller than the k used
by the Nash equilibrium strategy. Intuitively, the need for
money is not as critical if money is easier to obtain.
Unlike the first concern, this seems like a real issue. It seems
reasonable to believe that when people make a decision
between a number of nodes to satisfy a request they do so
at random, at least to some extent. Even if they look for
advertised node features to help make this decision, sybils
would allow a user to advertise a wide range of features.
Third, an agent can drive down the cost of fulfilling a
request by introducing many sybils. Similarly, he could
increase the cost (and thus the value of his money) by making
a number of sybils leave the system. Conceivably he could
alternate between these techniques to magnify the effects of
work he does. We have not yet calculated the exact effect of
this change (it interacts with the other two effects of having
sybils that we have already noted). Given the number of
sybils that would be needed to cause a real change in the
perceived size of a large P2P network, the practicality of this
attack depends heavily on how much sybils cost an attacker
and what resources he has available.
The second point raised regarding sybils also applies to
collusion if we allow money to be loaned. If k agents
collude, they can agree that, if one runs out of money, another
in the group will loan him money. By pooling their money
in this way, the k agents can again do better by setting a lower threshold. Note that the loan mechanism doesn't need to be built into the system; the agents can simply use
a fake transaction to transfer the money. These appear
to be the main avenues for collusive attacks, but we are still
exploring this issue.
7. CONCLUSION
We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems. It tells us that with
a fixed population of rational users, such systems are very
unlikely to become unstable. Thus if this stability is
common belief among the agents we would not expect inflation,
bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our
analysis also tells us how to scale the system to handle an influx
of new users without introducing these problems: scale the
money supply to keep the average amount of money constant
(or equivalently adjust prices to achieve the same goal).
There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria: are there usually two? In addition, we expect that one
should be able to compute analytic estimates for the best
response function and optimal pricing which would allow us
to understand the relationship between pricing and various
parameters in the model.
It would also be of great interest to extend our analysis
to handle more realistic settings. We mention a few possible
extensions here:
• We have assumed that the world is homogeneous in a
number of ways, including request frequency, utility,
and ability to satisfy requests. It would be
interesting to examine how relaxing any of these assumptions
would alter our results.
• We have assumed that there is no cost to an agent
to be a member of the system. Suppose instead that
we imposed a small cost simply for being present in
the system to reflect the costs of routing messages and overlay maintenance. This modification could have a
significant impact on sybil attacks.
• We have described a scrip system that works when
there are no altruists and have shown that no system
can work once there are sufficiently many
altruists. What happens between these extremes?
• One type of irrational behavior encountered with
scrip systems is hoarding. There are some similarities
between hoarding and altruistic behavior. While an
altruist provides service for everyone, a hoarder will
volunteer for all jobs (in order to get more money) and
rarely request service (so as not to spend money). It
would be interesting to investigate the extent to which
our system is robust against hoarders. Clearly with
too many hoarders, there may not be enough money
remaining among the non-hoarders to guarantee that,
typically, a non-hoarder would have enough money to
satisfy a request.
• Finally, in P2P filesharing systems, there are
overlapping communities of various sizes that are significantly
more likely to be able to satisfy each other"s requests.
It would be interesting to investigate the effect of such
communities on the equilibrium of our system.
There are also a number of implementation issues that
would have to be resolved in a real system. For example, we
need to worry about the possibility of agents counterfeiting
money or lying about whether service was actually provided.
Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node, but unlike that responsibility, it can easily be shirked. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.
8. ACKNOWLEDGEMENTS
We would like to thank Emin Gun Sirer, Shane
Henderson, Jon Kleinberg, and 3 anonymous referees for helpful
suggestions. EF, IK and JH are supported in part by NSF
under grant ITR-0325453. JH is also supported in part
by NSF under grants CTC-0208535 and IIS-0534064, by
ONR under grant N00014-01-10-511, by the DoD
Multidisciplinary University Research Initiative (MURI) program
administered by the ONR under grants N00014-01-1-0795 and
N00014-04-1-0725, and by AFOSR under grant
F49620-021-0101.
9. REFERENCES
[1] E. Adar and B. A. Huberman. Free riding on
Gnutella. First Monday, 5(10), 2000.
[2] K. G. Anagnostakis and M. Greenwald.
Exchange-based incentive mechanisms for peer-to-peer
file sharing. In International Conference on Distributed
Computing Systems (ICDCS), pages 524-533, 2004.
[3] BitTorrent Inc. BitTorrent web site.
http://www.bittorent.com.
[4] A. Cheng and E. Friedman. Sybilproof reputation
mechanisms. In Workshop on Economics of
Peer-to-Peer Systems (P2PECON), pages 128-132,
2005.
[5] Cornell Information Technologies. Cornell"s
ccommodity internet usage statistics.
http://www.cit.cornell.edu/computer/students/
bandwidth/charts.html.
[6] J. R. Douceur. The sybil attack. In International
Workshop on Peer-to-Peer Systems (IPTPS), pages
251-260, 2002.
[7] G. Ellison. Cooperation in the prisoner"s dilemma
with anonymous random matching. Review of
Economic Studies, 61:567-588, 1994.
[8] eMule Project. eMule web site.
http://www.emule-project.net/.
[9] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust
incentive techniques for peer-to-peer networks. In
ACM Conference on Electronic Commerce (EC),
pages 102-111, 2004.
[10] E. J. Friedman and D. C. Parkes. Pricing wifi at
starbucks: issues in online mechanism design. In EC
"03: Proceedings of the 4th ACM Conference on
Electronic Commerce, pages 240-241. ACM Press,
2003.
[11] E. J. Friedman and P. Resnick. The social cost of
cheap pseudonyms. Journal of Economics and
Management Strategy, 10(2):173-199, 2001.
[12] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins.
Propagation of trust and distrust. In Conference on
the World Wide Web(WWW), pages 403-412, 2004.
[13] M. Gupta, P. Judge, and M. H. Ammar. A reputation
system for peer-to-peer networks. In Network and
Operating System Support for Digital Audio and
Video(NOSSDAV), pages 144-152, 2003.
[14] Z. Gyongi, P. Berkhin, H. Garcia-Molina, and
J. Pedersen. Link spam detection based on mass
estimation. Technical report, Stanford University,
2005.
[15] J. Ioannidis, S. Ioannidis, A. D. Keromytis, and
V. Prevelakis. Fileteller: Paying and getting paid for
file storage. In Financial Cryptography, pages 282-299,
2002.
[16] E. T. Jaynes. Where do we stand on maximum
entropy? In R. D. Levine and M. Tribus, editors, The
Maximum Entropy Formalism, pages 15-118. MIT
Press, Cambridge, Mass., 1978.
148
[17] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina.
The Eigentrust algorithm for reputation management
in P2P networks. In Conference on the World Wide
Web (WWW), pages 640-651, 2003.
[18] M. Kandori. Social norms and community
enforcement. Review of Economic Studies, 59:63-80,
1992.
[19] LogiSense Corporation. LogiSense web site.
http://www.logisense.com/tm p2p.html.
[20] L. Lovasz and P. Winkler. Mixing of random walks
and other diffusions on a graph. In Surveys in
Combinatorics, 1993, Walker (Ed.), London
Mathematical Society Lecture Note Series 187,
Cambridge University Press. 1995.
[21] Open Source Technology Group. Slashdot
FAQ - comments and moderation.
http://slashdot.org/faq/com-mod.shtml#cm700.
[22] OSMB LLC. Gnutella web site.
http://www.gnutella.com/.
[23] M. L. Puterman. Markov Decision Processes. Wiley,
1994.
[24] SETI@home. SETI@home web page.
http://setiathome.ssl.berkeley.edu/.
[25] Sharman Networks Ltd. Kazaa web site.
http://www.kazaa.com/.
[26] V. Vishnumurthy, S. Chandrakumar, and E. Sirer.
Karma: A secure economic framework for peer-to-peer
resource sharing. In Workshop on Economics of
Peer-to-Peer Systems (P2PECON), 2003.
[27] L. Xiong and L. Liu. Building trust in decentralized
peer-to-peer electronic communities. In International
Conference on Electronic Commerce Research
(ICECR), 2002.
[28] H. Zhang, A. Goel, R. Govindan, K. Mason, and B. V.
Roy. Making eigenvector-based reputation systems
robust to collusion. In Workshop on Algorithms and
Models for the Web-Graph(WAW), pages 92-104,
2004.
| nash equilibrium;game theory;gnutella network;scrip system;agent;threshold strategy;reputation system;social welfare;game;online system;maximum entropy;p2p network;bittorrent;emule |
train_J-36 | Playing Games in Many Possible Worlds | In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game-either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting Socratic game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models. When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservablequery Socratic games and correlated equilibria in observablequery Socratic games. | 1. INTRODUCTION
Late October 1960. A smoky room. Democratic Party
strategists huddle around a map. How should the Kennedy
campaign allocate its remaining advertising budget? Should
it focus on, say, California or New York? The Nixon
campaign faces the same dilemma. Of course, neither campaign
knows the effectiveness of its advertising in each state.
Perhaps Californians are susceptible to Nixon"s advertising, but
are unresponsive to Kennedy"s. In light of this uncertainty,
the Kennedy campaign may conduct a survey, at some cost,
to estimate the effectiveness of its advertising. Moreover, the
larger-and more expensive-the survey, the more accurate
it will be. Is the cost of a survey worth the information that
it provides? How should one balance the cost of acquiring
more information against the risk of playing a game with
higher uncertainty?
In this paper, we model situations of this type as Socratic
games. As in traditional game theory, the players in a
Socratic game choose actions to maximize their payoffs, but we
model players with incomplete information who can make
costly queries to reduce their uncertainty about the state of
the world before they choose their actions. This approach
contrasts with traditional game theory, in which players are
usually modeled as having fixed, exogenously given
information about the structure of the game and its payoffs. (In
traditional games of incomplete and imperfect information,
there is information that the players do not have; in Socratic
games, unlike in these games, the players have a chance to
acquire the missing information, at some cost.) A number of
related models have been explored by economists and
computer scientists motivated by similar situations, often with
a focus on mechanism design and auctions; a sampling of
this research includes the work of Larson and Sandholm [41,
42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12],
Rezende [63], Persico and Matthews [48, 60], Crémer and Khalil [15], Rasmusen [62], and Bergemann and Välimäki [4, 5]. The model of Bergemann and Välimäki is similar in
many regards to the one that we explore here; see Section 7
for some discussion.
A Socratic game proceeds as follows. A real world is chosen randomly from a set of possible worlds according to a
common prior distribution. Each player then selects an
arbitrary query from a set of available costly queries and
receives a corresponding piece of information about the real
world. Finally each player selects an action and receives a
payoff-a function of the players" selected actions and the
identity of the real world-less the cost of the query that
he or she made. Compared to traditional game theory, the
distinguishing feature of our model is the introduction of
explicit costs to the players for learning arbitrary partial
information about which of the many possible worlds is the
real world.
Our research was initially inspired by recent results in
psychology on decision making, but it soon became clear that
Socratic game theory is also a general tool for understanding
the exploitation versus exploration tradeoff, well studied
in machine learning, in a strategic multiplayer environment.
This tension between the risk arising from uncertainty and
the cost of acquiring information is ubiquitous in economics,
political science, and beyond.
Our results. We consider Socratic games under two
models: an unobservable-query model where players learn only
the response to their own queries and an observable-query
model where players also learn which queries their opponents
made. We give efficient algorithms to find Nash equilibria-i.e., tuples of strategies from which no player has unilateral incentive to deviate-in broad classes of two-player Socratic
games in both models. Our first result is an efficient
algorithm to find Nash equilibria in unobservable-query
Socratic games with constant-sum worlds, in which the sum
of the players" payoffs is independent of their actions. Our
techniques also yield Nash equilibria in unobservable-query
Socratic games with strategically zero-sum worlds.
Strategically zero-sum games generalize constant-sum games by
allowing the sum of the players" payoffs to depend on
individual players" choices of strategy, but not on any interaction
of their choices. Our second result is an efficient algorithm
to find Nash equilibria in observable-query Socratic games
with constant-sum worlds. Finally, we give an efficient
algorithm to find correlated equilibria-a weaker but
increasingly well-studied solution concept for games [2, 3, 32, 56,
57]-in observable-query Socratic games with strategically
zero-sum worlds.
Like all games, Socratic games can be viewed as a
special case of extensive-form games, which represent games
by trees in which internal nodes represent choices made by
chance or by the players, and the leaves represent outcomes
that correspond to a vector of payoffs to the players.
Algorithmically, the generality of extensive-form games makes
them difficult to solve efficiently, and the special cases that
are known to be efficiently solvable do not include even
simple Socratic games. Every (complete-information) classical
game is a trivial Socratic game (with a single possible world
and a single trivial query), and efficiently finding Nash
equilibria in classical games has been shown to be hard [10, 11,
13, 16, 17, 27, 54, 55]. Therefore we would not expect to
find a straightforward polynomial-time algorithm to
compute Nash equilibria in general Socratic games. However, it
is well known that Nash equilibria can be found efficiently
via an LP for two-player constant-sum games [49, 71] (and
strategically zero-sum games [51]). A Socratic game is itself
a classical game, so one might hope that these results can
be applied to Socratic games with constant-sum (or
strategically zero-sum) worlds.
We face two major obstacles in extending these
classical results to Socratic games. First, a Socratic game with
constant-sum worlds is not itself a constant-sum classical
game-rather, the resulting classical game is only
strategically zero sum. Worse yet, a Socratic game with
strategically zero-sum worlds is not itself classically strategically
zero sum-indeed, there are no known efficient
algorithmic techniques to compute Nash equilibria in the resulting
class of classical games. (Exponential-time algorithms like
Lemke/Howson, of course, can be used [45].) Thus even
when it is easy to find Nash equilibria in each of the worlds
of a Socratic game, we require new techniques to solve the
Socratic game itself. Second, even when the Socratic game
itself is strategically zero sum, the number of possible
strategies available to each player is exponential in the natural
representation of the game. As a result, the standard linear
programs for computing equilibria have an exponential
number of variables and an exponential number of constraints.
For unobservable-query Socratic games with strategically
zero-sum worlds, we address these obstacles by
formulating a new LP that uses only polynomially many variables
(though still an exponential number of constraints) and then
use ellipsoid-based techniques to solve it. For observable-query Socratic games, we handle the exponentiality by
decomposing the game into stages, solving the stages
separately, and showing how to reassemble the solutions
efficiently. To solve the stages, it is necessary to find Nash
equilibria in Bayesian strategically zero-sum games, and we
give an explicit polynomial-time algorithm to do so.
2. GAMES AND SOCRATIC GAMES
In this section, we review background on game theory and
formally introduce Socratic games. We present these
models in the context of two-player games, but the multiplayer
case is a natural extension. Throughout the paper,
boldface variables will be used to denote a pair of variables (e.g., a = ⟨ai, aii⟩). Let Pr[x ← π] denote the probability that a
particular value x is drawn from the distribution π, and let
Ex∼π[g(x)] denote the expectation of g(x) when x is drawn
from π.
2.1 Background on Game Theory
Consider two players, Player I and Player II, each of whom
is attempting to maximize his or her utility (or payoff). A
(two-player) game is a pair ⟨A, u⟩, where, for i ∈ {i, ii}:
• Ai is the set of pure strategies for Player i, and A = ⟨Ai, Aii⟩; and
• ui : A → R is the utility function for Player i, and u = ⟨ui, uii⟩.
We require that A and u be common knowledge. If each
Player i chooses strategy ai ∈ Ai, then the payoffs to
Players I and II are ui(a) and uii(a), respectively. A game is
constant sum if, for all a ∈ A, we have that ui(a) + uii(a) = c
for some fixed c independent of a.
Player i can also play a mixed strategy $\alpha_i \in \mathcal{A}_i$, where $\mathcal{A}_i$ denotes the space of probability measures over the set $A_i$. Payoff functions are generalized as $u_i(\alpha) = u_i(\alpha_i, \alpha_{ii}) := E_{\mathbf{a} \sim \alpha}[u_i(\mathbf{a})] = \sum_{\mathbf{a} \in A} \alpha(\mathbf{a})\, u_i(\mathbf{a})$, where the quantity $\alpha(\mathbf{a}) = \alpha_i(a_i) \cdot \alpha_{ii}(a_{ii})$ denotes the joint probability of the independent events that each Player i chooses action $a_i$ from the distribution $\alpha_i$. This generalization to mixed strategies is
known as von Neumann/Morgenstern utility [70], in which
players are indifferent between a guaranteed payoff x and an
expected payoff of x.
A Nash equilibrium is a pair α of mixed strategies so that
neither player has an incentive to change his or her strategy
unilaterally. Formally, the strategy pair α is a Nash
equilibrium if and only if both $u_i(\alpha_i, \alpha_{ii}) = \max_{\alpha_i' \in \mathcal{A}_i} u_i(\alpha_i', \alpha_{ii})$ and $u_{ii}(\alpha_i, \alpha_{ii}) = \max_{\alpha_{ii}' \in \mathcal{A}_{ii}} u_{ii}(\alpha_i, \alpha_{ii}')$; that is, the strategies $\alpha_i$ and $\alpha_{ii}$ are mutual best responses.
A correlated equilibrium is a distribution ψ over A that
obeys the following: if a ∈ A is drawn randomly according
to ψ and Player i learns ai, then no Player i has incentive to
deviate unilaterally from playing ai. (A Nash equilibrium is
a correlated equilibrium in which ψ(a) = αi(ai) · αii(aii) is a
product distribution.) Formally, in a correlated equilibrium,
for every a ∈ A we must have that ai is a best response to
a randomly chosen ˆaii ∈ Aii drawn according to ψ(ai, ˆaii),
and the analogous condition must hold for Player II.
2.2 Socratic Games
In this section, we formally define Socratic games. A
Socratic game is a 7-tuple ⟨A, W, u, S, Q, p, δ⟩, where, for i ∈ {i, ii}:
• Ai is, as before, the set of pure strategies for Player i.
• W is a set of possible worlds, one of which is the real
world wreal.
• $u_i = \{u^w_i : A \to \mathbb{R} \mid w \in W\}$ is a set of payoff functions for Player i, one for each possible world.
• S is a set of signals.
• Qi is a set of available queries for Player i. When
Player i makes query qi : W → S, he or she receives the
signal qi(wreal). When Player i receives signal qi(wreal)
in response to query qi, he or she can infer that wreal ∈
{w : qi(w) = qi(wreal)}, i.e., the set of possible worlds
from which query qi cannot distinguish wreal.
• p : W → [0, 1] is a probability distribution over the
possible worlds.
• $\delta_i : Q_i \to \mathbb{R}^{\geq 0}$ gives the query cost for each available query for Player i.
Initially, the world wreal is chosen according to the
probability distribution p, but the identity of wreal remains
unknown to the players. That is, it is as if the players are playing the game ⟨A, $u^{w_{\text{real}}}$⟩ but do not know $w_{\text{real}}$. The
players make queries q ∈ Q, and Player i receives the signal
qi(wreal). We consider both observable queries and
unobservable queries. When queries are observable, each player
learns which query was made by the other player, and the
results of his or her own query-that is, each Player i learns
qi, qii, and qi(wreal). For unobservable queries, Player i learns
only qi and qi(wreal). After learning the results of the queries,
the players select strategies a ∈ A and receive as payoffs $u^{w_{\text{real}}}_i(\mathbf{a}) - \delta_i(q_i)$.
In the Socratic game, a pure strategy for Player i consists
of a query qi ∈ Qi and a response function mapping any
result of the query qi to a strategy ai ∈ Ai to play. A player"s
state of knowledge after a query is a point in R := Q × S
or Ri := Qi × S for observable or unobservable queries,
respectively. Thus Player i"s response function maps R or
Ri to Ai. Note that the number of pure strategies is
exponential, as there are exponentially many response
functions. A mixed strategy involves both randomly choosing
a query qi ∈ Qi and randomly choosing an action ai ∈ Ai
in response to the results of the query. Formally, we will
consider a mixed-strategy-function profile $\mathbf{f} = \langle \mathbf{f}^{\text{query}}, \mathbf{f}^{\text{resp}} \rangle$ to have two parts:
• a function $f^{\text{query}}_i : Q_i \to [0, 1]$, where $f^{\text{query}}_i(q_i)$ is the probability that Player i makes query $q_i$.
• a function $f^{\text{resp}}_i$ that maps R or $R_i$ to a probability distribution over actions. Player i chooses an action $a_i \in A_i$ according to the probability distribution $f^{\text{resp}}_i(\mathbf{q}, q_i(w))$ for observable queries, and according to $f^{\text{resp}}_i(q_i, q_i(w))$ for unobservable queries. (With unobservable queries, for example, the probability that Player I plays action $a_i$ conditioned on making query $q_i$ in world w is given by $\Pr[a_i \leftarrow f^{\text{resp}}_i(q_i, q_i(w))]$.)
Mixed strategies are typically defined as probability
distributions over the pure strategies, but here we represent a mixed strategy by a pair $\langle \mathbf{f}^{\text{query}}, \mathbf{f}^{\text{resp}} \rangle$, which is commonly referred to as a behavioral strategy in the game-theory literature. As in any game with perfect recall, one can easily map a mixture of pure strategies to a behavioral strategy $\mathbf{f} = \langle \mathbf{f}^{\text{query}}, \mathbf{f}^{\text{resp}} \rangle$ that induces the same probability of
making a particular query qi or playing a particular action after
making a query qi in a particular world. Thus it suffices to
consider only this representation of mixed strategies.
For a strategy-function profile f for observable queries, the (expected) payoff to Player i is given by
$$\sum_{\mathbf{q} \in Q,\, w \in W,\, \mathbf{a} \in A} \Big[ f^{\text{query}}_{i}(q_{i}) \cdot f^{\text{query}}_{ii}(q_{ii}) \cdot p(w) \cdot \Pr[a_{i} \leftarrow f^{\text{resp}}_{i}(\mathbf{q}, q_{i}(w))] \cdot \Pr[a_{ii} \leftarrow f^{\text{resp}}_{ii}(\mathbf{q}, q_{ii}(w))] \cdot \big(u^w_i(\mathbf{a}) - \delta_i(q_i)\big) \Big].$$
The payoffs for unobservable queries are analogous, with $f^{\text{resp}}_j(q_j, q_j(w))$ in place of $f^{\text{resp}}_j(\mathbf{q}, q_j(w))$.
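The payoff expression above translates directly into code. The following brute-force sketch enumerates Q × W × A for the observable-query case; the data-structure conventions (queries as callables, dictionaries of probabilities) are our own.

```python
from itertools import product

def expected_payoff(i, A, Q, W, p, u, delta, f_query, f_resp):
    """Expected payoff to player i (0 or 1) under profile (f_query,
    f_resp) with observable queries.  f_query[j][q] is the probability
    that player j makes query q; f_resp[j][(q0, q1, sig)][a] is the
    probability that j plays a after seeing the query pair and the
    signal sig of j's own query; u[w][(a0, a1)] is a payoff pair."""
    total = 0.0
    for q0, q1 in product(Q[0], Q[1]):
        for w in W:
            for a0, a1 in product(A[0], A[1]):
                prob = (f_query[0][q0] * f_query[1][q1] * p[w]
                        * f_resp[0][(q0, q1, q0(w))][a0]
                        * f_resp[1][(q0, q1, q1(w))][a1])
                total += prob * (u[w][(a0, a1)][i]
                                 - delta[i][(q0, q1)[i]])
    return total
```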
3. STRATEGICALLY ZERO-SUM GAMES
We can view a Socratic game G with constant-sum worlds
as an exponentially large classical game, with pure strategies of the form "make query qi and respond according to fi."
However, this classical game is not constant sum. The sum of
the players" payoffs varies depending upon their strategies,
because different queries incur different costs. However, this
game still has significant structure: the sum of payoffs varies
only because of varying query costs. Thus the sum of
payoffs does depend on players" choice of strategies, but not on
the interaction of their choices-i.e., for fixed functions gi
and gii, we have ui(q, f) + uii(q, f) = gi(qi, fi) + gii(qii, fii)
for all strategies q, f . Such games are called strategically
zero sum and were introduced by Moulin and Vial [51], who
describe a notion of strategic equivalence and define
strategically zero-sum games as those strategically equivalent to
zero-sum games. It is interesting to note that two Socratic
games with the same queries and strategically equivalent
worlds are not necessarily strategically equivalent.
A game ⟨A, u⟩ is strategically zero sum if there exist labels $\ell(i, a_i)$ for every Player i and every pure strategy $a_i \in A_i$ such that, for all mixed-strategy profiles α, the sum of the utilities satisfies
$$u_{i}(\alpha) + u_{ii}(\alpha) = \sum_{a_{i} \in A_{i}} \alpha_{i}(a_{i}) \cdot \ell(i, a_{i}) + \sum_{a_{ii} \in A_{ii}} \alpha_{ii}(a_{ii}) \cdot \ell(ii, a_{ii}).$$
Note that any constant-sum game is strategically zero sum
as well.
It is not immediately obvious that one can efficiently
decide if a given game is strategically zero sum. For
completeness, we give a characterization of classical strategically
zero-sum games in terms of the rank of a simple matrix
derived from the game"s payoffs, allowing us to efficiently
decide if a given game is strategically zero sum and, if it is, to
compute the labels (i, ai).
Theorem 3.1. Consider a game G = ⟨A, u⟩ with $A_i = \{a^1_i, \dots, a^{n_i}_i\}$. Let $M^G$ be the $n_{i}$-by-$n_{ii}$ matrix whose $(i, j)$th entry $M^G_{(i,j)}$ satisfies $\log_2 M^G_{(i,j)} = u_{i}(a^i_{i}, a^j_{ii}) + u_{ii}(a^i_{i}, a^j_{ii})$. Then the following are equivalent:
(i) G is strategically zero sum;
(ii) there exist labels $\ell(i, a_i)$ for every player i ∈ {i, ii} and every pure strategy $a_i \in A_i$ such that, for all pure strategies a ∈ A, we have $u_{i}(\mathbf{a}) + u_{ii}(\mathbf{a}) = \ell(i, a_{i}) + \ell(ii, a_{ii})$; and
(iii) $\operatorname{rank}(M^G) = 1$.
Proof Sketch. (i ⇒ ii) is immediate; every pure strategy is a trivially mixed strategy. For (ii ⇒ iii), let $c_i$ be the $n_i$-element column vector with jth component $2^{\ell(i, a^j_i)}$; then $c_{i} \cdot c_{ii}^T = M^G$. For (iii ⇒ i), if $\operatorname{rank}(M^G) = 1$, then $M^G = u \cdot v^T$. We can prove that G is strategically zero sum by choosing labels $\ell(i, a^j_{i}) := \log_2 u_j$ and $\ell(ii, a^j_{ii}) := \log_2 v_j$.
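Theorem 3.1 yields an immediate numerical test. Since the entries of $M^G$ are positive, $\operatorname{rank}(M^G) = 1$ exactly when $\log_2 M^G = u_{i} + u_{ii}$ decomposes additively into row and column labels, which avoids forming the exponentials at all. A numpy sketch (the tolerance handling is our own):

```python
import numpy as np

def strategic_zero_sum_labels(u1, u2, tol=1e-9):
    """Decide whether the game with payoff matrices u1, u2 is
    strategically zero sum, returning the labels if so.  S = u1 + u2
    (i.e., log2 of M^G) must satisfy S[i, j] = ell_1[i] + ell_2[j]."""
    S = np.asarray(u1, float) + np.asarray(u2, float)
    ell_1 = S[:, 0] - S[0, 0]        # row labels, fixing a shift
    ell_2 = S[0, :]                  # column labels
    if np.allclose(S, ell_1[:, None] + ell_2[None, :], atol=tol):
        return ell_1, ell_2          # labels for Players I and II
    return None                      # rank(M^G) > 1
```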
4. SOCRATIC GAMES WITH
UNOBSERVABLE QUERIES
We begin with Socratic games with unobservable queries, where a player's choice of query is not revealed to her opponent. We give an efficient algorithm to solve unobservable-query Socratic games with strategically zero-sum worlds.
Our algorithm is based upon the LP shown in Figure 1,
whose feasible points are Nash equilibria for the game. The
LP has polynomially many variables but exponentially many
constraints. We give an efficient separation oracle for the LP,
implying that the ellipsoid method [28, 38] yields an efficient
algorithm. This approach extends the techniques of Koller
and Megiddo [39] (see also [40]) to solve constant-sum games
represented in extensive form. (Recall that their result does
not directly apply in our case; even a Socratic game with
constant-sum worlds is not a constant-sum classical game.)
Lemma 4.1. Let G = ⟨A, W, u, S, Q, p, δ⟩ be an arbitrary
unobservable-query Socratic game with strategically zero-sum
worlds. Any feasible point for the LP in Figure 1 can be
efficiently mapped to a Nash equilibrium for G, and any Nash
equilibrium for G can be mapped to a feasible point for the
program.
Proof Sketch. We begin with a description of the
correspondence between feasible points for the LP and Nash
equilibria for G. First, suppose that strategy profile $\mathbf{f} = \langle \mathbf{f}^{\text{query}}, \mathbf{f}^{\text{resp}} \rangle$ forms a Nash equilibrium for G. Then the following setting for the LP variables is feasible:
$$y^i_{q_i} = f^{\text{query}}_i(q_i), \qquad x^i_{a_i,q_i,w} = \Pr[a_i \leftarrow f^{\text{resp}}_i(q_i, q_i(w))] \cdot y^i_{q_i},$$
$$\rho_i = \sum_{w,\, \mathbf{q} \in Q,\, \mathbf{a} \in A} p(w) \cdot x^{i}_{a_{i},q_{i},w} \cdot x^{ii}_{a_{ii},q_{ii},w} \cdot [u^w_i(\mathbf{a}) - \delta_i(q_i)].$$
(We omit the straightforward calculations that verify
feasibility.) Next, suppose $\langle x^i_{a_i,q_i,w}, y^i_{q_i}, \rho_i \rangle$ is feasible for the LP. Let f be the strategy-function profile defined as
$$f^{\text{query}}_i : q_i \mapsto y^i_{q_i}, \qquad f^{\text{resp}}_i(q_i, q_i(w)) : a_i \mapsto x^i_{a_i,q_i,w} / y^i_{q_i}.$$
Verifying that this strategy profile is a Nash equilibrium requires checking that $f^{\text{resp}}_i(q_i, q_i(w))$ is a well-defined function (from constraint VI), that $f^{\text{query}}_i$ and $f^{\text{resp}}_i(q_i, q_i(w))$ are probability distributions (from constraints III and IV), and that each player is playing a best response to his or her opponent's strategy (from constraints I and II). Finally, from constraints I and II, the expected payoff to Player i is at most $\rho_i$. Because the right-hand side of constraint VII is equal to the expected sum of the payoffs from f and is at most $\rho_{i} + \rho_{ii}$, the payoffs are correct and imply the lemma.
We now give an efficient separation oracle for the LP in
Figure 1, thus allowing the ellipsoid method to solve the
LP in polynomial time. Recall that a separation oracle is
a function that, given a setting for the variables in the LP,
either returns "feasible" or returns a particular constraint
of the LP that is violated by that setting of the variables.
An efficient, correct separation oracle allows us to solve the
LP efficiently via the ellipsoid method.
Lemma 4.2. There exists a separation oracle for the LP
in Figure 1 that is correct and runs in polynomial time.
Proof. Here is a description of the separation oracle SP.
On input $\langle x^i_{a_i,q_i,w}, y^i_{q_i}, \rho_i \rangle$:
1. Check each of the constraints (III), (IV), (V), (VI),
and (VII). If any one of these constraints is violated,
then return it.
2. Define the strategy profile f as follows:
$$f^{\text{query}}_i : q_i \mapsto y^i_{q_i}, \qquad f^{\text{resp}}_i(q_i, q_i(w)) : a_i \mapsto x^i_{a_i,q_i,w} / y^i_{q_i}.$$
For each query $q_{i}$, we will compute a pure best-response function $\hat{f}^{q_{i}}_{i}$ for Player I to strategy $f_{ii}$ after making query $q_{i}$.
More specifically, given fii and the result qi(wreal) of the
query qi, it is straightforward to compute the
probability that, conditioned on the fact that the result of
query qi is qi(w), the world is w and Player II will play
action aii ∈ Aii. Therefore, for each query qi and
response qi(w), Player I can compute the expected utility
of each pure response ai to the induced mixed strategy
over Aii for Player II. Player I can then select the ai
maximizing this expected payoff.
Let $\hat{f}_{i}$ be the response function such that $\hat{f}_{i}(q_{i}, q_{i}(w)) = \hat{f}^{q_{i}}_{i}(q_{i}(w))$ for every $q_{i} \in Q_{i}$. Similarly, compute $\hat{f}_{ii}$.
Player i does not prefer "make query $q_i$, then play according to the function $f_i'$":
(I) $\forall q_{i} \in Q_{i},\, f_{i}' : R_{i} \to A_{i}:\quad \rho_{i} \geq \sum_{w \in W,\, a_{ii} \in A_{ii},\, q_{ii} \in Q_{ii},\, a_{i} = f_{i}'(q_{i}, q_{i}(w))} p(w) \cdot x^{ii}_{a_{ii},q_{ii},w} \cdot [u^w_{i}(\mathbf{a}) - \delta_{i}(q_{i})]$
(II) $\forall q_{ii} \in Q_{ii},\, f_{ii}' : R_{ii} \to A_{ii}:\quad \rho_{ii} \geq \sum_{w \in W,\, a_{i} \in A_{i},\, q_{i} \in Q_{i},\, a_{ii} = f_{ii}'(q_{ii}, q_{ii}(w))} p(w) \cdot x^{i}_{a_{i},q_{i},w} \cdot [u^w_{ii}(\mathbf{a}) - \delta_{ii}(q_{ii})]$
Every player's choices form a probability distribution in every world:
(III) $\forall i \in \{i, ii\},\, w \in W:\quad 1 = \sum_{a_i \in A_i,\, q_i \in Q_i} x^i_{a_i,q_i,w}$
(IV) $\forall i \in \{i, ii\},\, w \in W:\quad 0 \leq x^i_{a_i,q_i,w}$
Queries are independent of the world, and actions depend only on query output: for all $i \in \{i, ii\}$, $q_i \in Q_i$, and $w, w' \in W$ such that $q_i(w) = q_i(w')$,
(V) $y^i_{q_i} = \sum_{a_i \in A_i} x^i_{a_i,q_i,w}$
(VI) $x^i_{a_i,q_i,w} = x^i_{a_i,q_i,w'}$
The payoffs are consistent with the labels $\ell(i, a_i, w)$:
(VII) $\rho_{i} + \rho_{ii} = \sum_{i \in \{i, ii\}} \sum_{w \in W,\, q_i \in Q_i,\, a_i \in A_i} p(w) \cdot x^i_{a_i,q_i,w} \cdot [\ell(i, a_i, w) - \delta_i(q_i)]$
Figure 1: An LP to find Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. The input is a Socratic game ⟨A, W, u, S, Q, p, δ⟩ such that each world w is strategically zero sum with labels $\ell(i, a_i, w)$. Player i makes query $q_i \in Q_i$ with probability $y^i_{q_i}$ and, when the actual world is w ∈ W, makes query $q_i$ and plays action $a_i$ with probability $x^i_{a_i,q_i,w}$. The expected payoff to Player i is given by $\rho_i$.
3. Let $\hat{\rho}^{q_{i}}_{i}$ be the expected payoff to Player I using the strategy "make query $q_{i}$ and play response function $\hat{f}_{i}$" if Player II plays according to $f_{ii}$. Let $\hat{\rho}_{i} = \max_{q_{i} \in Q_{i}} \hat{\rho}^{q_{i}}_{i}$ and let $\hat{q}_{i} = \arg\max_{q_{i} \in Q_{i}} \hat{\rho}^{q_{i}}_{i}$. Similarly, define $\hat{\rho}^{q_{ii}}_{ii}$, $\hat{\rho}_{ii}$, and $\hat{q}_{ii}$.
4. For the $\hat{f}_{i}$ and $\hat{q}_{i}$ defined in Step 3, return constraint (I-$\hat{q}_{i}$-$\hat{f}_{i}$) or (II-$\hat{q}_{ii}$-$\hat{f}_{ii}$) if either is violated. If both are satisfied, then return "feasible."
We first note that the separation oracle runs in polynomial
time and then prove its correctness. Steps 1 and 4 are clearly
polynomial. For Step 2, we have described how to compute
the relevant response functions by examining every action of
Player I, every world, every query, and every action of Player
II. There are only polynomially many queries, worlds, query
results, and pure actions, so the running time of Steps 2 and
3 is thus polynomial.
We now sketch the proof that the separation oracle works correctly. The main challenge is to show that if any constraint (I-$q_i'$-$f_i'$) is violated, then (I-$\hat{q}_i$-$\hat{f}_i$) is violated in Step 4. First, we observe that, by construction, the function $\hat{f}_i$ computed in Step 3 must be a best response to Player II playing $f_{ii}$, no matter what query Player I makes. Therefore the strategy "make query $\hat{q}_i$, then play response function $\hat{f}_i$" must be a best response to Player II playing $f_{ii}$, by definition of $\hat{q}_i$. The right-hand side of each constraint (I-$q_i'$-$f_i'$) is equal to the expected payoff that Player I receives when playing the pure strategy "make query $q_i'$ and then play response function $f_i'$" against Player II's strategy $f_{ii}$. Therefore, because the pure strategy "make query $\hat{q}_i$ and then play response function $\hat{f}_i$" is a best response to Player II playing $f_{ii}$, the right-hand side of constraint (I-$\hat{q}_i$-$\hat{f}_i$) is at least as large as the right-hand side of any constraint (I-$q_i'$-$f_i'$). Therefore, if any constraint (I-$q_i'$-$f_i'$) is violated, constraint (I-$\hat{q}_i$-$\hat{f}_i$) is also violated. An analogous argument holds for Player II.
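The heart of Steps 2 and 3 is a posterior computation. A sketch of the inner loop computing Player I's pure best response for one query, reading Player II's behavior off the LP variables (our own encoding; normalization by Pr[signal] is omitted since it does not affect the argmax):

```python
def pure_best_response(q1, W, p, Q2, A1, A2, x2, u):
    """For a fixed query q1 of Player I, return the response function
    mapping each possible signal of q1 to the action maximizing
    Player I's (unnormalized) posterior expected utility, where
    x2[(a2, q2, w)] gives Player II's joint query/action probability
    in world w, as in the LP of Figure 1."""
    response = {}
    for sig in {q1(w) for w in W}:
        consistent = [w for w in W if q1(w) == sig]
        def score(a1):
            # Posterior-weighted utility of a1; the normalizer
            # Pr[signal] is common to all actions, so it is dropped.
            return sum(p[w] * x2[(a2, q2, w)] * u[w][(a1, a2)][0]
                       for w in consistent for q2 in Q2 for a2 in A2)
        response[sig] = max(A1, key=score)
    return response
```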
These lemmas and the well-known fact that Nash
equilibria always exist [52] imply the following theorem:
Theorem 4.3. Nash equilibria can be found in
polynomial time for any two-player unobservable-query Socratic
game with strategically zero-sum worlds.
5. SOCRATIC GAMES WITH
OBSERVABLE QUERIES
In this section, we give efficient algorithms to find (1) a
Nash equilibrium for observable-query Socratic games with
constant-sum worlds and (2) a correlated equilibrium in the
broader class of Socratic games with strategically zero-sum
worlds. Recall that a Socratic game G = ⟨A, W, u, S, Q, p, δ⟩ with observable queries proceeds in two stages:
Stage 1: The players simultaneously choose queries q ∈ Q. Player i receives as output $q_i$, $q_{ii}$, and $q_i(w_{\text{real}})$.
Stage 2: The players simultaneously choose strategies a ∈ A. The payoff to Player i is $u^{w_{\text{real}}}_i(\mathbf{a}) - \delta_i(q_i)$.
Using backward induction, we first solve Stage 2 and then
proceed to the Stage-1 game.
For a query q ∈ Q, we would like to analyze the Stage-2 game $\hat{G}_{\mathbf{q}}$ resulting from the players making queries q in Stage 1. Technically, however, $\hat{G}_{\mathbf{q}}$ is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows $q_i(w_{\text{real}})$, and Player II knows $q_{ii}(w_{\text{real}})$. Fortunately, the situation in which
players have asymmetric private knowledge has been well
studied in the game-theory literature. A Bayesian game is a quadruple ⟨A, T, r, u⟩, where:
• Ai is the set of pure strategies for Player i.
• Ti is the set of types for Player i.
• r is a probability distribution over T; r(t) denotes the probability that each Player i has type $t_i$.
• ui : A × T → R is the payoff function for Player i.
If the players have types t and play pure strategies a,
then ui(a, t) denotes the payoff for Player i.
Initially, a type t is drawn randomly from T according to the
distribution r. Player i learns his type ti, but does not learn
any other player"s type. Player i then plays a mixed strategy
αi ∈ Ai-that is, a probability distribution over Ai-and
receives payoff ui(α, t). A strategy function is a function
hi : Ti → Ai; Player i plays the mixed strategy hi(ti) ∈ Ai
when her type is ti. A strategy-function profile h is a
Bayesian Nash equilibrium if and only if no Player i has
unilateral incentive to deviate from hi if the other players play
according to h. For a two-player Bayesian game, if $\alpha = \mathbf{h}(\mathbf{t})$, then the profile h is a Bayesian Nash equilibrium exactly when the following condition and its analogue for Player II hold: $E_{\mathbf{t} \sim r}[u_i(\alpha, \mathbf{t})] = \max_{h_i'} E_{\mathbf{t} \sim r}[u_i(\langle h_i'(t_i), \alpha_{ii} \rangle, \mathbf{t})]$. These conditions hold if and only if, for all $t_i \in T_i$ occurring with positive probability, Player i's expected utility conditioned on his type being $t_i$ is maximized by $h_i(t_i)$. A Bayesian game is constant sum if for all a ∈ A and all t ∈ T, we have $u_{i}(\mathbf{a}, \mathbf{t}) + u_{ii}(\mathbf{a}, \mathbf{t}) = c_{\mathbf{t}}$ for some constant $c_{\mathbf{t}}$ independent of a. A Bayesian game is strategically zero sum if the classical game ⟨A, u(·, t)⟩ is strategically zero sum for every t ∈ T. Whether a Bayesian game is strategically zero sum can be determined as in Theorem 3.1. (For further discussion of Bayesian games, see [25, 31].)
We now formally define the Stage-2 game as a Bayesian
game. Given a Socratic game G = ⟨A, W, u, S, Q, p, δ⟩ and
a query profile q ∈ Q, we define the Stage-2 Bayesian game
G_stage2(q) := ⟨A, T^q, p^stage2(q), u^stage2(q)⟩, where:
• A_i, the set of pure strategies for Player i, is the same
as in the original Socratic game;
• T^q_i = {q_i(w) : w ∈ W}, the set of types for Player i,
is the set of signals that can result from query q_i;
• p^stage2(q)(t) = Pr[q(w) = t | w ← p]; and
• u_i^stage2(q)(a, t) = Σ_{w∈W} Pr[w ← p | q(w) = t] · u_i^w(a).
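To make this construction concrete, here is a minimal Python sketch (ours, not from the paper) that builds the type sets, type distribution, and expected utilities of G_stage2(q) from an explicit list of worlds; the names worlds, prior, payoff, and q are illustrative assumptions about how the Socratic game is encoded.

```python
from collections import defaultdict

def stage2_bayesian_game(worlds, prior, payoff, q):
    """Build the Stage-2 Bayesian game G_stage2(q).

    worlds : list of world identifiers w
    prior  : dict mapping w -> p(w), the common prior over worlds
    payoff : payoff(i, a, w) -> u_i^w(a) for a pure action profile a
    q      : pair (q_I, q_II) of query functions mapping w -> signal
    """
    # T^q_i = {q_i(w) : w in W}: the signals query q_i can return.
    types = [{qi(w) for w in worlds} for qi in q]

    # p^stage2(q)(t) = Pr[q(w) = t | w <- p].
    type_dist = defaultdict(float)
    for w in worlds:
        type_dist[tuple(qi(w) for qi in q)] += prior[w]

    # u_i^stage2(q)(a, t) = sum_w Pr[w | q(w) = t] * u_i^w(a).
    def utility(i, a, t):
        mass = type_dist.get(t, 0.0)
        if mass == 0.0:
            return 0.0
        return sum(prior[w] / mass * payoff(i, a, w)
                   for w in worlds
                   if tuple(qi(w) for qi in q) == t)

    return types, dict(type_dist), utility
```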
We now define the Stage-1 game in terms of the payoffs
for the Stage-2 games. Fix any algorithm alg that finds a
Bayesian Nash equilibrium h^{q,alg} := alg(G_stage2(q)) for each
Stage-2 game. Define value_i^alg(G_stage2(q)) to be the expected
payoff received by Player i in the Bayesian game G_stage2(q)
if each player plays according to h^{q,alg}, that is,
value_i^alg(G_stage2(q)) := Σ_{w∈W} p(w) · u_i^stage2(q)(h^{q,alg}(q(w)), q(w)).
Define the game G_stage1^alg := ⟨A^stage1, u^stage1(alg)⟩, where:
• A^stage1 := Q, the set of available queries in the Socratic
game; and
• u_i^stage1(alg)(q) := value_i^alg(G_stage2(q)) − δ_i(q_i).
That is, players choose queries q and receive payoffs
corresponding to value^alg(G_stage2(q)), less query costs.
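Continuing the sketch above, the following hypothetical pipeline tabulates the Stage-1 payoffs u_i^stage1(alg)(q) by calling a black-box Bayesian-equilibrium solver alg on each Stage-2 game; it assumes the stage2_bayesian_game helper from the previous sketch and a two-player game.

```python
from itertools import product

def stage1_payoffs(worlds, prior, payoff, queries, costs, alg):
    """Tabulate u_i^stage1(alg)(q) for every query profile q = (q_I, q_II).

    queries : pair of lists, the available query functions per player
    costs   : costs[i][q_i] = delta_i(q_i), the cost of query q_i
    alg     : black-box solver; alg(types, type_dist, utility) returns h
              with h[i][t_i] = dict mapping actions to probabilities
    """
    u_stage1 = {}
    for q in product(queries[0], queries[1]):
        types, type_dist, utility = stage2_bayesian_game(
            worlds, prior, payoff, q)
        h = alg(types, type_dist, utility)
        value = [0.0, 0.0]
        # value_i^alg: expectation over joint types of equilibrium payoff.
        for t, pt in type_dist.items():
            for a0, p0 in h[0][t[0]].items():
                for a1, p1 in h[1][t[1]].items():
                    for i in (0, 1):
                        value[i] += pt * p0 * p1 * utility(i, (a0, a1), t)
        # Stage-1 payoff: Stage-2 value less the player's own query cost.
        u_stage1[q] = tuple(value[i] - costs[i][q[i]] for i in (0, 1))
    return u_stage1
```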
Lemma 5.1. Consider an observable-query Socratic game
G = ⟨A, W, u, S, Q, p, δ⟩. Let G_stage2(q) be the Stage-2 games
for all q ∈ Q, let alg be an algorithm finding a Bayesian
Nash equilibrium in each G_stage2(q), and let G_stage1^alg be the
Stage-1 game. Let α be a Nash equilibrium for G_stage1^alg, and
let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash equilibrium
for each G_stage2(q). Then the following strategy profile is a
Nash equilibrium for G:
• In Stage 1, Player i makes query q_i with probability
α_i(q_i). (That is, set f^query(q) := α(q).)
• In Stage 2, if q is the query profile from Stage 1 and
q_i(w_real) denotes the response to Player i's query, then
Player i chooses action a_i with probability h_i^{q,alg}(q_i(w_real)).
(In other words, set f_i^resp(q, q_i(w)) := h_i^{q,alg}(q_i(w)).)
We now find equilibria in the stage games for Socratic games
with constant- or strategically zero-sum worlds. We first
show that the stage games are well structured in this setting:
Lemma 5.2. Consider an observable-query Socratic game
G = ⟨A, W, u, S, Q, p, δ⟩ with constant-sum worlds. Then
the Stage-1 game G_stage1^alg is strategically zero sum for every
algorithm alg, and every Stage-2 game G_stage2(q) is Bayesian
constant sum. If the worlds of G are strategically zero sum,
then every G_stage2(q) is Bayesian strategically zero sum.
We now show that we can efficiently compute equilibria for
these well-structured stage games.
Theorem 5.3. There exists a polynomial-time algorithm
BNE finding Bayesian Nash equilibria in strategically
zero-sum Bayesian (and thus classical strategically zero-sum or
Bayesian constant-sum) two-player games.
Proof Sketch. Let G = ⟨A, T, r, u⟩ be a strategically
zero-sum Bayesian game. Define an unobservable-query
Socratic game G* with one possible world for each t ∈ T, one
available zero-cost query q_i for each Player i so that q_i
reveals t_i, and all else as in G. Bayesian Nash equilibria in G
correspond directly to Nash equilibria in G*, and the worlds
of G* are strategically zero sum. Thus by Theorem 4.3 we
can compute Nash equilibria for G*, and thus we can
compute Bayesian Nash equilibria for G.
(LPs for zero-sum two-player Bayesian games have been
previously developed and studied [61].)
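The reduction in this proof sketch is purely mechanical; the following illustrative snippet (our naming conventions, not the paper's) shows the wiring, producing inputs in the format assumed by the earlier sketches.

```python
def socratic_from_bayesian(type_dist, u):
    """Theorem 5.3 reduction: wrap a Bayesian game <A, T, r, u> as an
    unobservable-query Socratic game G*.

    type_dist : dict mapping type profiles t -> r(t)
    u         : u(i, a, t), Player i's payoff at action profile a, types t
    """
    worlds = list(type_dist)                 # one world per type profile t
    prior = dict(type_dist)                  # p(t) = r(t)
    payoff = lambda i, a, t: u(i, a, t)      # u_i^t(a) = u_i(a, t)
    queries = [[lambda t, i=i: t[i]] for i in (0, 1)]  # q_i reveals t_i
    costs = [{qs[0]: 0.0} for qs in queries]           # both queries free
    return worlds, prior, payoff, queries, costs
```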
Theorem 5.4. We can compute a Nash equilibrium for
an arbitrary two-player observable-query Socratic game G =
⟨A, W, u, S, Q, p, δ⟩ with constant-sum worlds in polynomial
time.
Proof. Because each world of G is constant sum, Lemma
5.2 implies that the induced Stage-2 games G_stage2(q) are
all Bayesian constant sum. Thus we can use algorithm
BNE to compute a Bayesian Nash equilibrium h^{q,BNE} :=
BNE(G_stage2(q)) for each q ∈ Q, by Theorem 5.3.
Furthermore, again by Lemma 5.2, the induced Stage-1 game
G_stage1^BNE is classical strategically zero sum. Therefore we can
again use algorithm BNE to compute a Nash equilibrium
α := BNE(G_stage1^BNE), again by Theorem 5.3. Therefore, by
Lemma 5.1, we can assemble α and the h^{q,BNE}'s into a Nash
equilibrium for the Socratic game G.
We would like to extend our results on observable-query
Socratic games to Socratic games with strategically
zero-sum worlds. While we can still find Nash equilibria in the
Stage-2 games, the resulting Stage-1 game is not in
general strategically zero sum. Thus, finding Nash equilibria
in observable-query Socratic games with strategically
zero-sum worlds seems to require substantially new techniques.
However, our techniques for decomposing observable-query
Socratic games do allow us to find correlated equilibria in
this case.
Lemma 5.5. Consider an observable-query Socratic game
G = ⟨A, W, u, S, Q, p, δ⟩. Let alg be an arbitrary algorithm
that finds a Bayesian Nash equilibrium in each of the derived
Stage-2 games G_stage2(q), and let G_stage1^alg be the derived
Stage-1 game. Let φ be a correlated equilibrium for G_stage1^alg,
and let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash
equilibrium for each G_stage2(q). Then the following distribution
over pure strategies is a correlated equilibrium for G:
ψ(q, f) := φ(q) · Π_{i∈{I,II}} Π_{s∈S} Pr[f_i(q, s) ← h_i^{q,alg}(s)].
Thus to find a correlated equilibrium in an observable-query
Socratic game with strategically zero-sum worlds, we need
only algorithm BNE from Theorem 5.3 along with an
efficient algorithm for finding a correlated equilibrium in a
general game. Such an algorithm exists (the definition of
correlated equilibria can be directly translated into an LP [3]),
and therefore we have the following theorem:
Theorem 5.6. We can provide both efficient oracle
access and efficient sampling access to a correlated
equilibrium for any observable-query two-player Socratic game with
strategically zero-sum worlds.
Because the support of the correlated equilibrium may be
exponentially large, providing oracle and sampling access is
the natural way to represent the correlated equilibrium.
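Sampling access is straightforward given the pieces above: draw a query profile from φ, then draw each response independently from the appropriate h_i^{q,alg}(s). A minimal sketch, assuming φ and the h_i^{q,alg} are given explicitly as dictionaries:

```python
import random

def sample_correlated_equilibrium(phi, h, signals):
    """Draw one pure-strategy profile (q, f) from the distribution psi
    of Lemma 5.5.

    phi     : dict mapping query profiles q -> phi(q)
    h       : h[q][i][s] = dict mapping actions to probabilities, the
              mixed strategy Player i plays on signal s in G_stage2(q)
    signals : the signal set S
    """
    # Stage 1: draw the recommended query profile from phi.
    profiles, weights = zip(*phi.items())
    q = random.choices(profiles, weights=weights)[0]
    # Stage 2: draw each response independently, per player and signal,
    # exactly as the product form of psi prescribes.
    f = []
    for i in (0, 1):
        table = {}
        for s in signals:
            acts, probs = zip(*h[q][i][s].items())
            table[s] = random.choices(acts, weights=probs)[0]
        f.append(table)
    return q, f
```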
By Lemma 5.5, we can also compute correlated equilibria
in any observable-query Socratic game for which Nash
equilibria are computable in the induced G_stage2(q) games (e.g.,
when G_stage2(q) is of constant size).
Another potentially interesting model of queries in
Socratic games is what one might call public queries, in which
both the choice and the outcome of a player's query are
observable by all players in the game. (This model might be most
appropriate in the presence of corporate espionage or media
leaks, or in a setting in which the queries, and thus their
results, are made in plain view.) The techniques that we
have developed in this section also yield exactly the same
results as for observable queries. The proof is actually
simpler: with public queries, the players' payoffs are common
knowledge when Stage 2 begins, and thus Stage 2 really is a
complete-information game. (There may still be uncertainty
about the real world, but all players use the observed
signals to infer exactly the same set of possible worlds in which
w_real may lie; thus they are playing a complete-information
game against each other.) Thus we have the same results
as in Theorems 5.4 and 5.6 more simply, by solving Stage 2
using a (non-Bayesian) Nash-equilibrium finder and solving
Stage 1 as before.
Our results for observable queries are weaker than for
unobservable: in Socratic games with worlds that are
strategically zero sum but not constant sum, we find only a
correlated equilibrium in the observable case, whereas we find a
Nash equilibrium in the unobservable case. We might hope
to extend our unobservable-query techniques to observable
queries, but there is no obvious way to do so. The
fundamental obstacle is that the LP's payoff constraint becomes
nonlinear if there is any dependence on the probability that
the other player made a particular query. This dependence
arises with observable queries, suggesting that observable
Socratic games with strategically zero-sum worlds may be
harder to solve.
6. RELATED WORK
Our work was initially motivated by research in the social
sciences indicating that real people seem (irrationally)
paralyzed when they are presented with additional options. In
this section, we briefly review some of these social-science
experiments and then discuss technical approaches related
to Socratic game theory.
Prima facie, a rational agent's happiness given an added
option can only increase. However, recent research has found
that more choices tend to decrease happiness: for
example, students choosing among extra-credit options are more
likely to do extra credit if given a small subset of the choices
and, moreover, produce higher-quality work [35]. (See also
[19].) The psychology literature explores a number of
explanations: people may miscalculate their opportunity cost by
comparing their choice to a component-wise maximum of
all other options instead of the single best alternative [65],
a new option may draw undue attention to aspects of the
other options [67], and so on. The present work explores
an economic explanation of this phenomenon: information
is not free. When there are more options, a decision-maker
must spend more time to achieve a satisfactory outcome.
See, e.g., the work of Skyrms [68] for a philosophical
perspective on the role of deliberation in strategic situations.
Finally, we note the connection between Socratic games and
modal logic [34], a formalism for the logic of possibility and
necessity.
The observation that human players typically do not play
rational strategies has inspired some attempts to model
partially rational players. The typical model of this
so-called bounded rationality [36, 64, 66] is to postulate bounds
on computational power in computing the consequences of
a strategy. The work on bounded rationality [23, 24, 53, 58]
differs from the models that we consider here in that instead
of putting hard limitations on the computational power of
the agents, we instead restrict their a priori knowledge of
the state of the world, requiring them to spend time (and
therefore money/utility) to learn about it.
Partially observable stochastic games (POSGs) are a
general framework used in AI to model situations of multi-agent
planning in an evolving, unknown environment, but the
generality of POSGs seems to make them very difficult to solve [6].
Recent work has been done in developing algorithms for
restricted classes of POSGs, most notably classes of
cooperative POSGs (e.g., [20, 30]), which are very different from
the competitive strategically zero-sum games we address in
this paper.
The fundamental question in Socratic games is deciding
on the comparative value of making a more costly but more
informative query, or concluding the data-gathering phase
and picking the best option, given current information. This
tradeoff has been explored in a variety of other contexts;
a sampling of these contexts includes aggregating results
from delay-prone information sources [8], doing approximate
reasoning in intelligent systems [72], deciding when to take
the current best guess of disease diagnosis from a
belief-propagation network and when to let it continue inference
[33], among many others.
This issue can also be viewed as another perspective on
the general question of exploration versus exploitation that
arises often in AI: when is it better to actively seek
additional information instead of exploiting the knowledge one
already has? (See, e.g., [69].) Most of this work differs
significantly from our own in that it considers single-agent
planning as opposed to the game-theoretic setting. A
notable exception is the work of Larson and Sandholm [41, 42,
43, 44] on mechanism design for interacting agents whose
computation is costly and limited. They present a model
in which players must solve a computationally intractable
valuation problem, using costly computation to learn some
hidden parameters, and results for auctions and bargaining
games in this model.
7. FUTURE DIRECTIONS
Efficiently finding Nash equilibria in Socratic games with
non-strategically zero-sum worlds is probably difficult
because the existence of such an algorithm for classical games
has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54,
55]. There has, however, been some algorithmic success in
finding Nash equilibria in restricted classical settings (e.g.,
[21, 46, 47, 57]); we might hope to extend our results to
analogous Socratic games.
An efficient algorithm to find correlated equilibria in
general Socratic games seems more attainable. Suppose the
players receive recommended queries and responses. The
difficulty is that when a player considers a deviation from
his recommended query, he already knows his recommended
response in each of the Stage-2 games. In a correlated
equilibrium, a player's expected payoff generally depends on his
recommended strategy, and thus a player may deviate in
Stage 1 so as to land in a Stage-2 game where he has been
given a better than average recommended response.
(Socratic games are succinct games of superpolynomial type,
so Papadimitriou's results [56] do not imply correlated
equilibria for them.)
Socratic games can be extended to allow players to make
adaptive queries, choosing subsequent queries based on
previous results. Our techniques carry over to O(1) rounds of
unobservable queries, but it would be interesting to
compute equilibria in Socratic games with adaptive observable
queries or with ω(1) rounds of unobservable queries.
Special cases of adaptive Socratic games are closely related to
single-agent problems like minimum latency [1, 7, 26],
determining strategies for using priced information [9, 29, 37],
and an online version of minimum test cover [18, 50].
Although there are important technical distinctions between
adaptive Socratic games and these problems, approximation
techniques from this literature may apply to Socratic games.
The question of approximation raises interesting questions
even in non-adaptive Socratic games. An ε-approximate
Nash equilibrium is a strategy profile α such that no player
can increase her payoff by an additive ε by deviating from α.
Finding approximate Nash equilibria in both adaptive and
non-adaptive Socratic games is an interesting direction to
pursue.
Another natural extension is the model where query
results are stochastic. In this paper, we model a query as
deterministically partitioning the possible worlds into
subsets that the query cannot distinguish. However, one could
instead model a query as probabilistically mapping the set
of possible worlds into the set of signals. With this
modification, our unobservable-query model becomes equivalent
to the model of Bergemann and Välimäki [4, 5], in which
the result of a query is a posterior distribution over the
worlds. Our techniques allow us to compute equilibria in
such a stochastic-query model provided that each query is
represented as a table that, for each world/signal pair, lists
the probability that the query outputs that signal in that
world. It is also interesting to consider settings in which the
game"s queries are specified by a compact representation of
the relevant probability distributions. (For example, one
might consider a setting in which the algorithm has only a
sampling oracle for the posterior distributions envisioned by
Bergemann and Välimäki.) Efficiently finding equilibria in
such settings remains an open problem.
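Such a table representation is easy to state concretely. In the hypothetical sketch below (our notation, not the paper's), a deterministic partition query is just the special case in which each row of the table places probability 1 on a single signal.

```python
def table_from_partition(worlds, query):
    """Encode a deterministic (partition) query as a stochastic table:
    table[w][s] = Pr[the query outputs signal s in world w]."""
    return {w: {query(w): 1.0} for w in worlds}

def signal_distribution(table, prior):
    """Marginal distribution over signals induced by the prior p."""
    dist = {}
    for w, row in table.items():
        for s, pr in row.items():
            dist[s] = dist.get(s, 0.0) + prior[w] * pr
    return dist
```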
Another interesting setting for Socratic games is when the
set Q of available queries is given by Q = P(Γ); i.e., each
player chooses to make a set q ∈ P(Γ) of queries from a
specified groundset Γ of queries. Here we take the query
cost to be a linear function, so that δ(q) = Σ_{γ∈q} δ({γ}).
Natural groundsets include comparison queries (if my
opponent is playing strategy a_II, would I prefer to play a_I or
â_I?), strategy queries (what is my vector of payoffs if I
play strategy a_i?), and world-identity queries (is the world
w ∈ W the real world?). When one can infer a polynomial
bound on the number of queries made by a rational player,
then our results yield efficient solutions. (For example, we
can efficiently solve games in which every groundset element
γ ∈ Γ has δ({γ}) = Ω(M − m), where M and m denote
the maximum and minimum payoffs to any player in any
world.) Conversely, it is NP-hard to compute a Nash
equilibrium for such a game when every δ({γ}) ≤ 1/|W|^2, even
when the worlds are constant sum and Player II has only
a single available strategy. Thus even computing a best
response for Player I is hard. (This proof proceeds by
reduction from set cover; intuitively, for sufficiently low query
costs, Player I must fully identify the actual world through
his queries. Selecting a minimum-sized set of these queries
is hard.) Computing Player I's best response can be viewed
as maximizing a submodular function, and thus a best
response can be (1 − 1/e) ≈ 0.63-approximated greedily [14].
An interesting open question is whether this approximate
best-response calculation can be leveraged to find an
approximate Nash equilibrium.
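For intuition, the greedy procedure referenced here is the standard one for monotone submodular maximization: repeatedly add the groundset element with the largest marginal gain. The sketch below is a generic version under a cardinality budget; folding the query costs δ({γ}) into the objective for the Socratic setting is left to the caller's value function, which is assumed, not given by the paper.

```python
def greedy_submodular(ground, value, budget):
    """Greedy (1 - 1/e)-approximation for maximizing a monotone
    submodular set function under a cardinality budget.

    ground : candidate queries (the groundset Gamma)
    value  : monotone submodular set function, value(S) -> float
    budget : maximum number of queries to select
    """
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for g in ground:
            if g in chosen:
                continue
            gain = value(chosen | {g}) - value(chosen)
            if gain > best_gain:
                best, best_gain = g, gain
        if best is None:  # no remaining element has positive marginal gain
            break
        chosen.add(best)
    return chosen
```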
8. ACKNOWLEDGEMENTS
Part of this work was done while all authors were at MIT
CSAIL. We thank Erik Demaine, Natalia Hernandez
Gardiol, Claire Monteleoni, Jason Rennie, Madhu Sudan, and
Katherine White for helpful comments and discussions.
9. REFERENCES
[1] Aaron Archer and David P. Williamson. Faster
approximation algorithms for the minimum latency
problem. In Proceedings of the Symposium on Discrete
Algorithms, pages 88-96, 2003.
[2] R. J. Aumann. Subjectivity and correlation in
randomized strategies. J. Mathematical Economics,
1:67-96, 1974.
[3] Robert J. Aumann. Correlated equilibrium as an
expression of Bayesian rationality. Econometrica,
55(1):1-18, January 1987.
[4] Dick Bergemann and Juuso Välimäki. Information
acquisition and efficient mechanism design.
Econometrica, 70(3):1007-1033, May 2002.
[5] Dick Bergemann and Juuso Välimäki. Information in
mechanism design. Technical Report 1532, Cowles
Foundation for Research in Economics, 2005.
[6] Daniel S. Bernstein, Shlomo Zilberstein, and Neil
Immerman. The complexity of decentralized control of
Markov Decision Processes. Mathematics of
Operations Research, pages 819-840, 2002.
[7] Avrim Blum, Prasad Chalasani, Don Coppersmith,
Bill Pulleyblank, Prabhakar Raghavan, and Madhu
Sudan. The minimum latency problem. In Proceedings
of the Symposium on the Theory of Computing, pages
163-171, 1994.
[8] Andrei Z. Broder and Michael Mitzenmacher. Optimal
plans for aggregation. In Proceedings of the Principles
of Distributed Computing, pages 144-152, 2002.
[9] Moses Charikar, Ronald Fagin, Venkatesan
Guruswami, Jon Kleinberg, Prabhakar Raghavan, and
Amit Sahai. Query strategies for priced information.
J. Computer and System Sciences, 64(4):785-819,
June 2002.
[10] Xi Chen and Xiaotie Deng. 3-NASH is
PPAD-complete. In Electronic Colloquium on
Computational Complexity, 2005.
[11] Xi Chen and Xiaotie Deng. Settling the complexity of
2-player Nash-equilibrium. In Electronic Colloquium
on Computational Complexity, 2005.
[12] Olivier Compte and Philippe Jehiel. Auctions and
information acquisition: Sealed-bid or dynamic
formats? Technical report, Centre d'Enseignement et
de Recherche en Analyse Socio-économique, 2002.
[13] Vincent Conitzer and Tuomas Sandholm. Complexity
results about Nash equilibria. In Proceedings of the
International Joint Conference on Artificial
Intelligence, pages 765-771, 2003.
[14] Gerard Cornuejols, Marshall L. Fisher, and George L.
Nemhauser. Location of bank accounts to optimize
float: An analytic study of exact and approximate
algorithms. Management Science, 23(8), April 1977.
[15] Jacques Crémer and Fahad Khalil. Gathering
information before signing a contract. American
Economic Review, 82:566-578, 1992.
[16] Constantinos Daskalakis, Paul W. Goldberg, and
Christos H. Papadimitriou. The complexity of
computing a Nash equilibrium. In Electronic
Colloquium on Computational Complexity, 2005.
[17] Konstantinos Daskalakis and Christos H.
Papadimitriou. Three-player games are hard. In
Electronic Colloquium on Computational Complexity,
2005.
[18] K. M. J. De Bontridder, B. V. Halldórsson, M. M.
Halldórsson, C. A. J. Hurkens, J. K. Lenstra, R. Ravi,
and L. Stougie. Approximation algorithms for the test
cover problem. Mathematical Programming,
98(1-3):477-491, September 2003.
[19] Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren,
and Rick B. van Baaren. On making the right choice:
The deliberation-without-attention effect. Science,
311:1005-1007, 17 February 2006.
[20] Rosemary Emery-Montemerlo, Geoff Gordon, Jeff
Schneider, and Sebastian Thrun. Approximate
solutions for partially observable stochastic games
with common payoffs. In Autonomous Agents and
Multi-Agent Systems, 2004.
[21] Alex Fabrikant, Christos Papadimitriou, and Kunal
Talwar. The complexity of pure Nash equilibria. In
Proceedings of the Symposium on the Theory of
Computing, 2004.
[22] Kyna Fong. Multi-stage Information Acquisition in
Auction Design. Senior thesis, Harvard College, 2003.
[23] Lance Fortnow and Duke Whang. Optimality and
domination in repeated games with bounded players.
In Proceedings of the Symposium on the Theory of
Computing, pages 741-749, 1994.
[24] Yoav Freund, Michael Kearns, Yishay Mansour, Dana
Ron, Ronitt Rubinfeld, and Robert E. Schapire.
Efficient algorithms for learning to play repeated
games against computationally bounded adversaries.
In Proceedings of the Foundations of Computer
Science, pages 332-341, 1995.
[25] Drew Fudenberg and Jean Tirole. Game Theory. MIT,
1991.
[26] Michel X. Goemans and Jon Kleinberg. An improved
approximation ratio for the minimum latency problem.
Mathematical Programming, 82:111-124, 1998.
[27] Paul W. Goldberg and Christos H. Papadimitriou.
Reducibility among equilibrium problems. In
Electronic Colloquium on Computational Complexity,
2005.
[28] M. Grötschel, L. Lovász, and A. Schrijver. The
ellipsoid method and its consequences in combinatorial
optimization. Combinatorica, 1:70-89, 1981.
[29] Anupam Gupta and Amit Kumar. Sorting and
selection with structured costs. In Proceedings of the
Foundations of Computer Science, pages 416-425,
2001.
[30] Eric A. Hansen, Daniel S. Bernstein, and Shlomo
Zilberstein. Dynamic programming for partially
observable stochastic games. In National Conference
on Artificial Intelligence (AAAI), 2004.
[31] John C. Harsanyi. Games with incomplete information
played by Bayesian players. Management Science,
14(3,5,7), 1967-1968.
[32] Sergiu Hart and David Schmeidler. Existence of
correlated equilibria. Mathematics of Operations
Research, 14(1):18-25, 1989.
[33] Eric Horvitz and Geoffrey Rutledge. Time-dependent
utility and action under uncertainty. In Uncertainty in
Artificial Intelligence, pages 151-158, 1991.
[34] G. E. Hughes and M. J. Cresswell. A New
Introduction to Modal Logic. Routledge, 1996.
[35] Sheena S. Iyengar and Mark R. Lepper. When choice
is demotivating: Can one desire too much of a good
thing? J. Personality and Social Psychology,
79(6):995-1006, 2000.
[36] Ehud Kalai. Bounded rationality and strategic
complexity in repeated games. Game Theory and
Applications, pages 131-157, 1990.
[37] Sampath Kannan and Sanjeev Khanna. Selection with
monotone comparison costs. In Proceedings of the
Symposium on Discrete Algorithms, pages 10-17, 2003.
[38] L.G. Khachiyan. A polynomial algorithm in linear
programming. Doklady Akademii Nauk SSSR, 244,
1979.
[39] Daphne Koller and Nimrod Megiddo. The complexity
of two-person zero-sum games in extensive form.
Games and Economic Behavior, 4:528-552, 1992.
[40] Daphne Koller, Nimrod Megiddo, and Bernhard von
Stengel. Efficient computation of equilibria for
extensive two-person games. Games and Economic
Behavior, 14:247-259, 1996.
[41] Kate Larson. Mechanism Design for Computationally
Limited Agents. PhD thesis, CMU, 2004.
[42] Kate Larson and Tuomas Sandholm. Bargaining with
limited computation: Deliberation equilibrium.
Artificial Intelligence, 132(2):183-217, 2001.
[43] Kate Larson and Tuomas Sandholm. Costly valuation
computation in auctions. In Proceedings of the
Theoretical Aspects of Rationality and Knowledge,
July 2001.
[44] Kate Larson and Tuomas Sandholm. Strategic
deliberation and truthful revelation: An impossibility
result. In Proceedings of the ACM Conference on
Electronic Commerce, May 2004.
[45] C. E. Lemke and J. T. Howson, Jr. Equilibrium points
of bimatrix games. J. Society for Industrial and
Applied Mathematics, 12, 1964.
[46] Richard J. Lipton, Evangelos Markakis, and Aranyak
Mehta. Playing large games using simple strategies. In
Proceedings of the ACM Conference on Electronic
Commerce, pages 36-41, 2003.
[47] Michael L. Littman, Michael Kearns, and Satinder
Singh. An efficient exact algorithm for singly
connected graphical games. In Proceedings of Neural
Information Processing Systems, 2001.
[48] Steven A. Matthews and Nicola Persico. Information
acquisition and the excess refund puzzle. Technical
Report 05-015, Department of Economics, University
of Pennsylvania, March 2005.
[49] Richard D. McKelvey and Andrew McLennan.
Computation of equilibria in finite games. In
H. Amman, D. A. Kendrick, and J. Rust, editors,
Handbook of Computational Economics, volume 1,
pages 87-142. Elsevier, 1996.
[50] B.M.E. Moret and H. D. Shapiro. On minimizing a set
of tests. SIAM J. Scientific Statistical Computing,
6:983-1003, 1985.
[51] H. Moulin and J.-P. Vial. Strategically zero-sum
games: The class of games whose completely mixed
equilibria cannot be improved upon. International J.
Game Theory, 7(3/4), 1978.
[52] John F. Nash, Jr. Equilibrium points in n-person
games. Proceedings of the National Academy of
Sciences, 36:48-49, 1950.
[53] Abraham Neyman. Finitely repeated games with finite
automata. Mathematics of Operations Research,
23(3):513-552, August 1998.
[54] Christos Papadimitriou. On the complexity of the
parity argument and other inefficient proofs of
existence. J. Computer and System Sciences,
48:498-532, 1994.
[55] Christos Papadimitriou. Algorithms, games, and the
internet. In Proceedings of the Symposium on the
Theory of Computing, pages 749-753, 2001.
[56] Christos H. Papadimitriou. Computing correlated
equilibria in multi-player games. In Proceedings of the
Symposium on the Theory of Computing, 2005.
[57] Christos H. Papadimitriou and Tim Roughgarden.
Computing equilibria in multiplayer games. In
Proceedings of the Symposium on Discrete Algorithms,
2005.
[58] Christos H. Papadimitriou and Mihalis Yannakakis.
On bounded rationality and computational
complexity. In Proceedings of the Symposium on the
Theory of Computing, pages 726-733, 1994.
[59] David C. Parkes. Auction design with costly
preference elicitation. Annals of Mathematics and
Artificial Intelligence, 44:269-302, 2005.
[60] Nicola Persico. Information acquisition in auctions.
Econometrica, 68(1):135-148, 2000.
[61] Jean-Pierre Ponssard and Sylvain Sorin. The LP
formulation of finite zero-sum games with incomplete
information. International J. Game Theory,
9(2):99-105, 1980.
[62] Eric Rasmussen. Strategic implications of uncertainty
over one"s own private value in auctions. Technical
report, Indiana University, 2005.
[63] Leonardo Rezende. Mid-auction information
acquisition. Technical report, University of Illinois,
2005.
[64] Ariel Rubinstein. Modeling Bounded Rationality. MIT,
1988.
[65] Barry Schwartz. The Paradox of Choice: Why More is
Less. Ecco, 2004.
[66] Herbert Simon. Models of Bounded Rationality. MIT,
1982.
[67] I. Simonson and A. Tversky. Choice in context:
Tradeoff contrast and extremeness aversion. J.
Marketing Research, 29:281-295, 1992.
[68] Brian Skyrms. Dynamic models of deliberation and
the theory of games. In Proceedings of the Theoretical
Aspects of Rationality and Knowledge, pages 185-200,
1990.
[69] Richard Sutton and Andrew Barto. Reinforcement
Learning: An Introduction. MIT, 1998.
[70] John von Neumann and Oskar Morgenstern. Theory of
Games and Economic Behavior. Princeton, 1957.
[71] Bernhard von Stengel. Computing equilibria for
two-person games. In R. J. Aumann and S. Hart,
editors, Handbook of Game Theory with Economic
Applications, volume 3, pages 1723-1759. Elsevier,
2002.
[72] S. Zilberstein and S. Russell. Approximate reasoning
using anytime algorithms. In S. Natarajan, editor,
Imprecise and Approximate Computation. Kluwer,
1995.
train_J-37 | Finding Equilibria in Large Sequential Games of Imperfect Information | Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is ˜O(n2 ), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes-over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies. | 1. INTRODUCTION
In environments with more than one agent, an agent"s
outcome is generally affected by the actions of the other
agent(s). Consequently, the optimal action of one agent can
depend on the others. Game theory provides a normative
framework for analyzing such strategic situations. In
particular, it provides solution concepts that define what rational
behavior is in such settings. The most famous and
important solution concept is that of Nash equilibrium [36]. It is
a strategy profile (one strategy for each agent) in which no
agent has incentive to deviate to a different strategy.
However, for the concept to be operational, we need algorithmic
techniques for finding an equilibrium.
Games can be classified as either games of perfect
information or imperfect information. Chess and Go are examples
of the former, and, until recently, most game playing work
has been on games of this type. To compute an optimal
strategy in a perfect information game, an agent traverses
the game tree and evaluates individual nodes. If the agent
is able to traverse the entire game tree, she simply computes
an optimal strategy from the bottom-up, using the principle
of backward induction.1
In computer science terms, this is
done using minimax search (often in conjunction with
αβ-pruning to reduce the search tree size and thus enhance
speed). Minimax search runs in linear time in the size of the
game tree.2
The differentiating feature of games of imperfect
information, such as poker, is that they are not fully observable:
when it is an agent"s turn to move, she does not have access
to all of the information about the world. In such games,
the decision of what to do at a point in time cannot
generally be optimally made without considering decisions at all
other points in time (including ones on other paths of play)
because those other decisions affect the probabilities of
being at different states at the current point in time. Thus
the algorithms for perfect information games do not solve
games of imperfect information.
For sequential games with imperfect information, one could
try to find an equilibrium using the normal (matrix) form,
where every contingency plan of the agent is a pure strategy
for the agent.3
Unfortunately (even if equivalent strategies
1
This actually yields a solution that satisfies not only the
Nash equilibrium solution concept, but a stronger solution
concept called subgame perfect Nash equilibrium [45].
2
This type of algorithm still does not scale to huge
trees (such as in chess or Go), but effective game-playing
agents can be developed even then by evaluating
intermediate nodes using a heuristic evaluation and then treating
those nodes as leaves.
3
An -equilibrium in a normal form game with any
160
are replaced by a single strategy [27]) this representation is
generally exponential in the size of the game tree [52].
By observing that one needs to consider only sequences of
moves rather than pure strategies [41, 46, 22, 52], one arrives
at a more compact representation, the sequence form, which
is linear in the size of the game tree.4
For 2-player games,
there is a polynomial-sized (in the size of the game tree)
linear programming formulation (linear complementarity in
the non-zero-sum case) based on the sequence form such that
strategies for players 1 and 2 correspond to primal and dual
variables. Thus, the equilibria of reasonable-sized 2-player
games can be computed using this method [52, 24, 25].5
However, this approach still yields enormous (unsolvable)
optimization problems for many real-world games, such as
poker.
1.1 Our approach
In this paper, we take a different approach to tackling
the difficult problem of equilibrium computation. Instead
of developing an equilibrium-finding method per se, we
instead develop a methodology for automatically abstracting
games in such a way that any equilibrium in the smaller
(abstracted) game corresponds directly to an equilibrium in
the original game. Thus, by computing an equilibrium in
the smaller game (using any available equilibrium-finding
algorithm), we are able to construct an equilibrium in the
original game. The motivation is that an equilibrium for
the smaller game can be computed drastically faster than
for the original game.
To this end, we introduce games with ordered signals
(Section 2), a broad class of games that has enough structure
which we can exploit for abstraction purposes. Instead of
operating directly on the game tree (something we found
to be technically challenging), we instead introduce the use
of information filters (Section 2.1), which coarsen the
information each player receives. They are used in our analysis
and abstraction algorithm. By operating only in the space
of filters, we are able to keep the strategic structure of the
game intact, while abstracting out details of the game in a
way that is lossless from the perspective of equilibrium
finding. We introduce the ordered game isomorphism to describe
strategically symmetric situations and the ordered game
isomorphic abstraction transformation to take advantange of
such symmetries (Section 3). As our main equilibrium
result we have the following:
constant number of agents can be constructed in
quasipolynomial time [31], but finding an exact equilibrium is
PPAD-complete even in a 2-player game [8]. The most
prevalent algorithm for finding an equilibrium in a 2-agent
game is Lemke-Howson [30], but it takes exponentially many
steps in the worst case [44]. For a survey of equilibrium
computation in 2-player games, see [53]. Recently,
equilibriumfinding algorithms that enumerate supports (i.e., sets of
pure strategies that are played with positive probability)
have been shown efficient on many games [40], and efficient
mixed integer programming algorithms that search in the
space of supports have been developed [43]. For more than
two players, many algorithms have been proposed, but they
currently only scale to very small games [19, 34, 40].
4
There were also early techniques that capitalized in
different ways on the fact that in many games the vast majority
of pure strategies are not played in equilibrium [54, 23].
5
Recently this approach was extended to handle
computing sequential equilibria [26] as well [35].
Theorem 2 Let Γ be a game with ordered
signals, and let F be an information filter for Γ. Let
F be an information filter constructed from F by
one application of the ordered game isomorphic
abstraction transformation, and let σ be a Nash
equilibrium strategy profile of the induced game
ΓF (i.e., the game Γ using the filter F ). If σ is
constructed by using the corresponding strategies
of σ , then σ is a Nash equilibrium of ΓF .
The proof of the theorem uses an equivalent
characterization of Nash equilibria: σ is a Nash equilibrium if and
only if there exist beliefs μ (players" beliefs about unknown
information) at all points of the game reachable by σ such
that σ is sequentially rational (i.e., a best response) given μ,
where μ is updated using Bayes" rule. We can then use the
fact that σ is a Nash equilibrium to show that σ is a Nash
equilibrium considering only local properties of the game.
We also give an algorithm, GameShrink, for abstracting
the game using our isomorphism exhaustively (Section 4).
Its complexity is ˜O(n2
), where n is the number of nodes in
a structure we call the signal tree. It is no larger than the
game tree, and on nontrivial games it is drastically smaller,
so GameShrink has time and space complexity sublinear in
the size of the game tree. We present several algorithmic and
data structure related speed improvements (Section 4.1),
and we demonstrate how a simple modification to our
algorithm yields an approximation algorithm (Section 5).
1.2 Electronic commerce applications
Sequential games of imperfect information are ubiquitous,
for example in negotiation and in auctions. Often aspects
of a player"s knowledge are not pertinent for deciding what
action the player should take at a given point in the game.
On the trivial end, some aspects of a player"s knowledge are
never pertinent (e.g., whether it is raining or not has no
bearing on the bidding strategy in an art auction), and such
aspects can be completely left out of the model specification.
However, some aspects can be pertinent in certain states
of the game while they are not pertinent in other states,
and thus cannot be left out of the model completely.
Furthermore, it may be highly non-obvious which aspects are
pertinent in which states of the game. Our algorithm
automatically discovers which aspects are irrelevant in different
states, and eliminates those aspects of the game, resulting
in a more compact, equivalent game representation.
One broad application area that has this property is
sequential negotiation (potentially over multiple issues).
Another broad application area is sequential auctions
(potentially over multiple goods). For example, in those states of
a 1-object auction where bidder A can infer that his
valuation is greater than that of bidder B, bidder A can ignore all
his other information about B"s signals, although that
information would be relevant for inferring B"s exact valuation.
Furthermore, in some states of the auction, a bidder might
not care which exact other bidders have which valuations,
but cares about which valuations are held by the other
bidders in aggregate (ignoring their identities). Many open-cry
sequential auction and negotiation mechanisms fall within
the game model studied in this paper (specified in detail
later), as do certain other games in electronic commerce,
such as sequences of take-it-or-leave-it offers [42].
Our techniques are in no way specific to an application.
The main experiment that we present in this paper is on
161
a recreational game. We chose a particular poker game as
the benchmark problem because it yields an extremely
complicated and enormous game tree, it is a game of imperfect
information, it is fully specified as a game (and the data is
available), and it has been posted as a challenge problem
by others [47] (to our knowledge no such challenge
problem instances have been proposed for electronic commerce
applications that require solving sequential games).
1.3 Rhode Island Hold"em poker
Poker is an enormously popular card game played around
the world. The 2005 World Series of Poker had over $103
million dollars in total prize money, including $56 million
for the main event. Increasingly, poker players compete
in online casinos, and television stations regularly
broadcast poker tournaments. Poker has been identified as an
important research area in AI due to the uncertainty
stemming from opponents" cards, opponents" future actions, and
chance moves, among other reasons [5].
Almost since the field"s founding, game theory has been
used to analyze different aspects of poker [28; 37; 3; 51, pp.
186-219]. However, this work was limited to tiny games that
could be solved by hand. More recently, AI researchers have
been applying the computational power of modern hardware
to computing game theory-based strategies for larger games.
Koller and Pfeffer determined solutions to poker games with
up to 140,000 nodes using the sequence form and linear
programming [25]. Large-scale approximations have been
developed [4], but those methods do not provide any
guarantees about the performance of the computed strategies.
Furthermore, the approximations were designed manually
by a human expert. Our approach yields an automated
abstraction mechanism along with theoretical guarantees on
the strategies" performance.
Rhode Island Hold"em was invented as a testbed for
computational game playing [47]. It was designed so that it was
similar in style to Texas Hold"em, yet not so large that
devising reasonably intelligent strategies would be impossible.
(The rules of Rhode Island Hold"em, as well as a discussion
of how Rhode Island Hold"em can be modeled as a game
with ordered signals, that is, it fits in our model, is available
in an extended version of this paper [13].) We applied the
techniques developed in this paper to find an exact
(minimax) solution to Rhode Island Hold"em, which has a game
tree exceeding 3.1 billion nodes.
Applying the sequence form to Rhode Island Hold"em
directly without abstraction yields a linear program with
91,224,226 rows, and the same number of columns. This is
much too large for (current) linear programming algorithms
to handle. We used our GameShrink algorithm to reduce
this with lossless abstraction, and it yielded a linear program
with 1,237,238 rows and columns-with 50,428,638 non-zero
coefficients. We then applied iterated elimination of
dominated strategies, which further reduced this to 1,190,443
rows and 1,181,084 columns. (Applying iterated
elimination of dominated strategies without GameShrink yielded
89,471,986 rows and 89,121,538 columns, which still would
have been prohibitively large to solve.) GameShrink
required less than one second to perform the shrinking (i.e.,
to compute all of the ordered game isomorphic abstraction
transformations). Using a 1.65GHz IBM eServer p5 570 with
64 gigabytes of RAM (the linear program solver actually
needed 25 gigabytes), we solved it in 7 days and 17 hours
using the interior-point barrier method of CPLEX version
9.1.2. We recently demonstrated our optimal Rhode Island
Hold"em poker player at the AAAI-05 conference [14], and
it is available for play on-line at http://www.cs.cmu.edu/
~gilpin/gsi.html.
While others have worked on computer programs for
playing Rhode Island Hold"em [47], no optimal strategy has been
found before. This is the largest poker game solved to date
by over four orders of magnitude.
2. GAMES WITH ORDERED SIGNALS
We work with a slightly restricted class of games, as
compared to the full generality of the extensive form. This class,
which we call games with ordered signals, is highly
structured, but still general enough to capture a wide range of
strategic situations. A game with ordered signals consists
of a finite number of rounds. Within a round, the players
play a game on a directed tree (the tree can be different in
different rounds). The only uncertainty players face stems
from private signals the other players have received and from
the unknown future signals. In other words, players observe
each others" actions, but potentially not nature"s actions.
In each round, there can be public signals (announced to all
players) and private signals (confidentially communicated
to individual players). For simplicity, we assume-as is the
case in most recreational games-that within each round,
the number of private signals received is the same across
players (this could quite likely be relaxed). We also assume
that the legal actions that a player has are independent of
the signals received. For example, in poker, the legal
betting actions are independent of the cards received. Finally,
the strongest assumption is that there is a partial ordering
over sets of signals, and the payoffs are increasing (not
necessarily strictly) in these signals. For example, in poker, this
partial ordering corresponds exactly to the ranking of card
hands.
Definition 1. A game with ordered signals is a tuple
Γ = I, G, L, Θ, κ, γ, p, , ω, u where:
1. I = {1, . . . , n} is a finite set of players.
2. G = G1
, . . . , Gr
, Gj
=
`
V j
, Ej
´
, is a finite collection
of finite directed trees with nodes V j
and edges Ej
. Let
Zj
denote the leaf nodes of Gj
and let Nj
(v) denote
the outgoing neighbors of v ∈ V j
. Gj
is the stage game
for round j.
3. L = L1
, . . . , Lr
, Lj
: V j
\ Zj
→ I indicates which
player acts (chooses an outgoing edge) at each internal
node in round j.
4. Θ is a finite set of signals.
5. κ = κ1
, . . . , κr
and γ = γ1
, . . . , γr
are vectors of
nonnegative integers, where κj
and γj
denote the
number of public and private signals (per player),
respectively, revealed in round j. Each signal θ ∈ Θ may
only be revealed once, and in each round every player
receives the same number of private signals, so we
require
Pr
j=1 κj
+ nγj
≤ |Θ|. The public information
revealed in round j is αj
∈ Θκj
and the public
information revealed in all rounds up through round j
is ˜αj
=
`
α1
, . . . , αj
´
. The private information
revealed to player i ∈ I in round j is βj
i ∈ Θγj
and
the private information revaled to player i ∈ I in all
rounds up through round j is ˜βj
i =
`
β1
i , . . . , βj
i
´
. We
162
also write ˜βj
=
˜βj
1, . . . , ˜βj
n
to represent all private
information up through round j, and
˜β
j
i , ˜βj
−i
=
˜βj
1, . . . , ˜βj
i−1, ˜β
j
i , ˜βj
i+1, . . . , ˜βj
n
is ˜βj
with ˜βj
i replaced
with ˜β
j
i . The total information revealed up through
round j,
˜αj
, ˜βj
, is said to be legal if no signals are
repeated.
6. p is a probability distribution over Θ, with p(θ) > 0
for all θ ∈ Θ. Signals are drawn from Θ according
to p without replacement, so if X is the set of signals
already revealed, then
p(x | X) =
(
p(x)P
y /∈X p(y)
if x /∈ X
0 if x ∈ X.
7. is a partial ordering of subsets of Θ and is defined
for at least those pairs required by u.
8. ω :
rS
j=1
Zj
→ {over, continue} is a mapping of
terminal nodes within a stage game to one of two
values: over, in which case the game ends, or continue,
in which case the game continues to the next round.
Clearly, we require ω(z) = over for all z ∈ Zr
. Note
that ω is independent of the signals. Let ωj
over = {z ∈
Zj
| ω(z) = over} and ωj
cont = {z ∈ Zj
| ω(z) =
continue}.
9. u = (u1
, . . . , ur
), uj
:
j−1
k=1
ωk
cont × ωj
over ×
j
k=1
Θκk
×
n
i=1
j
k=1
Θγk
→ Rn
is a utility function such that for
every j, 1 ≤ j ≤ r, for every i ∈ I, and for every
˜z ∈
j−1
k=1
ωk
cont × ωj
over, at least one of the following two
conditions holds:
(a) Utility is signal independent: uj
i (˜z, ϑ) = uj
i (˜z, ϑ )
for all legal ϑ, ϑ ∈
j
k=1
Θκk
×
n
i=1
j
k=1
Θγk
.
(b) is defined for all legal signals (˜αj
, ˜βj
i ), (˜αj
, ˜β j
i )
through round j and a player"s utility is increasing
in her private signals, everything else equal:
˜αj
, ˜βj
i
˜αj
, ˜β j
i
=⇒
ui
˜z, ˜αj
,
˜βj
i , ˜βj
−i
≥ ui
˜z, ˜αj
,
˜β
j
i , ˜βj
−i
.
We will use the term game with ordered signals and the
term ordered game interchangeably.
2.1 Information filters
In this subsection, we define an information filter for
ordered games. Instead of completely revealing a signal (either
public or private) to a player, the signal first passes through
this filter, which outputs a coarsened signal to the player. By
varying the filter applied to a game, we are able to obtain
a wide variety of games while keeping the underlying action
space of the game intact. We will use this when designing
our abstraction techniques. Formally, an information filter
is as follows.
Definition 2. Let Γ = I, G, L, Θ, κ, γ, p, , ω, u be an
ordered game. Let Sj
⊆
j
k=1
Θκk
×
j
k=1
Θγk
be the set of
legal signals (i.e., no repeated signals) for one player through
round j. An information filter for Γ is a collection F =
F1
, . . . , Fr
where each Fj
is a function Fj
: Sj
→ 2Sj
such that each of the following conditions hold:
1. (Truthfulness) (˜αj
, ˜βj
i ) ∈ Fj
(˜αj
, ˜βj
i ) for all legal (˜αj
, ˜βj
i ).
2. (Independence) The range of Fj
is a partition of Sj
.
3. (Information preservation) If two values of a signal are
distinguishable in round k, then they are
distinguishable fpr each round j > k. Let mj
=
Pj
l=1 κl
+γl
. We
require that for all legal (θ1, . . . , θmk , . . . , θmj ) ⊆ Θ and
(θ1, . . . , θmk , . . . , θmj ) ⊆ Θ:
(θ1, . . . , θmk ) /∈ Fk
(θ1, . . . , θmk ) =⇒
(θ1, . . . , θmk , . . . , θmj ) /∈ Fj
(θ1, . . . , θmk , . . . , θmj ).
A game with ordered signals Γ and an information filter
F for Γ defines a new game ΓF . We refer to such games as
filtered ordered games. We are left with the original game
if we use the identity filter Fj
˜αj
, ˜βj
i
=
n
˜αj
, ˜βj
i
o
. We
have the following simple (but important) result:
Proposition 1. A filtered ordered game is an extensive
form game satisfying perfect recall.
A simple proof proceeds by constructing an extensive form
game directly from the ordered game, and showing that it
satisfies perfect recall. In determining the payoffs in a game
with filtered signals, we take the average over all real signals
in the filtered class, weighted by the probability of each real
signal occurring.
2.2 Strategies and Nash equilibrium
We are now ready to define behavior strategies in the
context of filtered ordered games.
Definition 3. A behavior strategy for player i in round
j of Γ = I, G, L, Θ, κ, γ, p, , ω, u with information filter
F is a probability distribution over possible actions, and is
defined for each player i, each round j, and each v ∈ V j
\Zj
for Lj
(v) = i:
σj
i,v :
j−1
k=1
ωk
cont×Range
Fj
→ Δ
n
w ∈ V j
| (v, w) ∈ Ej
o
.
(Δ(X) is the set of probability distributions over a finite set
X.) A behavior strategy for player i in round j is σj
i =
(σj
i,v1
, . . . , σj
i,vm
) for each vk ∈ V j
\ Zj
where Lj
(vk) = i.
A behavior strategy for player i in Γ is σi =
`
σ1
i , . . . , σr
i
´
.
A strategy profile is σ = (σ1, . . . , σn). A strategy profile with
σi replaced by σi is (σi, σ−i) = (σ1, . . . , σi−1, σi, σi+1, . . . , σn).
By an abuse of notation, we will say player i receives an
expected payoff of ui(σ) when all players are playing the
strategy profile σ. Strategy σi is said to be player i"s best
response to σ−i if for all other strategies σi for player i we
have ui(σi, σ−i) ≥ ui(σi, σ−i). σ is a Nash equilibrium if,
for every player i, σi is a best response for σ−i. A Nash
equilibrium always exists in finite extensive form games [36],
and one exists in behavior strategies for games with perfect
recall [29]. Using these observations, we have the following
corollary to Proposition 1:
163
Corollary 1. For any filtered ordered game, a Nash
equilibrium exists in behavior strateges.
3. EQUILIBRIUM-PRESERVING
ABSTRACTIONS
In this section, we present our main technique for reducing
the size of games. We begin by defining a filtered signal tree
which represents all of the chance moves in the game. The
bold edges (i.e. the first two levels of the tree) in the game
trees in Figure 1 correspond to the filtered signal trees in
each game.
Definition 4. Associated with every ordered game Γ =
I, G, L, Θ, κ, γ, p, , ω, u and information filter F is a
filtered signal tree, a directed tree in which each node
corresponds to some revealed (filtered) signals and edges
correspond to revealing specific (filtered) signals. The nodes in the
filtered signal tree represent the set of all possible revealed
filtered signals (public and private) at some point in time. The
filtered public signals revealed in round j correspond to the
nodes in the κj
levels beginning at level
Pj−1
k=1
`
κk
+ nγk
´
and the private signals revealed in round j correspond to
the nodes in the nγj
levels beginning at level
Pj
k=1 κk
+
Pj−1
k=1 nγk
. We denote children of a node x as N(x). In
addition, we associate weights with the edges corresponding
to the probability of the particular edge being chosen given
that its parent was reached.
In many games, there are certain situations in the game
that can be thought of as being strategically equivalent to
other situations in the game. By melding these situations
together, it is possible to arrive at a strategically
equivalent smaller game. The next two definitions formalize this
notion via the introduction of the ordered game isomorphic
relation and the ordered game isomorphic abstraction
transformation.
Definition 5. Two subtrees beginning at internal nodes
x and y of a filtered signal tree are ordered game
isomorphic if x and y have the same parent and there is a
bijection f : N(x) → N(y), such that for w ∈ N(x) and
v ∈ N(y), v = f(w) implies the weights on the edges (x, w)
and (y, v) are the same and the subtrees beginning at w and
v are ordered game isomorphic. Two leaves
(corresponding to filtered signals ϑ and ϑ up through round r) are
ordered game isomorphic if for all ˜z ∈
r−1
j=1
ωj
cont × ωr
over,
ur
(˜z, ϑ) = ur
(˜z, ϑ ).
Definition 6. Let Γ = I, G, L, Θ, κ, γ, p, , ω, u be an
ordered game and let F be an information filter for Γ. Let
ϑ and ϑ be two nodes where the subtrees in the induced
filtered signal tree corresponding to the nodes ϑ and ϑ are
ordered game isomorphic, and ϑ and ϑ are at either levelPj−1
k=1
`
κk
+ nγk
´
or
Pj
k=1 κk
+
Pj−1
k=1 nγk
for some round
j. The ordered game isomorphic abstraction transformation
is given by creating a new information filter F :
F j
˜αj
, ˜βj
i
=
8
<
:
Fj
˜αj
, ˜βj
i
if
˜αj
, ˜βj
i
/∈ ϑ ∪ ϑ
ϑ ∪ ϑ if
˜αj
, ˜βj
i
∈ ϑ ∪ ϑ .
Figure 1 shows the ordered game isomorphic abstraction
transformation applied twice to a tiny poker game.
Theorem 2, our main equilibrium result, shows how the ordered
game isomorphic abstraction transformation can be used to
compute equilibria faster.
Theorem 2. Let Γ = I, G, L, Θ, κ, γ, p, , ω, u be an
ordered game and F be an information filter for Γ. Let F be
an information filter constructed from F by one application
of the ordered game isomorphic abstraction transformation.
Let σ be a Nash equilibrium of the induced game ΓF . If
we take σj
i,v
˜z, Fj
˜αj
, ˜βj
i
= σ j
i,v
˜z, F j
˜αj
, ˜βj
i
, σ is
a Nash equilibrium of ΓF .
Proof. For an extensive form game, a belief system μ
assigns a probability to every decision node x such thatP
x∈h μ(x) = 1 for every information set h. A strategy
profile σ is sequentially rational at h given belief system μ if
ui(σi, σ−i | h, μ) ≥ ui(τi, σ−i | h, μ)
for all other strategies τi, where i is the player who controls
h. A basic result [33, Proposition 9.C.1] characterizing Nash
equilibria dictates that σ is a Nash equilibrium if and only
if there is a belief system μ such that for every information
set h with Pr(h | σ) > 0, the following two conditions hold:
(C1) σ is sequentially rational at h given μ; and (C2) μ(x) =
Pr(x | σ)
Pr(h | σ)
for all x ∈ h. Since σ is a Nash equilibrium of Γ ,
there exists such a belief system μ for ΓF . Using μ , we will
construct a belief system μ for Γ and show that conditions
C1 and C2 hold, thus supporting σ as a Nash equilibrium.
Fix some player i ∈ I. Each of i"s information sets in some
round j corresponds to filtered signals Fj
˜α∗j
, ˜β∗j
i
, history
in the first j − 1 rounds (z1, . . . , zj−1) ∈
j−1
k=1
ωk
cont, and
history so far in round j, v ∈ V j
\ Zj
. Let ˜z = (z1, . . . , zj−1, v)
represent all of the player actions leading to this
information set. Thus, we can uniquely specify this information set
using the information
Fj
˜α∗j
, ˜β∗j
i
, ˜z
.
Each node in an information set corresponds to the
possible private signals the other players have received. Denote
by ˆβ some legal

(F^j(˜α^j, ˜β^j_1), . . . , F^j(˜α^j, ˜β^j_{i−1}), F^j(˜α^j, ˜β^j_{i+1}), . . . , F^j(˜α^j, ˜β^j_n)).

In other words, there exists (˜α^j, ˜β^j_1, . . . , ˜β^j_n) such that
(˜α^j, ˜β^j_i) ∈ F^j(˜α*^j, ˜β*^j_i), (˜α^j, ˜β^j_k) ∈ F^j(˜α^j, ˜β^j_k)
for k ≠ i, and no signals are repeated. Using such a set
of signals (˜α^j, ˜β^j_1, . . . , ˜β^j_n), let ˆβ' denote (F'^j(˜α^j, ˜β^j_1), . . . ,
F'^j(˜α^j, ˜β^j_{i−1}), F'^j(˜α^j, ˜β^j_{i+1}), . . . , F'^j(˜α^j, ˜β^j_n)). (We will abuse
notation and write F'^j_{−i}(ˆβ) = ˆβ'.) We can now compute μ
directly from μ':

μ(ˆβ | F^j(˜α^j, ˜β^j_i), ˜z) =
  μ'(ˆβ' | F'^j(˜α^j, ˜β^j_i), ˜z)        if F^j(˜α^j, ˜β^j_i) = F'^j(˜α^j, ˜β^j_i) or ˆβ = ˆβ'
  p* · μ'(ˆβ' | F'^j(˜α^j, ˜β^j_i), ˜z)   if F^j(˜α^j, ˜β^j_i) ≠ F'^j(˜α^j, ˜β^j_i) and ˆβ ≠ ˆβ'
[Figure 1 (graphical) appears here: three game trees for the example, shown with information filter ranges {{J1}, {J2}, {K1}, {K2}}, {{J1,J2}, {K1}, {K2}}, and {{J1,J2}, {K1,K2}}; see the caption below.]
Figure 1: GameShrink applied to a tiny two-person four-card (two Jacks and two Kings) poker game. Next
to each game tree is the range of the information filter F. Dotted lines denote information sets, which are
labeled by the controlling player. Open circles are chance nodes with the indicated transition probabilities.
The root node is the chance node for player 1's card, and the next level is for player 2's card. The payment
from player 2 to player 1 is given below each leaf. In this example, the algorithm reduces the game tree from
53 nodes to 19 nodes.
where p* = Pr(ˆβ | F^j(˜α^j, ˜β^j_i)) / Pr(ˆβ' | F'^j(˜α^j, ˜β^j_i)). The following three claims
show that μ as calculated above supports σ as a Nash
equilibrium.

Claim 1. μ is a valid belief system for Γ_F.

Claim 2. For all information sets h with Pr(h | σ) > 0,
μ(x) = Pr(x | σ) / Pr(h | σ) for all x ∈ h.

Claim 3. For all information sets h with Pr(h | σ) > 0,
σ is sequentially rational at h given μ.
The proofs of Claims 1-3 are in an extended version of this
paper [13]. By Claims 1 and 2, we know that condition C2
holds. By Claim 3, we know that condition C1 holds. Thus,
σ is a Nash equilibrium.
3.1 Nontriviality of generalizing beyond this
model
Our model does not capture general sequential games of
imperfect information because it is restricted in two ways
(as discussed above): 1) there is a special structure
connecting the player actions and the chance actions (for one,
the players are assumed to observe each others" actions, but
nature"s actions might not be publicly observable), and 2)
there is a common ordering of signals. In this subsection we
show that removing either of these conditions can make our
technique invalid.
First, we demonstrate a failure when removing the first
assumption. Consider the game in Figure 2. (We thank
Albert Xin Jiang for providing this example.) Nodes a and
b are in the same information set, have the same parent
(chance) node, have isomorphic subtrees with the same
payoffs, and nodes c and d also have similar structural
properties. By merging the subtrees beginning at a and b, we get
the game on the right in Figure 2. In this game, player
1's only Nash equilibrium strategy is to play left. But in
the original game, player 1 knows that node c will never be
reached, and so should play right in that information set.
Figure 2: Example illustrating difficulty in
developing a theory of equilibrium-preserving abstractions
for general extensive form games.
Removing the second assumption (that the utility
functions are based on a common ordering of signals) can also
cause failure. Consider a simple three-card game with a
deck containing two Jacks (J1 and J2) and a King (K),
where player 1's utility function is based on the ordering
K ≻ J1 ∼ J2 but player 2's utility function is based on the
ordering J2 ≻ K ≻ J1. It is easy to check that in the
abstracted game (where Player 1 treats J1 and J2 as being
equivalent) the Nash equilibrium does not correspond to
a Nash equilibrium in the original game. (We thank an
anonymous person for this example.)
4. GAMESHRINK: AN EFFICIENT
ALGORITHM FOR COMPUTING ORDERED
GAME ISOMORPHIC ABSTRACTION
TRANSFORMATIONS
This section presents an algorithm, GameShrink, for
conducting the abstractions. It only needs to analyze the signal
tree discussed above, rather than the entire game tree.
We first present a subroutine that GameShrink uses. It
is a dynamic program for computing the ordered game
isomorphic relation. Again, it operates on the signal tree.
Algorithm 1. OrderedGameIsomorphic? (Γ, ϑ, ϑ')
1. If ϑ and ϑ' have different parents, then return false.
2. If ϑ and ϑ' are both leaves of the signal tree:
(a) If u^r(ϑ | ˜z) = u^r(ϑ' | ˜z) for all ˜z ∈ ×_{j=1}^{r−1} ω^j_cont × ω^r_over, then return true.
(b) Otherwise, return false.
3. Create a bipartite graph G_{ϑ,ϑ'} = (V_1, V_2, E) with V_1 = N(ϑ) and V_2 = N(ϑ').
4. For each v_1 ∈ V_1 and v_2 ∈ V_2:
If OrderedGameIsomorphic? (Γ, v_1, v_2)
Create edge (v_1, v_2)
5. Return true if G_{ϑ,ϑ'} has a perfect matching; otherwise,
return false.
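To make the recursion concrete, here is a minimal Python sketch of Algorithm 1 under an assumed signal-tree node interface (fields parent, children, is_leaf, edge_prob for the weight of the edge from the node's parent, and leaf_payoffs for the tuple of u^r values over all continuation histories); these field names are ours, not the paper's.

def ordered_game_isomorphic(x, y, memo=None):
    # Step 1: the top-level pair must be siblings. Recursive calls
    # below compare children across the two subtrees directly.
    if x.parent is not y.parent:
        return False
    return _subtrees_isomorphic(x, y, {} if memo is None else memo)

def _subtrees_isomorphic(x, y, memo):
    key = (id(x), id(y))
    if key not in memo:
        if x.is_leaf or y.is_leaf:
            # Step 2: leaves must agree on payoffs for every history z~.
            memo[key] = (x.is_leaf and y.is_leaf
                         and x.leaf_payoffs == y.leaf_payoffs)
        else:
            # Steps 3-5: bipartite graph over children, then perfect matching.
            memo[key] = _has_perfect_matching(x.children, y.children, memo)
    return memo[key]

def _has_perfect_matching(V1, V2, memo):
    # Kuhn's augmenting-path algorithm for bipartite matching.
    if len(V1) != len(V2):
        return False
    adj = [[j for j, v in enumerate(V2)
            if u.edge_prob == v.edge_prob      # equal edge weights
            and _subtrees_isomorphic(u, v, memo)]
           for u in V1]
    match = [-1] * len(V2)                     # match[j] = index into V1

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(len(V1)))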
By evaluating this dynamic program from bottom to top,
Algorithm 1 determines, in time polynomial in the size of
the signal tree, whether or not any pair of equal depth
nodes x and y are ordered game isomorphic. We can
further speed up this computation by only examining nodes
with the same parent, since we know (from step 1) that
no nodes with different parents are ordered game
isomorphic. The test in step 2(a) can be computed in O(1) time
by consulting the ≽ relation from the specification of the
game. Each call to OrderedGameIsomorphic? performs at
most one perfect matching computation on a bipartite graph
with O(|Θ|) nodes and O(|Θ|^2) edges (recall that Θ is the
set of signals). Using the Ford-Fulkerson algorithm [12] for
finding a maximal matching, this takes O(|Θ|^3) time. Let
S be the maximum number of signals possibly revealed in
the game (e.g., in Rhode Island Hold'em, S = 4 because
each of the two players has one card in the hand plus there
are two cards on the table). The number of nodes, n, in
the signal tree is O(|Θ|^S). The dynamic program visits each
node in the signal tree, with each visit requiring O(|Θ|^2)
calls to the OrderedGameIsomorphic? routine. So, it takes
O(|Θ|^S · |Θ|^3 · |Θ|^2) = O(|Θ|^{S+5}) time to compute the entire
ordered game isomorphic relation.
While this is exponential in the number of revealed
signals, we now show that it is polynomial in the size of the
signal tree, and thus polynomial in the size of the game tree,
because the signal tree is smaller than the game tree. The
number of nodes in the signal tree is

n = 1 + Σ_{i=1}^{S} Π_{j=1}^{i} (|Θ| − j + 1)

(Each term in the summation corresponds to the number of
nodes at a specific depth of the tree.) The number of leaves
is

Π_{j=1}^{S} (|Θ| − j + 1) = C(|Θ|, S) · S!

which is a lower bound on the number of nodes. For large
|Θ| we can use the relation C(n, k) ∼ n^k / k! to get

C(|Θ|, S) · S! ∼ (|Θ|^S / S!) · S! = |Θ|^S

and thus the number of leaves in the signal tree is Ω(|Θ|^S).
Thus, O(|Θ|^{S+5}) = O(n · |Θ|^5), which proves that we can
indeed compute the ordered game isomorphic relation in time
polynomial in the number of nodes, n, of the signal tree.
The algorithm often runs in sublinear time (and space) in
the size of the game tree because the signal tree is
significantly smaller than the game tree in most nontrivial games.
(Note that the input to the algorithm is not an explicit game
tree, but a specification of the rules, so the algorithm does
not need to read in the game tree.) See Figure 1. In
general, if an ordered game has r rounds, and each round"s stage
game has at least b nonterminal leaves, then the size of the
signal tree is at most 1
br of the size of the game tree. For
example, in Rhode Island Hold"em, the game tree has 3.1
billion nodes while the signal tree only has 6,632,705.
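The quoted count is easy to verify from the node-count formula of the previous subsection; a quick check in Python:

import math

theta, S = 52, 4   # Rhode Island Hold'em: 52 signals, at most 4 revealed
n = 1 + sum(math.prod(theta - j + 1 for j in range(1, i + 1))
            for i in range(1, S + 1))
print(n)           # 6632705, the signal-tree size quoted above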
Given the OrderedGameIsomorphic? routine for
determining ordered game isomorphisms in an ordered game, we
are ready to present the main algorithm, GameShrink.
Algorithm 2. GameShrink (Γ)
1. Initialize F to be the identity filter for Γ.
2. For j from 1 to r:
For each pair of sibling nodes ϑ, ϑ' at either level
Σ_{k=1}^{j−1} (κ_k + nγ_k) or Σ_{k=1}^{j} κ_k + Σ_{k=1}^{j−1} nγ_k in the
filtered (according to F) signal tree:
If OrderedGameIsomorphic?(Γ, ϑ, ϑ'), then
F^j(ϑ) ← F^j(ϑ') ← F^j(ϑ) ∪ F^j(ϑ').
3. Output F.
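A compact Python sketch of this loop, assuming hypothetical helpers levels_for_round(j) (returning the two candidate levels named in step 2) and nodes_at_level(tree, level), and reusing the isomorphism test sketched after Algorithm 1 on the tree as filtered by F; the merging is shown naively here (the union-find of Section 4.1 is what makes it efficient).

def game_shrink(tree, num_rounds, F):
    # F maps each signal-tree node to its current merged class of signals.
    for j in range(1, num_rounds + 1):
        for level in levels_for_round(j):
            nodes = nodes_at_level(tree, level)
            for a in range(len(nodes)):
                for b in range(a + 1, len(nodes)):
                    x, y = nodes[a], nodes[b]
                    # non-siblings are rejected inside the test (step 1)
                    if ordered_game_isomorphic(x, y):
                        merged = F[x] | F[y]
                        F[x] = F[y] = merged
    return F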
Given as input an ordered game Γ, GameShrink applies
the shrinking ideas presented above as aggressively as
possible. Once it finishes, there are no contractible nodes (since it
compares every pair of nodes at each level of the signal tree),
and it outputs the corresponding information filter F. The
correctness of GameShrink follows by a repeated application
of Theorem 2. Thus, we have the following result:
Theorem 3. GameShrink finds all ordered game
isomorphisms and applies the associated ordered game isomorphic
abstraction transformations. Furthermore, for any Nash
equilibrium, σ', of the abstracted game, the strategy profile
constructed for the original game from σ' is a Nash equilibrium.
The dominating factor in the run time of GameShrink is
in the r-th iteration of the main for-loop. There are at most
C(|Θ|, S) · S! nodes at this level, where we again take S to be the
maximum number of signals possibly revealed in the game.
Thus, the inner for-loop executes O((C(|Θ|, S) · S!)^2) times. As
discussed in the next subsection, we use a union-find data
structure to represent the information filter F. Each
iteration of the inner for-loop possibly performs a union
operation on the data structure; performing M operations on
a union-find data structure containing N elements takes
O(α(M, N)) amortized time per operation, where α(M, N)
is the inverse Ackermann's function [1, 49] (which grows
extremely slowly). Thus, the total time for GameShrink is
O((C(|Θ|, S) · S!)^2 · α((C(|Θ|, S) · S!)^2, |Θ|^S)). By the inequality
C(n, k) ≤ n^k / k!, this is O((|Θ|^S)^2 · α((|Θ|^S)^2, |Θ|^S)). Again,
although this is exponential in S, it is Õ(n^2), where n is the
number of nodes in the signal tree. Furthermore,
GameShrink tends to actually run in sublinear time and space in
the size of the game tree because the signal tree is
significantly smaller than the game tree in most nontrivial games,
as discussed above.
4.1 Efficiency enhancements
We designed several speed enhancement techniques for
GameShrink, and all of them are incorporated into our
implementation. One technique is the use of the union-find
data structure for storing the information filter F. This
data structure uses time almost linear in the number of
operations [49]. Initially each node in the signalling tree is
its own set (this corresponds to the identity information
filter); when two nodes are contracted they are joined into
a new set. Upon termination, the filtered signals for the
abstracted game correspond exactly to the disjoint sets in
the data structure. This is an efficient method of recording
contractions within the game tree, and the memory
requirements are only linear in the size of the signal tree.
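A minimal version of this structure, with path compression and union by rank (a generic textbook implementation, not the authors' code):

class UnionFind:
    def __init__(self, elements):
        self.parent = {e: e for e in elements}   # each node starts alone
        self.rank = {e: 0 for e in elements}

    def find(self, e):
        while self.parent[e] != e:
            self.parent[e] = self.parent[self.parent[e]]  # path compression
            e = self.parent[e]
        return e

    def union(self, a, b):                       # contract two filtered signals
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                     # union by rank
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1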
Determining whether two nodes are ordered game
isomorphic requires us to determine if a bipartite graph has a
perfect matching. We can eliminate some of these computations
by using easy-to-check necessary conditions for the ordered
game isomorphic relation to hold. One such condition is to
check that the nodes have the same number of chances of
being ranked (according to ≽) higher than, lower than, and
the same as the opponents. We can precompute these
frequencies for every game tree node. This substantially speeds
up GameShrink, and we can leverage this database across
multiple runs of the algorithm (for example, when trying
different abstraction levels; see next section). The indices
for this database depend on the private and public signals,
but not the order in which they were revealed, and thus
two nodes may have the same corresponding database
entry. This makes the database significantly more compact.
(For example in Texas Hold"em, the database is reduced by
a factor
`50
3
´`47
1
´`46
1
´
/
`50
5
´
= 20.) We store the histograms
in a 2-dimensional database. The first dimension is indexed
by the private signals, the second by the public signals. The
problem of computing the index in (either) one of the
dimensions is exactly the problem of computing a bijection
between all subsets of size r from a set of size n and
integers in {0, . . . , C(n, r) − 1}. We efficiently compute this using
the subsets' colexicographical ordering [6]. Let {c_1, . . . , c_r},
c_i ∈ {0, . . . , n − 1}, denote the r signals and assume that
c_i < c_{i+1}. We compute a unique index for this set of signals
as follows: index(c_1, . . . , c_r) = Σ_{i=1}^{r} C(c_i, i).
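The index is straightforward to compute directly; a small Python sketch:

import math

def colex_index(signals):
    # signals: the r revealed signals c_1 < ... < c_r, drawn from {0,...,n-1}
    return sum(math.comb(c, i) for i, c in enumerate(sorted(signals), start=1))

# The ten 3-subsets of {0,...,4} receive the colex ranks 0..9, e.g.:
assert colex_index([0, 1, 2]) == 0 and colex_index([2, 3, 4]) == 9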
5. APPROXIMATION METHODS
Some games are too large to compute an exact
equilibrium, even after using the presented abstraction technique.
This section discusses general techniques for computing
approximately optimal strategy profiles. For a two-player game,
we can always evaluate the worst-case performance of a
strategy, thus providing some objective evaluation of the
strength of the strategy. To illustrate this, suppose we know
player 2's planned strategy for some game. We can then fix
the probabilities of player 2's actions in the game tree as
if they were chance moves. Then player 1 is faced with a
single-agent decision problem, which can be solved bottom-up,
maximizing expected payoff at every node. Thus, we can
objectively determine the expected worst-case performance
of player 2's strategy. This will be most useful when we
want to evaluate how well a given strategy performs when
we know that it is not an equilibrium strategy. (A
variation of this technique may also be applied in n-person games
where only one player's strategies are held fixed.) This
technique provides ex post guarantees about the worst-case
performance of a strategy, and can be used independently of
the method that is used to compute the strategies.
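A sketch of this bottom-up evaluation, under an assumed tree interface where each node has kind ("leaf", "p1", or "fixed" for chance nodes and for player 2's moves with her strategy's probabilities frozen in), payoff at leaves, and children with matching probs:

def best_response_value(node):
    # Expected worst-case value of the fixed strategy, from player 1's side.
    if node.kind == "leaf":
        return node.payoff
    if node.kind == "p1":
        return max(best_response_value(c) for c in node.children)
    # chance node, or player 2 node with probabilities fixed by her strategy
    return sum(p * best_response_value(c)
               for p, c in zip(node.probs, node.children))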
5.1 State-space approximations
By slightly modifying GameShrink, we can obtain an
algorithm that yields even smaller game trees, at the expense
of losing the equilibrium guarantees of Theorem 2. Instead
of requiring the payoffs at terminal nodes to match exactly,
we can instead compute a penalty that increases as the
difference in utility between two nodes increases.
There are many ways in which the penalty function could
be defined and implemented. One possibility is to create
edge weights in the bipartite graphs used in Algorithm 1,
and then instead of requiring perfect matchings in the
unweighted graph we would instead require perfect matchings
with low cost (i.e., only consider two nodes to be ordered
game isomorphic if the corresponding bipartite graph has a
perfect matching with cost below some threshold). Thus,
with this threshold as a parameter, we have a knob to turn
that in one extreme (threshold = 0) yields an optimal
abstraction and in the other extreme (threshold = ∞) yields
a highly abstracted game (this would in effect restrict
players to ignoring all signals, but still observing actions). This
knob also begets an anytime algorithm. One can solve
increasingly less abstracted versions of the game, and evaluate
the quality of the solution at every iteration using the ex post
method discussed above.
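One way such a penalty matching could be implemented (our sketch, with an assumed precomputed penalty matrix cost between the children of the two candidate nodes, np.inf where the subtrees cannot match at all): accept the pair only if a minimum-cost perfect matching stays below the threshold.

import numpy as np
from scipy.optimize import linear_sum_assignment

def approximately_isomorphic(cost, threshold):
    cost = np.asarray(cost, dtype=float)
    if cost.shape[0] != cost.shape[1]:
        return False                             # no perfect matching possible
    try:
        rows, cols = linear_sum_assignment(cost)  # min-cost assignment
    except ValueError:                           # inf entries made it infeasible
        return False
    return cost[rows, cols].sum() <= threshold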
5.2 Algorithmic approximations
In the case of two-player zero-sum games, the
equilibrium computation can be modeled as a linear program (LP),
which can in turn be solved using the simplex method. This
approach has inherent features which we can leverage into
desirable properties in the context of solving games.
In the LP, primal solutions correspond to strategies of
player 2, and dual solutions correspond to strategies of player
1. There are two versions of the simplex method: the primal
simplex and the dual simplex. The primal simplex maintains
primal feasibility and proceeds by finding better and better
primal solutions until the dual solution vector is feasible,
at which point optimality has been reached. Analogously,
the dual simplex maintains dual feasibility and proceeds by
finding increasingly better dual solutions until the primal
solution vector is feasible. (The dual simplex method can
be thought of as running the primal simplex method on the
dual problem.) Thus, the primal and dual simplex
methods serve as anytime algorithms (for a given abstraction)
for players 2 and 1, respectively. At any point in time, they
can output the best strategies found so far.
Also, for any feasible solution to the LP, we can get bounds
on the quality of the strategies by examining the primal and
dual solutions. (When using the primal simplex method,
dual solutions may be read off of the LP tableau.) Every
feasible solution of the dual yields an upper bound on the
optimal value of the primal, and vice versa [9, p. 57]. Thus,
without requiring further computation, we get lower bounds
on the expected utility of each agent"s strategy against that
agent"s worst-case opponent.
One problem with the simplex method is that it is not a
primal-dual algorithm, that is, it does not maintain both
primal and dual feasibility throughout its execution. (In fact,
it only obtains primal and dual feasibility at the very end of
execution.) In contrast, there are interior-point methods for
linear programming that maintain primal and dual
feasibility throughout the execution. For example, many
interior-point path-following algorithms have this property [55, Ch.
5]. We observe that running such a linear programming
method yields a method for finding ε-equilibria (i.e.,
strategy profiles in which no agent can increase her expected
utility by more than ε by deviating). A threshold on ε can also
be used as a termination criterion for using the method as
an anytime algorithm. Furthermore, interior-point methods
in this class have polynomial-time worst-case run time, as
opposed to the simplex algorithm, which takes exponentially
many steps in the worst case.
6. RELATED RESEARCH
Functions that transform extensive form games have been
introduced [50, 11]. In contrast to our work, those
approaches were not for making the game smaller and easier
to solve. The main result is that a game can be derived
from another by a sequence of those transformations if and
only if the games have the same pure reduced normal form.
The pure reduced normal form is the extensive form game
represented as a game in normal form where duplicates of
pure strategies (i.e., ones with identical payoffs) are removed
and players essentially select equivalence classes of
strategies [27]. An extension to that work shows a similar result,
but for slightly different transformations and mixed reduced
normal form games [21]. Modern treatments of this prior
work on game transformations exist [38, Ch. 6], [10].
The recent notion of weak isomorphism in extensive form
games [7] is related to our notion of restricted game
isomorphism. The motivation of that work was to justify solution
concepts by arguing that they are invariant with respect
to isomorphic transformations. Indeed, the author shows,
among other things, that many solution concepts, including
Nash, perfect, subgame perfect, and sequential equilibrium,
are invariant with respect to weak isomorphisms. However,
that definition requires that the games to be tested for weak
isomorphism are of the same size. Our focus is totally
different: we find strategically equivalent smaller games. Also,
their paper does not provide algorithms.
Abstraction techniques have been used in artificial
intelligence research before. In contrast to our work, most (but
not all) research involving abstraction has been for
single-agent problems (e.g. [20, 32]). Furthermore, the use of
abstraction typically leads to sub-optimal solutions, unlike
the techniques presented in this paper, which yield
optimal solutions. A notable exception is the use of abstraction
to compute optimal strategies for the game of Sprouts [2].
However, a significant difference to our work is that Sprouts
is a game of perfect information.
One of the first pieces of research to use abstraction in
multi-agent settings was the development of partition search,
which is the algorithm behind GIB, the world"s first
expertlevel computer bridge player [17, 18]. In contrast to other
game tree search algorithms which store a particular game
position at each node of the search tree, partition search
stores groups of positions that are similar. (Typically, the
similarity of two game positions is computed by ignoring the
less important components of each game position and then
checking whether the abstracted positions are similar-in
some domain-specific expert-defined sense-to each other.)
Partition search can lead to substantial speed improvements
over α-β-search. However, it is not game theory-based (it
does not consider information sets in the game tree), and
thus does not solve for the equilibrium of a game of
imperfect information, such as poker. (Bridge is also a game
of imperfect information, and partition search does not find
the equilibrium for that game either. Instead, partition search
is used in conjunction with statistical sampling to simulate
the uncertainty in bridge. There are also other bridge programs
that use search techniques for perfect information games in
conjunction with statistical sampling and expert-defined
abstraction [48]. Such non-game-theoretic techniques are
unlikely to be competitive in poker because of the greater
importance of information hiding and bluffing.)
Another difference is that
the abstraction is defined by an expert human while our
abstractions are determined automatically.
There has been some research on the use of abstraction
for imperfect information games. Most notably, Billings et
al. [4] describe a manually constructed abstraction for Texas
Hold'em poker, and include promising results against expert
players. However, this approach has significant drawbacks.
First, it is highly specialized for Texas Hold"em. Second,
a large amount of expert knowledge and effort was used in
constructing the abstraction. Third, the abstraction does
not preserve equilibrium: even if applied to a smaller game,
it might not yield a game-theoretic equilibrium. Promising
ideas for abstraction in the context of general extensive form
games have been described in an extended abstract [39], but
to our knowledge, have not been fully developed.
7. CONCLUSIONS AND DISCUSSION
We introduced the ordered game isomorphic abstraction
transformation and gave an algorithm, GameShrink, for
abstracting the game using the isomorphism exhaustively. We
proved that in games with ordered signals, any Nash
equilibrium in the smaller abstracted game maps directly to a
Nash equilibrium in the original game.
The complexity of GameShrink is Õ(n^2), where n is the
number of nodes in the signal tree. It is no larger than the
number of nodes in the signal tree. It is no larger than the
game tree, and on nontrivial games it is drastically smaller,
so GameShrink has time and space complexity sublinear in
the size of the game tree. Using GameShrink, we found
a minimax equilibrium to Rhode Island Hold'em, a poker
game with 3.1 billion nodes in the game tree, over four
orders of magnitude more than in the largest poker game
solved previously.
To further improve scalability, we introduced an
approximation variant of GameShrink, which can be used as an
anytime algorithm by varying a parameter that controls
the coarseness of abstraction. We also discussed how (in
a two-player zero-sum game), linear programming can be
used in an anytime manner to generate approximately
optimal strategies of increasing quality. The method also yields
bounds on the suboptimality of the resulting strategies. We
are currently working on using these techniques for full-scale
2-player limit Texas Hold'em poker, a highly popular card
game whose game tree has about 10^18 nodes. That game
tree size has required us to use the approximation version of
GameShrink (as well as round-based abstraction) [16, 15].
8. REFERENCES
[1] W. Ackermann. Zum Hilbertschen Aufbau der reellen Zahlen.
Math. Annalen, 99:118-133, 1928.
[2] D. Applegate, G. Jacobson, and D. Sleator. Computer analysis
of sprouts. Technical Report CMU-CS-91-144, 1991.
[3] R. Bellman and D. Blackwell. Some two-person games involving
bluffing. PNAS, 35:600-605, 1949.
[4] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer,
T. Schauenberg, and D. Szafron. Approximating game-theoretic
optimal strategies for full-scale poker. In IJCAI, 2003.
[5] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The
challenge of poker. Artificial Intelligence, 134:201-240, 2002.
[6] B. Bollobás. Combinatorics. Cambridge University Press, 1986.
[7] A. Casajus. Weak isomorphism of extensive games.
Mathematical Social Sciences, 46:267-290, 2003.
[8] X. Chen and X. Deng. Settling the complexity of 2-player Nash
equilibrium. ECCC, Report No. 150, 2005.
[9] V. Chvátal. Linear Programming. W. H. Freeman & Co., 1983.
[10] B. P. de Bruin. Game transformations and game equivalence.
Technical note x-1999-01, University of Amsterdam, Institute
for Logic, Language, and Computation, 1999.
[11] S. Elmes and P. J. Reny. On the strategic equivalence of
extensive form games. J. of Economic Theory, 62:1-23, 1994.
[12] L. R. Ford, Jr. and D. R. Fulkerson. Flows in Networks.
Princeton University Press, 1962.
[13] A. Gilpin and T. Sandholm. Finding equilibria in large
sequential games of imperfect information. Technical Report
CMU-CS-05-158, Carnegie Mellon University, 2005.
[14] A. Gilpin and T. Sandholm. Optimal Rhode Island Hold"em
poker. In AAAI, pages 1684-1685, Pittsburgh, PA, USA, 2005.
[15] A. Gilpin and T. Sandholm. A competitive Texas Hold"em
poker player via automated abstraction and real-time
equilibrium computation. Mimeo, 2006.
[16] A. Gilpin and T. Sandholm. A Texas Hold"em poker player
based on automated abstraction and real-time equilibrium
computation. In AAMAS, Hakodate, Japan, 2006.
[17] M. L. Ginsberg. Partition search. In AAAI, pages 228-233,
Portland, OR, 1996.
[18] M. L. Ginsberg. GIB: Steps toward an expert-level
bridge-playing program. In IJCAI, Stockholm, Sweden, 1999.
[19] S. Govindan and R. Wilson. A global Newton method to
compute Nash equilibria. J. of Econ. Theory, 110:65-86, 2003.
[20] C. A. Knoblock. Automatically generating abstractions for
planning. Artificial Intelligence, 68(2):243-302, 1994.
[21] E. Kohlberg and J.-F. Mertens. On the strategic stability of
equilibria. Econometrica, 54:1003-1037, 1986.
[22] D. Koller and N. Megiddo. The complexity of two-person
zero-sum games in extensive form. Games and Economic
Behavior, 4(4):528-552, Oct. 1992.
[23] D. Koller and N. Megiddo. Finding mixed strategies with small
supports in extensive form games. International Journal of
Game Theory, 25:73-92, 1996.
[24] D. Koller, N. Megiddo, and B. von Stengel. Efficient
computation of equilibria for extensive two-person games.
Games and Economic Behavior, 14(2):247-259, 1996.
[25] D. Koller and A. Pfeffer. Representations and solutions for
game-theoretic problems. Artificial Intelligence, 94(1):167-215,
July 1997.
[26] D. M. Kreps and R. Wilson. Sequential equilibria.
Econometrica, 50(4):863-894, 1982.
[27] H. W. Kuhn. Extensive games. PNAS, 36:570-576, 1950.
[28] H. W. Kuhn. A simplified two-person poker. In Contributions
to the Theory of Games, volume 1 of Annals of Mathematics
Studies, 24, pages 97-103. Princeton University Press, 1950.
[29] H. W. Kuhn. Extensive games and the problem of information.
In Contributions to the Theory of Games, volume 2 of Annals
of Mathematics Studies, 28, pages 193-216. Princeton
University Press, 1953.
[30] C. Lemke and J. Howson. Equilibrium points of bimatrix
games. Journal of the Society for Industrial and Applied
Mathematics, 12:413-423, 1964.
[31] R. Lipton, E. Markakis, and A. Mehta. Playing large games
using simple strategies. In ACM-EC, pages 36-41, 2003.
[32] C.-L. Liu and M. Wellman. On state-space abstraction for
anytime evaluation of Bayesian networks. SIGART Bulletin,
7(2):50-57, 1996.
[33] A. Mas-Colell, M. Whinston, and J. R. Green. Microeconomic
Theory. Oxford University Press, 1995.
[34] R. D. McKelvey and A. McLennan. Computation of equilibria
in finite games. In Handbook of Computational Economics,
volume 1, pages 87-142. Elsevier, 1996.
[35] P. B. Miltersen and T. B. Sørensen. Computing sequential
equilibria for two-player games. In SODA, pages 107-116, 2006.
[36] J. Nash. Equilibrium points in n-person games. Proc. of the
National Academy of Sciences, 36:48-49, 1950.
[37] J. F. Nash and L. S. Shapley. A simple three-person poker
game. In Contributions to the Theory of Games, volume 1,
pages 105-116. Princeton University Press, 1950.
[38] A. Perea. Rationality in extensive form games. Kluwer
Academic Publishers, 2001.
[39] A. Pfeffer, D. Koller, and K. Takusagawa. State-space
approximations for extensive form games, July 2000. Talk given
at the First International Congress of the Game Theory
Society, Bilbao, Spain.
[40] R. Porter, E. Nudelman, and Y. Shoham. Simple search
methods for finding a Nash equilibrium. In AAAI, pages
664-669, San Jose, CA, USA, 2004.
[41] I. Romanovskii. Reduction of a game with complete memory to
a matrix game. Soviet Mathematics, 3:678-681, 1962.
[42] T. Sandholm and A. Gilpin. Sequences of take-it-or-leave-it
offers: Near-optimal auctions without full valuation revelation.
In AAMAS, Hakodate, Japan, 2006.
[43] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer
programming methods for finding Nash equilibria. In AAAI,
pages 495-501, Pittsburgh, PA, USA, 2005.
[44] R. Savani and B. von Stengel. Exponentially many steps for
finding a Nash equilibrium in a bimatrix game. In FOCS, pages
258-267, 2004.
[45] R. Selten. Spieltheoretische Behandlung eines Oligopolmodells
mit Nachfrageträgheit. Zeitschrift für die gesamte
Staatswissenschaft, 12:301-324, 1965.
[46] R. Selten. Evolutionary stability in extensive two-person games
- correction and further development. Mathematical Social
Sciences, 16:223-266, 1988.
[47] J. Shi and M. Littman. Abstraction methods for game theoretic
poker. In Computers and Games, pages 333-345.
Springer-Verlag, 2001.
[48] S. J. J. Smith, D. S. Nau, and T. Throop. Computer bridge: A
big win for AI planning. AI Magazine, 19(2):93-105, 1998.
[49] R. E. Tarjan. Efficiency of a good but not linear set union
algorithm. Journal of the ACM, 22(2):215-225, 1975.
[50] F. Thompson. Equivalence of games in extensive form. RAND
Memo RM-759, The RAND Corporation, Jan. 1952.
[51] J. von Neumann and O. Morgenstern. Theory of games and
economic behavior. Princeton University Press, 1947.
[52] B. von Stengel. Efficient computation of behavior strategies.
Games and Economic Behavior, 14(2):220-246, 1996.
[53] B. von Stengel. Computing equilibria for two-person games. In
Handbook of Game Theory, volume 3. North Holland,
Amsterdam, 2002.
[54] R. Wilson. Computing equilibria of two-person games from the
extensive form. Management Science, 18(7):448-460, 1972.
[55] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM,
1997.
Multi-Attribute Coalitional Games

We study coalitional games where the value of cooperation among the agents is solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these games: the Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature.

1. INTRODUCTION
When agents interact with one another, the value of their
contribution is determined by what they can do with their
skills and resources, rather than simply their identities.
Consider the problem of forming a soccer team. For a team to
be successful, a team needs some forwards, midfielders,
defenders, and a goalkeeper. The relevant attributes of the
players are their skills at playing each of the four positions.
The value of a team depends on how well its players can play
these positions. At a finer level, we can extend the model
to consider a wider range of skills, such as passing,
shooting, and tackling, but the value of a team remains solely a
function of the attributes of its players.
Consider an example from the business world.
Companies in the metals industry are usually vertically-integrated
and diversified. They have mines for various types of ores,
and also mills capable of processing and producing
different kinds of metal. They optimize their production profile
according to the market prices for their products. For
example, when the price of aluminum goes up, they will
allocate more resources to producing aluminum. However, each
company is limited by the amount of ores it has, and its
capacities in processing given kinds of ores. Two or more
companies may benefit from trading ores and processing
capacities with one another. To model the metal industry, the
relevant attributes are the amount of ores and the
processing capacities of the companies. Given the exogenous input
of market prices, the value of a group of companies will be
determined by these attributes.
Many real-world problems can be likewise modeled by
picking the right attributes. As attributes apply to both
individual agents and groups of agents, we propose the use
of coalitional game theory to understand what groups may
form and what payoffs the agents may expect in such models.
Coalitional game theory focuses on what groups of agents
can achieve, and thus connects strongly with e-commerce,
as the Internet economies have significantly enhanced the
abilities of business to identify and capitalize on profitable
opportunities of cooperation. Our goal is to understand
the computational aspects of computing the solution
concepts (stable and/or fair distribution of payoffs, formally
defined in Section 3) for coalitional games described using
attributes. Our contributions can be summarized as follows:
• We define a formal representation for coalitional games
based on attributes, and relate this representation to
others proposed in the literature. We show that when
compared to other representations, there exist games
for which a multi-attribute description can be
exponentially more succinct, and for no game is it worse.
• Given the generality of the model, positive results carry
over to other representations. We discuss two positive
results in the paper, one for the Shapley value and one
for the core, and show how these imply related results
in the literature.
• We study an approximation heuristic for the Shapley
value when its exact values cannot be found efficiently.
We provide an explicit bound on the maximum error
of the estimate, and show that the bound is
asymptotically tight. We also carry out experiments to evaluate
how the heuristic performs on random instances. (We
acknowledge that random instances may not be typical of
what happens in practice, but given the generality of our
model, they provide the most unbiased view.)
2. RELATED WORK
Coalitional game theory has been well studied in
economics [9, 10, 14]. A vast amount of literature has focused
on defining and comparing solution concepts, and
determining their existence and properties. The first algorithmic
study of coalitional games, as far as we know, is performed
by Deng and Papadimitriou in [5]. They consider coalitional
games defined on graphs, where the players are the vertices
and the value of coalition is determined by the sum of the
weights of the edges spanned by these players. This can be
efficiently modeled and generalized using attributes.
As a formal representation, multi-attribute coalitional games
are closely related to the multi-issue representation of Conitzer
and Sandholm [3] and our work on marginal contribution
networks [7]. Both of these representations are based on
dividing a coalitional game into subgames (termed issues
in [3] and rules in [7]), and aggregating the subgames via
linear combination. The key difference in our work is the
unrestricted aggregation of subgames: the aggregation could
be via a polynomial function of the attributes, or even by
treating the subgames as input to another computational
problem such as a min-cost flow problem. The relationship
of these models will be made clear after we define the
multiattribute representation in Section 4.
Another representation proposed in the literature is one
specialized for superadditive games by Conitzer and
Sandholm [2]. This representation is succinct, but to find the
values of some coalitions may require solving an NP-hard
problem. While it is possible for multi-attribute coalitional
games to efficiently represent these games, it necessarily
requires the solution to an NP-hard problem in order to find
out the values of some coalitions. In this paper, we stay
within the boundary of games that admits efficient
algorithm for determining the value of coalitions. We will
therefore not make further comparisons with [2].
The model of coalitional games with attributes has been
considered in the works of Shehory and Kraus. They model
the agents as possessing capabilities that indicate their
proficiencies in different areas, and consider how to efficiently
allocate tasks [12] and the dynamics of coalition formation
[13]. Our work differs significantly as our focus is on
reasoning about solution concepts. Our model also covers a wider
scope as attributes generalize the notion of capabilities.
Yokoo et al. have also considered a model of coalitional
games where agents are modeled by sets of skills, and these
skills in turn determine the value of coalitions [15]. There are
two major differences between their work and ours. Firstly,
Yokoo et al. assume that each skill is fundamentally different
from another, hence no two agents may possess the same
skill. Also, they focus on developing new solution concepts
that are robust with respect to manipulation by agents. Our
focus is on reasoning about traditional solution concepts.
Our work is also related to the study of cooperative games
with committee control [4]. In these games, there is usually
an underlying set of resources each controlled by a
(possibly overlapping) set of players known as the committee,
engaged in a simple game (defined in Section 3).
Multi-attribute coalitional games generalize these by considering
relationships between the committee and the resources
beyond simple games. We note that when restricted to simple
games, we derive similar results to that in [4].
3. PRELIMINARIES
In this section, we will review the relevant concepts of
coalitional game theory and its two most important solution
concepts - the Shapley value and the core. We will then
define the computational questions that will be studied in
the second half of the paper.
3.1 Coalitional Games
Throughout this paper, we assume that payoffs to groups
of agents can be freely distributed among its members. This
transferable utility assumption is commonly made in
coalitional game theory. The canonical representation of a
coalitional game with transferable utility is its characteristic form.
Definition 1. A coalitional game with transferable utility in
characteristic form is denoted by the pair ⟨N, v⟩, where
• N is the set of agents; and
• v : 2^N → R is a function that maps each group of
agents S ⊆ N to a real-valued payoff.
A group of agents in a game is known as a coalition, and the
entire set of agents is known as the grand coalition.
An important class of coalitional games is the class of
monotonic games.
Definition 2. A coalitional game is monotonic if for all
S ⊂ T ⊆ N, v(S) ≤ v(T).
Another important class of coalitional games is the class
of simple games. In a simple game, a coalition either wins,
in which case it has a value of 1, or loses, in which case it
has a value of 0. It is often used to model voting situations.
Simple games are often assumed to be monotonic, i.e., if S
wins, then for all T ⊇ S, T also wins. This coincides with
the notion of using simple games as a model for voting. If a
simple game is monotonic, then it is fully described by the
set of minimal winning coalitions, i.e., coalitions S for which
v(S) = 1 but for all coalitions T ⊂ S, v(T) = 0.
An outcome in a coalitional game specifies the utilities
the agents receive. A solution concept assigns to each
coalitional game a set of reasonable outcomes. Different
solution concepts attempt to capture in some way outcomes
that are stable and/or fair. Two of the best known solution
concepts are the Shapley value and the core.
The Shapley value is a normative solution concept that
prescribes a fair way to divide the gains from cooperation
when the grand coalition is formed. The division of payoff
to agent i is the average marginal contribution of agent i
over all possible permutations of the agents. Formally,
Definition 3. The Shapley value of agent i, φ_i(v), in game
⟨N, v⟩ is given by the following formula:

φ_i(v) = Σ_{S ⊆ N\{i}} [ |S|! (|N| − |S| − 1)! / |N|! ] (v(S ∪ {i}) − v(S))
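For small games given in characteristic form, Definition 3 can be evaluated directly; a brute-force reference sketch (exponential time; v is a callable on frozensets of players):

import math
from itertools import combinations

def shapley_value(players, v, i):
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for k in range(n):                         # |S| ranges over 0..n-1
        for S in combinations(others, k):
            S = frozenset(S)
            weight = (math.factorial(k) * math.factorial(n - k - 1)
                      / math.factorial(n))
            total += weight * (v(S | {i}) - v(S))
    return total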
The core is a descriptive solution concept that focuses on
outcomes that are stable. Stability under core means that
no set of players can jointly deviate to improve their payoffs.
Definition 4. An outcome x ∈ R^{|N|} is in the core of the
game ⟨N, v⟩ if for all S ⊆ N,

Σ_{i∈S} x_i ≥ v(S)
Note that the core of a game may be empty, i.e., there may
not exist any payoff vector that satisfies the stability
requirement for the given game.
3.2 Computational Problems
We will study the following three problems related to
solution concepts in coalitional games.
Problem 1. (Shapley Value) Given a description of the
coalitional game and an agent i, compute the Shapley value
of agent i.
Problem 2. (Core Membership) Given a description of
the coalitional game and a payoff vector x such that
Σ_{i∈N} x_i = v(N), determine if Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N.

Problem 3. (Core Non-emptiness) Given a description
of the coalitional game, determine if there exists any payoff
vector x such that Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N, and
Σ_{i∈N} x_i = v(N).
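For small games, both problems can be checked directly from the definitions; a brute-force sketch, with core non-emptiness phrased as an LP feasibility problem over all coalition constraints (SciPy used here purely for convenience):

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def in_core(players, v, x):
    # x: dict mapping each player to its payoff; assumes sum x_i = v(N)
    return all(sum(x[i] for i in S) >= v(frozenset(S)) - 1e-9
               for k in range(len(players) + 1)
               for S in combinations(players, k))

def core_nonempty(players, v):
    n = len(players)
    coalitions = [frozenset(S) for k in range(1, n + 1)
                  for S in combinations(players, k)]
    # -sum_{i in S} x_i <= -v(S) for every coalition S
    A_ub = np.array([[-1.0 if p in S else 0.0 for p in players]
                     for S in coalitions])
    b_ub = np.array([-v(S) for S in coalitions])
    A_eq = np.ones((1, n))                     # efficiency: sum x_i = v(N)
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[v(frozenset(players))],
                  bounds=[(None, None)] * n)
    return res.success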
Note that the complexity of the above problems depends
on how the game is described. All these problems will
be easy if the game is described by its characteristic form,
but only because the description takes space
exponential in the number of agents, and hence a simple brute-force
approach takes time polynomial in the size of the input description.
To properly understand the computational complexity
questions, we have to look at compact representations.
4. FORMAL MODEL
In this section, we will give a formal definition of
multi-attribute coalitional games, and show how it is related to
some of the representations discussed in the literature. We
will also discuss some limitations to our proposed approach.
4.1 Multi-Attribute Coalitional Games
A multi-attribute coalitional game (MACG) consists of
two parts: a description of the attributes of the agents,
which we termed an attribute model, and a function that
assigns values to combination of attributes. Together, they
induce a coalitional game over the agents. We first define
the attribute model.
Definition 5. An attribute model is a tuple ⟨N, M, A⟩, where
• N denotes the set of agents, of size n;
• M denotes the set of attributes, of size m;
• A ∈ R^{m×n}, the attribute matrix, describes the values
of the attributes of the agents, with A_{ij} denoting the
value of attribute i for agent j.
We can directly define a function that maps combinations
of attributes to real values. However, for many problems,
we can describe the function more compactly by computing
it in two steps: we first compute an aggregate value for
each attribute, then compute the values of combination of
attributes using only the aggregated information. Formally,
Definition 6. An aggregating function (or aggregator) takes
as input a row of the attribute matrix and a coalition S, and
summarizes the attributes of the agents in S with a single
number. We can treat it as a mapping from R^n × 2^N → R.
Aggregators often perform basic arithmetic or logical
operations. For example, it may compute the sum of the
attributes, or evaluate a Boolean expression by treating the
agents i ∈ S as true and j ∉ S as false. Analogous to the
notion of simple games, we call an aggregator simple if its
range is {0, 1}. For any aggregator, there is a set of relevant
agents, and a set of irrelevant agents. An agent i is
irrelevant to aggregator a^j if a^j(S ∪ {i}) = a^j(S) for all S ⊆ N.
A relevant agent is one that is not irrelevant.
Given the attribute matrix, an aggregator assigns a value
to each coalition S ⊆ N. Thus, each aggregator defines a
game over N. For aggregator a^j, we refer to this induced
game as the game of attribute j, and denote it with a^j(A).
When the attribute matrix is clear from the context, we may
drop A and simply denote the game as a^j. We may refer to
the game as the aggregator when no ambiguities arise.
We now define the second step of the computation with
the help of aggregators.
Definition 7. An aggregate value function takes as input
the values of the aggregators and maps these to a real value.
In this paper, we will focus on having one aggregator per
attribute. Therefore, in what follows, we will refer to the
aggregate value function as a function over the attributes.
Note that when all aggregators are simple, the aggregate
value function implicitly defines a game over the attributes,
as it assigns a value to each set of attributes T ⊆ M. We
refer to this as the game among attributes.
We now define multi-attribute coalitional game.
Definition 8. A multi-attribute coalitional game is defined
by the tuple ⟨N, M, A, a, w⟩, where
• ⟨N, M, A⟩ is an attribute model;
• a is a set of aggregators, one for each attribute; we can
treat the set together as a vector function, mapping
R^{m×n} × 2^N → R^m;
• w : R^m → R is an aggregate value function.
This induces a coalitional game with transferable payoffs
⟨N, v⟩ with players N and the value function defined by

v(S) = w(a(A, S))
Note that MACG as defined is fully capable of
representing any coalitional game ⟨N, v⟩. We can simply take the set
of attributes as equal to the set of agents, i.e., M = N, an
identity matrix for A, aggregators of sums, and the
aggregate value function w to be v.
4.2 An Example
Let us illustrate how MACG can be used to represent a
game with a simple example. Suppose there are four types
of resources in the world: gold, silver, copper, and iron, that
each agent is endowed with some amount of these resources,
and there is a fixed price for each of the resources in the
market. This game can be described using MACG with an
attribute matrix A, where A_{ij} denotes the amount of resource
i with which agent j is endowed. For each resource, the
aggregator sums together the amount of resources the agents have.
Finally, the aggregate value function takes the dot product
between the market price vector and the aggregate vector.
Note the inherent flexibility in the model: only limited
work would be required to update the game as the market
price changes, or when a new agent arrives.
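The example is easy to make concrete; a short sketch with made-up endowments and prices (all numbers here are ours, purely for illustration):

import numpy as np

A = np.array([[3., 0., 1.],    # gold held by agents 1..3
              [0., 2., 2.],    # silver
              [5., 1., 0.],    # copper
              [1., 4., 2.]])   # iron
prices = np.array([10., 4., 2., 1.])

def v(S):
    # S: a set of (0-based) agent indices
    idx = sorted(S)
    aggregates = A[:, idx].sum(axis=1)   # one sum-aggregator per attribute
    return float(prices @ aggregates)    # aggregate value function

print(v({0, 2}))   # value of the coalition of agents 1 and 3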
4.3 Relationship with Other Representations
As briefly discussed in Section 2, MACG is closely related
to two other representations in the literature, the
multiissue representation of Conitzer and Sandholm [3], and our
work on marginal contribution nets [7]. To make their
relationships clear, we first review these two representations.
We have changed the notations from the original papers to
highlight their similarities.
Definition 9. A multi-issue representation is given as a
vector of coalitional games, (v_1, v_2, . . . , v_m), each possibly
with a varying set of agents, say N_1, . . . , N_m. The
coalitional game ⟨N, v⟩ induced by the multi-issue representation has
player set N = ∪_{i=1}^{m} N_i, and for each coalition S ⊆ N,
v(S) = Σ_{i=1}^{m} v_i(S ∩ N_i). The games v_i are assumed to be
represented in characteristic form.
Definition 10. A marginal contribution net is given as a
set of rules (r_1, r_2, . . . , r_m), where rule r_i has a weight w_i,
and a pattern p_i that is a conjunction over literals
(positive or negative). The agents are represented as literals. A
coalition S is said to satisfy the pattern p_i if, when we treat
the agents i ∈ S as true and the agents j ∉ S as false, p_i(S)
evaluates to true. Denote the set of literals involved in rule i
by N_i. The coalitional game ⟨N, v⟩ induced by a marginal
contribution net has player set N = ∪_{i=1}^{m} N_i, and for each
coalition S ⊆ N, v(S) = Σ_{i : p_i(S)=true} w_i.
From these definitions, we can see the relationships among
these three representations clearly. An issue of a multi-issue
representation corresponds to an attribute in MACG.
Similarly, a rule of a marginal contribution net corresponds to
an attribute in MACG. The aggregate value functions are
simple sums and weighted sums for the respective
representations. Therefore, it is clear that MACG will be no less
succinct than either representation.
However, MACG differs in two important ways. Firstly,
there is no restriction on the operations performed by the
aggregate value function over the attributes. This is an
important generalization over the linear combination of issues
or rules in the other two approaches. In particular, there are
games for which MACG can be exponentially more compact.
The proof of the following proposition can be found in the
Appendix.
Proposition 1. Consider the parity game ⟨N, v⟩ where
coalition S ⊆ N has value v(S) = 1 if |S| is odd, and v(S) =
0 otherwise. MACG can represent the game in O(n) space.
Both the multi-issue representation and marginal contribution
nets require O(2^n) space.
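To give the flavor of the O(n) claim (the paper's own construction is in its Appendix; this encoding is our guess at a natural one), take a single attribute equal to 1 for every agent, a sum aggregator, and w(t) = t mod 2:

def parity_value(S, n):
    attribute_row = [1] * n               # A is a 1 x n matrix of ones
    t = sum(attribute_row[i] for i in S)  # sum aggregator over coalition S
    return t % 2                          # aggregate value function w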
A second important difference of MACG is that the
attribute model and the value function are cleanly separated.
As suggested in the example in Section 4.2, this often
allows us more efficient update of the values of the game as it
changes. Also, the same attribute model can be evaluated
using different value functions, and the same value function
can be used to evaluate different attribute model. Therefore,
MACG is very suitable for representing multiple games. We
believe the problems of updating games and representing
multiple games are interesting future directions to explore.
4.4 Limitation of One Aggregator per Attribute
Before focusing on one aggregator per attribute for the
rest of the paper, it is natural to wonder if any is lost per
such restriction. The unfortunate answer is yes, best
illustrated by the following. Consider again the problem of
forming a soccer team discussed in the introduction, where
we model the attributes of the agents as their ability to take
the four positions of the field, and the value of a team
depends on the positions covered. If we first aggregate each
of the attributes individually, we will lose the distributional
information of the attributes. In other words, we will not
be able to distinguish between two teams, one of which has
a player for each position, the other has one player who can
play all positions, but the rest can only play the same one
position.
This loss of distributional information can be recovered
by using aggregators that take as input multiple rows of
the attribute matrix rather than just a single row.
Alternatively, if we leave such attributes untouched, we can leave
the burden of correctly evaluating these attributes to the
aggregate value function. However, for many problems that we
found in the literature, such as the transportation domain
of [12] and the flow game setting of [4], the distribution of
attributes does not affect the value of the coalitions. In
addition, the problem may become unmanageably complex as
we introduce more complicated aggregators. Therefore, we
will focus on the representation as defined in Definition 8.
5. SHAPLEY VALUE
In this section, we focus on computational issues of
finding the Shapley value of a player in MACG. We first set
up the problem with the use of oracles to avoid
complexities arising from the aggregators. We then show that when
attributes are linearly separable, the Shapley value can be
efficiently computed. This generalizes the proofs of related
results in the literature. For the non-linearly separable case,
we consider a natural heuristic for estimating the Shapley
value, and study the heuristic theoretically and empirically.
5.1 Problem Setup
We start by noting that computing the Shapley value for
simple aggregators can be hard in general. In particular, we
can define aggregators to compute weighted majority over
its input set of agents. As noted in [6], finding the Shapley
value of a weighted majority game is #P-hard. Therefore,
discussion of complexity of Shapley value for MACG with
unrestricted aggregators is moot.
Instead of placing explicit restriction on the aggregator,
we assume that the Shapley value of the aggregator can be
answered by an oracle. For notation, let φi(u) denote the
Shapley value for some game u. We make the following
assumption:
Assumption 1. For each aggregator a^j in a MACG, there
is an associated oracle that answers the Shapley value of the
game of attribute j. In other words, φ_i(a^j) is known.
For many aggregators that perform basic operations over
their input, a polynomial time oracle for the Shapley value exists.
These include operations such as sums, and symmetric
functions when the attributes are restricted to {0, 1}. Also, when
only few agents have an effect on the aggregator, brute-force
computation for Shapley value is feasible. Therefore, the
above assumption is reasonable for many settings. In any
case, such abstraction allows us to focus on the aggregate
value function.
5.2 Linearly Separable Attributes
When the aggregate value function can be written as a
linear function of the attributes, the Shapley value of the
game can be efficiently computed.
Theorem 1. Given a game ⟨N, v⟩ represented as a MACG
⟨N, M, A, a, w⟩, if the aggregate value function can be
written as a linear function of its attributes, i.e.,

w(a(A, S)) = Σ_{j=1}^{m} c_j a^j(A, S),

then the Shapley value of agent i in ⟨N, v⟩ is given by

φ_i(v) = Σ_{j=1}^{m} c_j φ_i(a^j)    (1)
Proof. First, we note that the Shapley value satisfies an
additivity axiom [11]: φ_i(a + b) = φ_i(a) + φ_i(b), where
⟨N, a + b⟩ is a game defined by (a + b)(S) = a(S) + b(S)
for all S ⊆ N. It is also clear that the Shapley value satisfies
scaling, namely φ_i(αv) = αφ_i(v), where (αv)(S) = αv(S)
for all S ⊆ N. Since the aggregate value function can be
expressed as a weighted sum of games of attributes,

φ_i(v) = φ_i(w(a)) = φ_i(Σ_{j=1}^{m} c_j a^j) = Σ_{j=1}^{m} c_j φ_i(a^j)
Many positive results regarding efficient computation of
Shapley value in the literature depends on some form of
linearity. Examples include the edge-spanning game on graphs
by Deng and Papadimitriou [5], the multi-issue
representation of [3], and the marginal contribution nets of [7]. The
key to determine if the Shapley value can be efficiently
computed depends on the linear separability of attributes. Once
this is satisfied, as long as the Shapley value of the game of
attributes can be efficiently determined, the Shapley value
of the entire game can be efficiently computed.
Corollary 1. The Shapley value for the edge-spanning
game of [5], games in multi-issue representation [3], and
games in marginal contribution nets [7], can be computed in
polynomial time.
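To make Equation (1) concrete, here is a minimal Python sketch (all names are illustrative, not from the paper): a brute-force computation of per-attribute Shapley values stands in for the oracles of Assumption 1, and the per-attribute values are combined linearly as in Theorem 1.

```python
from itertools import permutations

def shapley(players, value):
    """Brute-force Shapley value of the game `value` (a function from
    frozensets of players to reals). Stands in for the per-attribute
    oracle of Assumption 1; exponential, for illustration only."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for i in order:
            before = value(frozenset(coalition))
            coalition.add(i)
            phi[i] += value(frozenset(coalition)) - before
    return {i: p / len(orders) for i, p in phi.items()}

def linear_macg_shapley(players, attribute_games, coeffs):
    """Equation (1): phi_i(v) = sum_j c_j * phi_i(a^j) when
    w(a) = sum_j c_j * a^j."""
    total = {i: 0.0 for i in players}
    for a_j, c_j in zip(attribute_games, coeffs):
        phi_j = shapley(players, a_j)
        for i in players:
            total[i] += c_j * phi_j[i]
    return total

# Two attributes over three players: a1 is a unanimity game on {1, 2};
# a2 counts the members of {2, 3} present in the coalition.
players = [1, 2, 3]
a1 = lambda S: 1.0 if {1, 2} <= S else 0.0
a2 = lambda S: float(len(S & {2, 3}))
print(linear_macg_shapley(players, [a1, a2], [3.0, 2.0]))
# -> {1: 1.5, 2: 3.5, 3: 2.0}, which sums to v(N) = 3*1 + 2*2 = 7
```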
5.3 Polynomial Combination of Attributes
When the aggregate value function cannot be expressed
as a linear function of its attributes, computing the Shapley
value exactly is difficult. Here, we will focus on aggregate
value functions that can be expressed as polynomials of the
attributes. If we do not place a limit on the degree
of the polynomial, and the game ⟨N, v⟩ is not necessarily
monotonic, the problem is #P-hard.

Theorem 2. Computing the Shapley value of a MACG
⟨N, M, A, a, w⟩, when w can be an arbitrary polynomial of
the aggregates a, is #P-hard, even when the Shapley value
of each aggregator can be efficiently computed.
The proof is via reduction from three-dimensional matching,
and details can be found in the Appendix.
Even if we restrict ourselves to monotonic games, and
non-negative coefficients for the polynomial aggregate value
function, computing the exact Shapley value can still be
hard. For example, suppose there are two attributes. All
agents in some set B ⊆ N possess the first attribute, and all
agents in some set C ⊆ N possess the second, and B and C
are disjoint. For a coalition S ⊆ N, the aggregator for the
first attribute evaluates to 1 if and only if |S ∩ B| ≥ b′, and similarly,
the aggregator for the second evaluates to 1 if and only if
|S ∩ C| ≥ c′. Let the cardinalities of the sets B and C be b
and c. We can verify that the Shapley value of an agent i in
B equals

φ_i = (1/b) Σ_{i=0}^{b′−1} [C(b, i) · C(c, c′−1) / C(b+c, c′+i−1)] · (c − c′ + 1)/(b + c − c′ − i + 1),

where C(n, k) denotes the binomial coefficient.
The equation corresponds to a weighted sum of probability
values of hypergeometric random variables. The
correspondence with the hypergeometric distribution is due to the
sampling-without-replacement nature of the Shapley value. As far as we
know, there is no closed-form formula to evaluate the sum
above. In addition, as the number of attributes involved
increases, we move to multivariate hypergeometric random
variables, and the number of summands grows exponentially
in the number of attributes. Therefore, it is highly unlikely
that the exact Shapley value can be determined efficiently,
and so we look for approximations.
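Although the sum above has no known closed form, very small instances of this two-threshold example can still be checked exactly by brute force. A sketch (hypothetical helper names; exponential in the number of players, so only suitable for tiny games):

```python
from itertools import permutations

def shapley_two_threshold(b, c, b_thr, c_thr):
    """Exact Shapley values for the two-attribute example:
    v(S) = 1 iff |S ∩ B| >= b_thr and |S ∩ C| >= c_thr,
    with |B| = b and |C| = c disjoint."""
    B = set(range(b))
    C = set(range(b, b + c))
    players = sorted(B | C)

    def v(S):
        return 1.0 if len(S & B) >= b_thr and len(S & C) >= c_thr else 0.0

    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        S = set()
        for i in order:
            phi[i] += v(S | {i}) - v(S)
            S.add(i)
    return {i: p / len(orders) for i, p in phi.items()}

# e.g. b = 3, c = 2, thresholds b' = 2 and c' = 1
print(shapley_two_threshold(3, 2, 2, 1))
```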
5.3.1 Approximation
First, we need criteria for evaluating how well an
estimate, ˆφ, approximates the true Shapley value, φ. We
consider the following three natural criteria:

• Maximum underestimate: max_i φ_i/ˆφ_i
• Maximum overestimate: max_i ˆφ_i/φ_i
• Total variation: (1/2) Σ_i |φ_i − ˆφ_i|, or alternatively max_S |Σ_{i∈S} φ_i − Σ_{i∈S} ˆφ_i|
The total variation criterion is more meaningful when we
normalize the game to having a value of 1 for the grand
coalition, i.e., v(N) = 1. We can also define additive
analogues of the under- and overestimates, especially when the
games are normalized.

We will assume for now that the aggregate value
function is a polynomial over the attributes with non-negative
coefficients. We will also assume that the aggregators are
simple. We will evaluate a specific heuristic that is
analogous to Equation (1). Suppose the aggregate value function can
be written as a polynomial with p terms:

w(a(A, S)) = Σ_{j=1}^{p} c_j a^{j(1)}(A, S) a^{j(2)}(A, S) · · · a^{j(k_j)}(A, S)    (2)

For term j, the coefficient of the term is c_j, its degree is k_j,
and the attributes involved in the term are j(1), . . . , j(k_j).
We compute an estimate ˆφ of the Shapley value as

ˆφ_i = Σ_{j=1}^{p} Σ_{l=1}^{k_j} (c_j/k_j) φ_i(a^{j(l)})    (3)
The idea behind the estimate is that for each term, we divide
the value of the term equally among all its attributes; this
is represented by the factor c_j/k_j. Then, for each attribute
of an agent, we assign the agent a share of value from the
attribute, determined by the Shapley value of the simple game
of that attribute. Without considering the details of the
simple games, this constitutes a fair (but blind) rule of sharing.
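A minimal sketch of the estimator in Equation (3), assuming each term is given as a coefficient plus the list of attribute indices it involves, and that the per-attribute Shapley values come from the oracles of Assumption 1 (all names are illustrative):

```python
def heuristic_shapley(players, terms, attr_shapley):
    """Equation (3): each term's coefficient c_j is split equally
    among its k_j attributes (the factor c_j / k_j), and each
    attribute's portion is shared according to the Shapley value of
    that attribute's simple game.

    terms:        list of (c_j, [attribute indices in term j])
    attr_shapley: attr_shapley[j][i] = phi_i(a^j), from the oracles
    """
    estimate = {i: 0.0 for i in players}
    for c_j, attrs in terms:
        share = c_j / len(attrs)
        for j in attrs:
            for i in players:
                estimate[i] += share * attr_shapley[j][i]
    return estimate
```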
5.3.2 Theoretical analysis of heuristic
We can derive a simple and tight bound for the maximum
(multiplicative) underestimate of the heuristic estimate.

Theorem 3. Given a game ⟨N, v⟩ represented as a MACG
⟨N, M, A, a, w⟩, suppose w can be expressed as a
polynomial function of its attributes (cf. Equation (2)). Let K =
max_j k_j, i.e., the maximum degree of the polynomial. Let ˆφ
denote the estimated Shapley value using Equation (3), and
φ the true Shapley value. Then for all i ∈ N, φ_i ≤ K ˆφ_i.
Proof. We bound the maximum underestimate term by term.
Let t_j be the j-th term of the polynomial. We note
that the term can be treated as a game among attributes,
as it assigns a value to each coalition S ⊆ N. Without loss
of generality, renumber attributes j(1) through j(k_j) as 1
through k_j, so that

t_j(S) = c_j Π_{l=1}^{k_j} a^l(A, S)

To make the equations less cluttered, let

B(N, S) = |S|!(|N| − |S| − 1)! / |N|!

and, for a game a, let the contribution of agent i to group S, i ∉ S, be

Δ_i(a, S) = a(S ∪ {i}) − a(S)

Writing ã_j(S) = Π_{l=1}^{k_j} a^l(A, S) for the term without its coefficient, the true Shapley value of the game t_j is

φ_i(t_j) = c_j Σ_{S⊆N\{i}} B(N, S) Δ_i(ã_j, S)

For each coalition S with i ∉ S, Δ_i(ã_j, S) = 1 if and only if, for
at least one attribute, say l∗, Δ_i(a^{l∗}, S) = 1. Therefore, if
we sum over all the attributes, we are sure to have included l∗:

φ_i(t_j) ≤ c_j Σ_{l=1}^{k_j} Σ_{S⊆N\{i}} B(N, S) Δ_i(a^l, S) = k_j Σ_{l=1}^{k_j} (c_j/k_j) φ_i(a^l) = k_j ˆφ_i(t_j)

where ˆφ_i(t_j) denotes the portion of the estimate contributed by
term j. Summing over the terms, we see that the worst-case
underestimate is by a factor of the maximum degree.
Without loss of generality, since the bound is
multiplicative, we can normalize the game to having v(N) = 1. As a
corollary, because the estimate can fall short of the true value
on any set by at most a factor of K, we obtain a bound on the
total variation:

Corollary 2. The total variation between the estimated
Shapley value and the true Shapley value, for a K-degree bounded
polynomial aggregate value function, is at most (K − 1)/K.
We can show that this bound is tight.

Example 1. Consider a game with n players and K
attributes, with aggregate value function the product of all K
attributes. Let the first (n − 1) agents be members of the first
(K − 1) attributes, with each corresponding aggregator
returning 1 if any one of the first (n − 1) agents is present.
Let the n-th agent be the sole member of the K-th attribute.
The estimated Shapley value will assign ((K − 1)/K) · (1/(n − 1))
to each of the first (n − 1) agents and 1/K to the n-th agent.
However, the true Shapley value of the n-th agent tends to 1
as n → ∞, and the total variation approaches (K − 1)/K.
In general, we cannot bound how much ˆφ may
overestimate the true Shapley value. The problem is that ˆφ_i may
be non-zero for an agent i even though i has no influence
over the outcome of the game once attributes are multiplied
together, as illustrated by the following example.
Example 2. Consider a game with 2 players and 2
attributes; let the first agent be a member of both
attributes, and the other agent a member of the second
attribute only. For a coalition S, the first aggregator evaluates to
1 if agent 1 ∈ S, and the second aggregator evaluates to 1 if
both agents are in S. While agent 2 is not a dummy with
respect to the second attribute, it is a dummy with respect
to the product of the attributes. Agent 2 will be assigned a
value of 1/4 by the estimate.
As mentioned, a simple monotonic game is fully
described by its set of minimal winning coalitions. When
the simple aggregators are represented as such, it is possible
to check, in polynomial time, for agents that become dummies
after attributes are multiplied together. Therefore, we can
improve the heuristic estimate in this special case.
5.3.3 Empirical evaluation
Due to a lack of benchmark problems for coalitional games,
we have tested the heuristic on random instances. We
believe more meaningful results can be obtained when we have
real instances to test this heuristic on.
Our experiment is set up as follows. We control three
parameters of the experiment: the number of players (6–10),
the number of attributes (3–8), and the maximum degree
of the polynomial (2–5). For each attribute, we randomly
sample one to three minimal winning coalitions. We then
randomly generate a polynomial of the desired maximum
degree with a random number (3–12) of terms, each with
a random positive weight. We normalize each game to have
v(N) = 1. The results of the experiments are shown in
Figure 1. The y-axis of the graphs shows the total variation,
and the x-axis the number of players. Each datapoint is an
average of approximately 700 random samples.

[Figure 1: Experimental results. Panel (a): effect of maximum degree (curves for degrees 2–5, six attributes). Panel (b): effect of number of attributes (curves for 4–6 attributes, maximum degree three). Both panels plot total variation distance against the number of players (6–10).]
Figure 1(a) explores the effect of the maximum degree
and the number of players when the number of attributes is
fixed (at six). As expected, the total variation increases as
the maximum degree increases. On the other hand, there is
only a very small increase in error as the number of players
increases. The error is nowhere near the theoretical
worst-case bounds of 1/2 to 4/5 for polynomials of degrees 2 to 5.
Figure 1(b) explores the effect of the number of attributes
and the number of players when the maximum degree of the
polynomial is fixed (at three). We first note that the three
lines are quite tightly clustered together, suggesting that the
number of attributes has relatively little effect on the error
of the estimate. As the number of attributes increases, the
total variation decreases. We think this is an interesting
phenomenon: it is probably due to the precise construction
required for the worst-case bound, so as more attributes
become available, the terms of the polynomial grow more diverse,
and that diversity pushes the instance away from the worst case.
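A condensed version of this experimental setup, under the stated assumptions (one to three random minimal winning coalitions per aggregator, a random positive polynomial, normalization to v(N) = 1); the exact Shapley value is computed by brute force, so this sketch is limited to small player counts, and all names are illustrative:

```python
import random
from itertools import permutations

def brute_shapley(players, v):
    """Exact Shapley value by averaging marginal contributions over
    all orderings; feasible only for small numbers of players."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        S = set()
        for i in order:
            phi[i] += v(S | {i}) - v(S)
            S.add(i)
    return {i: p / len(orders) for i, p in phi.items()}

def random_trial(n_players=6, n_attrs=6, max_deg=3):
    players = list(range(n_players))
    # Each aggregator: one to three random minimal winning coalitions.
    aggs = []
    for _ in range(n_attrs):
        wins = [set(random.sample(players, random.randint(1, 3)))
                for _ in range(random.randint(1, 3))]
        aggs.append(lambda S, W=wins: float(any(c <= S for c in W)))
    # Random polynomial of bounded degree with positive weights.
    terms = [(random.random(),
              random.sample(range(n_attrs), random.randint(1, max_deg)))
             for _ in range(random.randint(3, 12))]
    def v(S):
        return sum(c * float(all(aggs[j](S) for j in attrs))
                   for c, attrs in terms)
    scale = v(set(players)) or 1.0          # normalize so v(N) = 1
    phi = {i: p / scale for i, p in brute_shapley(players, v).items()}
    # Heuristic estimate of Equation (3).
    attr_phi = [brute_shapley(players, a) for a in aggs]
    est = {i: 0.0 for i in players}
    for c, attrs in terms:
        for j in attrs:
            for i in players:
                est[i] += (c / len(attrs)) * attr_phi[j][i] / scale
    return 0.5 * sum(abs(phi[i] - est[i]) for i in players)

print(sum(random_trial() for _ in range(5)) / 5)  # mean total variation
```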
6. CORE-RELATED QUESTIONS
In this section, we look at the complexity of the two
computational problems related to the core: Core
Non-emptiness and Core Membership. We show that
non-emptiness of the core of the game among attributes, together
with non-emptiness of the cores of the aggregators, implies
non-emptiness of the core of the game induced by the MACG.
We also show that there appears to be no such general
relationship relating core membership in the game among
attributes, the games of attributes, and the game induced by
the MACG.
6.1 Problem Setup
There are many problems in the literature for which the
questions of Core Non-emptiness and Core Membership
are known to be hard [1]. For example, for the
edge-spanning game that Deng and Papadimitriou studied [5],
both of these questions are coNP-complete. As MACG can
model the edge-spanning game in the same amount of space,
these hardness results hold for MACG as well.
As in the case of computing the Shapley value, we attempt
to find a way around the hardness barrier by assuming the
existence of oracles, and try to build algorithms with these
oracles. First, we consider the aggregate value function.
Assumption 2. For a MACG ⟨N, M, A, a, w⟩, we assume
there are oracles that answer the questions of Core
Non-emptiness and Core Membership for the aggregate value
function w.

When the aggregate value function is a non-negative linear
function of its attributes, the core is always non-empty, and
core membership can be determined efficiently.
The concept of core for the game among attributes makes
the most sense when the aggregators are simple games. We
will further assume that these simple games are monotonic.
Assumption 3. For a MACG ⟨N, M, A, a, w⟩, we assume
all aggregators are monotonic and simple. We also assume
there are oracles that answer the questions of Core
Non-emptiness and Core Membership for the aggregators.
We consider this a mild assumption. Recall that monotonic
simple games are fully described by their sets of minimal
winning coalitions (cf. Section 3). If the aggregators are
represented as such, Core Non-emptiness and Core
Membership can be checked in polynomial time. This is due to the
following well-known result regarding simple games:

Lemma 1. A simple game ⟨N, v⟩ has a non-empty core
if and only if it has a set of veto players, say V, such that
v(S) = 0 for all S ⊉ V. Further, a payoff vector x is in the
core if and only if x_i = 0 for all i ∉ V.
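When a monotonic simple game is given by its minimal winning coalitions, Lemma 1 translates into a direct polynomial-time check: the veto set is the intersection of the minimal winning coalitions. A sketch (illustrative names; assumes at least one winning coalition and the normalization v(N) = 1):

```python
def veto_players(min_winning):
    """Intersection of all minimal winning coalitions of a monotonic
    simple game (assumes at least one winning coalition)."""
    coalitions = [set(c) for c in min_winning]
    veto = set(coalitions[0])
    for c in coalitions[1:]:
        veto &= c
    return veto

def core_nonempty(min_winning):
    """The core is non-empty iff some player belongs to every
    winning coalition, i.e., the veto set is non-empty."""
    return bool(veto_players(min_winning))

def in_core(x, min_winning):
    """x maps players to payoffs; with v(N) = 1, x is in the core iff
    it is non-negative, efficient, and supported on the veto set."""
    veto = veto_players(min_winning)
    return (abs(sum(x.values()) - 1.0) < 1e-9
            and all(p >= 0 for p in x.values())
            and all(x[i] == 0 for i in x if i not in veto))

# A simple game with minimal winning coalitions {1,2} and {1,3}.
print(veto_players([{1, 2}, {1, 3}]))                        # -> {1}
print(in_core({1: 1.0, 2: 0.0, 3: 0.0}, [{1, 2}, {1, 3}]))   # -> True
```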
6.2 Core Non-emptiness
There is a strong connection between the non-emptiness
of the cores of the games among attributes, games of the
attributes, and the game induced by a MACG.
Theorem 4. Given a game ⟨N, v⟩ represented as a MACG
⟨N, M, A, a, w⟩, if the core of the game among attributes,
⟨M, w⟩, is non-empty, and the cores of the games of
attributes are non-empty, then the core of ⟨N, v⟩ is non-empty.
Proof. Let u be an arbitrary payoff vector in the core of
the game among attributes, ⟨M, w⟩. For each attribute j,
let θ^j be an arbitrary payoff vector in the core of the game
of attribute j. By Lemma 1, each attribute j must have a
set of veto players; let this set be denoted by P^j. For each
agent i ∈ N, let y_i = Σ_j u_j θ^j_i. We claim that this vector y
is in the core of ⟨N, v⟩. Consider any coalition S ⊆ N:

v(S) = w(a(A, S)) ≤ Σ_{j : S⊇P^j} u_j    (4)

This is true because an aggregator cannot evaluate to 1
without all members of its veto set. For any attribute j, by
Lemma 1, Σ_{i∈P^j} θ^j_i = 1. Therefore,

Σ_{j : S⊇P^j} u_j = Σ_{j : S⊇P^j} u_j Σ_{i∈P^j} θ^j_i = Σ_{i∈S} Σ_{j : S⊇P^j} u_j θ^j_i ≤ Σ_{i∈S} y_i

Finally, Σ_{i∈N} y_i = Σ_j u_j Σ_{i∈P^j} θ^j_i = Σ_j u_j = v(N), using
the efficiency of u and of each θ^j, so y is a valid payoff vector.
Note that the proof is constructive, and hence if we are
given an element in the core of the game among attributes,
we can construct an element of the core of the coalitional
game. From Theorem 4, we can obtain the following
corollaries that have been previously shown in the literature.
Corollary 3. The core of the edge-spanning game of [5]
is non-empty when the edge weights are non-negative.

Proof. Let the players be the vertices, and their
attributes the edges incident on them. Each attribute has a
veto set, namely the two endpoints of the corresponding edge.
As previously observed, an aggregate value function that is a
non-negative linear function of its aggregates has a non-empty
core. Therefore, the preconditions of Theorem 4 are satisfied,
and the edge-spanning game with non-negative edge weights
has a non-empty core.
Corollary 4 (Theorem 1 of [4]). The core of a flow
game with committee control, where each edge is controlled
by a simple game with a veto set of players, is non-empty.
Proof. We treat each edge of the flow game as an
attribute, and so each attribute has a veto set of players. The
core of a flow game (without committee control) has been shown
to be non-empty in [8]. We can again invoke Theorem 4 to
show the non-emptiness of the core for flow games with
committee control.
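Since the proof of Theorem 4 is constructive, it yields a one-line procedure: given a core element u of ⟨M, w⟩ and, for each attribute j, a core payoff θ^j of its simple game (supported on its veto set), set y_i = Σ_j u_j θ^j_i. A sketch (illustrative names):

```python
def core_element_from_attributes(players, u, thetas):
    """Constructive step in the proof of Theorem 4 (illustrative
    names). u[j] is attribute j's payoff in the core of <M, w>;
    thetas[j] maps players to core payoffs of attribute j's simple
    game (non-zero only on its veto set)."""
    y = {i: 0.0 for i in players}
    for j, theta in enumerate(thetas):
        for i in players:
            y[i] += u[j] * theta.get(i, 0.0)
    return y

# Two attributes with veto sets {1} and {2, 3}; w splits 5 and 3.
players = [1, 2, 3]
print(core_element_from_attributes(players, [5.0, 3.0],
                                   [{1: 1.0}, {2: 0.5, 3: 0.5}]))
# -> {1: 5.0, 2: 1.5, 3: 1.5}
```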
However, the core of the game induced by a MACG may
be non-empty even when the core of the game among
attributes is empty, as illustrated by the following example.
Example 3. Suppose the minimal winning coalition of every
aggregator in a MACG ⟨N, M, A, a, w⟩ is N; then v(S) = 0
for all coalitions S ⊂ N. As long as v(N) ≥ 0, any
non-negative vector x that satisfies Σ_{i∈N} x_i = v(N) is in the
core of ⟨N, v⟩.
Complementary to the example above, when all the
aggregators have empty cores, the core of ⟨N, v⟩ is also empty.

Theorem 5. Given a game ⟨N, v⟩ represented as a MACG
⟨N, M, A, a, w⟩, if the cores of all aggregators are empty,
v(N) > 0, and v({i}) ≥ 0 for each i ∈ N, then the core of
⟨N, v⟩ is empty.
Proof. Suppose the core of ⟨N, v⟩ is non-empty. Let x
be a member of the core, and pick an agent i such that x_i > 0;
such an agent exists because v(N) > 0. For each attribute, since
its core is empty, by Lemma 1 there are at least two disjoint
winning coalitions. Pick, for each attribute j, a winning
coalition S^j that does not include i. Let S∗ = ∪_j S^j. Because S∗
is winning for all aggregators, v(S∗) = v(N). However,

v(N) = Σ_{j∈N} x_j = x_i + Σ_{j∈N\{i}} x_j ≥ x_i + Σ_{j∈S∗} x_j > Σ_{j∈S∗} x_j

Therefore, v(S∗) > Σ_{j∈S∗} x_j, contradicting the fact that x
is in the core of ⟨N, v⟩.
We do not have general results regarding the problem of
Core Non-emptiness when some of the aggregators have
non-empty cores while others have empty cores. We suspect
knowledge about the status of the cores of the aggregators
alone is insufficient to decide this problem.
6.3 Core Membership
Since it is possible for the game induced by the MACG
to have a non-empty core when the core of the aggregate
value function is empty (Example 3), we explore the
problem of Core Membership assuming that the cores of
both the game among attributes, ⟨M, w⟩, and the underlying
game, ⟨N, v⟩, are known to be non-empty, and ask whether there
is any relationship between their members. One reasonable
requirement is whether a payoff vector x in the core of ⟨N, v⟩
can be decomposed and re-aggregated into a payoff vector y
in the core of ⟨M, w⟩. Formally,
Definition 11. We say that a vector x ∈ R^n_{≥0} can be
decomposed and re-aggregated into a vector y ∈ R^m_{≥0} if there
exists Z ∈ R^{m×n}_{≥0} such that

y_i = Σ_{j=1}^{n} Z_{ij} for all i, and
x_j = Σ_{i=1}^{m} Z_{ij} for all j.

We may refer to Z as shares.
When there is no restriction on the entries of Z, it is
always possible to decompose a payoff vector x in the core of
⟨N, v⟩ into a payoff vector y in the core of ⟨M, w⟩. However, it
seems reasonable to require that if an agent j is irrelevant to
aggregator i, i.e., j never changes the outcome of
aggregator i, then Z_{ij} should be restricted to be 0. Unfortunately,
even this restriction is already too strong.
Example 4. Consider a MACG ⟨N, M, A, a, w⟩ with two
players and three attributes. Suppose agent 1 is irrelevant
to attribute 1, and agent 2 is irrelevant to attributes 2 and
3. For any set of attributes T ⊆ M, let w be defined as

w(T) = 0 if |T| = 0 or 1;  6 if |T| = 2;  10 if |T| = 3.

Since the core of a game with a finite number of players forms
a polytope, we can verify that the vectors (4, 4, 2),
(4, 2, 4), and (2, 4, 4) fully characterize the core C of ⟨M, w⟩.
On the other hand, the vector (10, 0) is in the core of ⟨N, v⟩.
This vector cannot be decomposed and re-aggregated into a
vector in C under the stated restriction.
Because of the apparent lack of relationship between
members of the core of ⟨N, v⟩ and those of ⟨M, w⟩, we believe an
algorithm for testing Core Membership will require more
input than just the veto sets of the aggregators and the
oracle of Core Membership for the aggregate value function.
7. CONCLUDING REMARKS
Multi-attribute coalitional games constitute a very
natural way of modeling problems of interest. Its space
requirement compares favorably with other representations
discussed in the literature, and hence it serves well as a
prototype to study computational complexity of coalitional
game theory for a variety of problems. Positive results
obtained under this representation can easily be translated to
results about other representations. Some of these corollary
results have been discussed in Sections 5 and 6.
An important direction to explore in the future is the
question of efficiency in updating a game, and how to
evaluate the solution concepts without starting from scratch. As
pointed out at the end of Section 4.3, MACG is very
naturally suited for updates. Representation results regarding
efficiency of updates, and algorithmic results regarding how
to compute the different solution concepts from updates, will
both be very interesting.
Our work on approximating the Shapley value when the
aggregate value function is a non-linear function of the
attributes suggests more work to be done there as well. Given
the natural probabilistic interpretation of the Shapley value,
we believe that a random sampling approach may have
significantly better theoretical guarantees.
8. REFERENCES
[1] J. M. Bilbao, J. R. Fernández, and J. J. López.
Complexity in cooperative game theory.
http://www.esi.us.es/~mbilbao.
[2] V. Conitzer and T. Sandholm. Complexity of
determining nonemptiness of the core. In Proc. 18th
Int. Joint Conf. on Artificial Intelligence, pages
613-618, 2003.
[3] V. Conitzer and T. Sandholm. Computing Shapley
values, manipulating value division schemes, and
checking core membership in multi-issue domains. In
Proc. 19th Nat. Conf. on Artificial Intelligence, pages
219-225, 2004.
[4] I. J. Curiel, J. J. Derks, and S. H. Tijs. On balanced
games and games with committee control. OR
Spectrum, 11:83-88, 1989.
[5] X. Deng and C. H. Papadimitriou. On the complexity
of cooperative solution concepts. Math. Oper. Res.,
19:257-266, May 1994.
[6] M. R. Garey and D. S. Johnson. Computers and
Intractability: A Guide to the Theory of
NP-Completeness. W. H. Freeman, New York, 1979.
[7] S. Ieong and Y. Shoham. Marginal contribution nets:
A compact representation scheme for coalitional
games. In Proc. 6th ACM Conf. on Electronic
Commerce, pages 193-202, 2005.
[8] E. Kalai and E. Zemel. Totally balanced games and
games of flow. Math. Oper. Res., 7:476-478, 1982.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green.
Microeconomic Theory. Oxford University Press, New
York, 1995.
[10] M. J. Osborne and A. Rubinstein. A Course in Game
Theory. The MIT Press, Cambridge, Massachusetts,
1994.
[11] L. S. Shapley. A value for n-person games. In H. W.
Kuhn and A. W. Tucker, editors, Contributions to the
Theory of Games II, number 28 in Annals of
Mathematical Studies, pages 307-317. Princeton
University Press, 1953.
[12] O. Shehory and S. Kraus. Task allocation via coalition
formation among autonomous agents. In Proc. 14th
Int. Joint Conf. on Artificial Intelligence, pages 31-45,
1995.
[13] O. Shehory and S. Kraus. A kernel-oriented model for
autonomous-agent coalition-formation in general
environments: Implementation and results. In Proc. 13th
Nat. Conf. on Artificial Intelligence, pages 134-140,
1996.
[14] J. von Neumann and O. Morgenstern. Theory of
Games and Economic Behavior. Princeton University
Press, 1953.
[15] M. Yokoo, V. Conitzer, T. Sandholm, N. Ohta, and
A. Iwasaki. Coalitional games in open anonymous
environments. In Proc. 20th Nat. Conf. on Artificial
Intelligence, pages 509-515, 2005.
Appendix
We complete the missing proofs from the main text here.
To prove Proposition 1, we need the following lemma.
Lemma 2. Marginal contribution nets, when all coalitions
are restricted to have values 0 or 1, have the same
representation power as an AND/OR circuit with negation at the
literal level (i.e., an AC^0 circuit) of depth two.

Proof. If a rule assigns a negative value in a marginal
contribution net, we can rewrite the rule as a corresponding
set of at most n rules, where n is the number of agents, each
of which has positive value, through application of
De Morgan's Law. With all rule values non-negative,
the weighted summation step of marginal
contribution nets can be viewed as an OR, and each rule as
a conjunction over literals, possibly negated. This exactly
matches an AND/OR circuit of depth two.
Proof (Proposition 1). The parity game can be
represented with a MACG using a single attribute, an aggregator
of sum, and an aggregate value function that evaluates that
sum modulo two.

As a Boolean function, parity is known to require an
exponential number of prime implicants. By Lemma 2, a prime
implicant is the exact analogue of a pattern in a rule of
marginal contribution nets. Therefore, to represent the parity
function, a marginal contribution net must have an
exponential number of rules.
Finally, as shown in [7], a marginal contribution net is at
worst a factor of O(n) less compact than a multi-issue
representation. Therefore, a multi-issue representation will also
take exponential space to represent the parity game. This
is assuming that each issue in the game is represented in
characteristic form.
Proof (Theorem 2). An instance of three-dimensional
matching is as follows [6]: given a set P ⊆ W × X × Y,
where W, X, Y are disjoint sets having the same number q
of elements, does there exist a matching P′ ⊆ P such that
|P′| = q and no two elements of P′ agree in any
coordinate? For notation, let P = {p_1, p_2, . . . , p_K}. We construct
a MACG ⟨N, M, A, a, w⟩ as follows:

• M: Let attributes 1 to q correspond to the elements of W,
attributes (q+1) to 2q to the elements of X, and attributes (2q+1)
to 3q to the elements of Y, and let there be a special
attribute (3q + 1).

• N: Let player i correspond to p_i, and let there be a
special player ℓ.

• A: Let A_{ji} = 1 if the element corresponding to
attribute j is in p_i. Thus, for the first K columns, there
are exactly three non-zero entries each. We also set
A_{(3q+1)ℓ} = 1.

• a: For each aggregator j, a^j(A(S)) = 1 if and only if
the sum of row j of A(S) equals 1.

• w: The product over all a^j.

In the game ⟨N, v⟩ that corresponds to this construction,
v(S) = 1 if and only if all attributes are covered exactly
once. Therefore, for ℓ ∉ T ⊆ N, v(T ∪ {ℓ}) − v(T) = 1 if
and only if T covers attributes 1 to 3q exactly once. Since
all such T, if any exist, must be of size q, the number of
three-dimensional matchings is given by

φ_ℓ(v) · (K + 1)! / (q!(K − q)!)
Keywords: cooperation; polynomial function min-cost flow problem; diverse economic interaction; coalitional game theory; coalitional game; linear combination; min-cost flow problem; graph; agent; unrestricted aggregation of subgame; Shapley value; multi-issue representation; superadditive game; compact representation; computational complexity; core; multi-attribute model; multi-attribute coalitional game

train_J-39 | The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution

Abstract: Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These miscues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.

1. INTRODUCTION
Electronic markets represent an application of information
systems that has generated significant new trading
opportunities while allowing for the dynamic pricing of goods. In
addition to marketplaces such as eBay, electronic marketplaces
are increasingly used for business-to-consumer auctions (e.g.
to sell surplus inventory [19]).
Many authors have written about a future in which
commerce is mediated by online, automated trading agents [10,
25, 1]. There is still little evidence of automated trading
in e-markets, though. We believe that one leading place of
resistance is in the lack of provably optimal bidding
strategies for any but the simplest of market designs. Without
this, we do not expect individual consumers, or firms, to
be confident in placing their business in the hands of an
automated agent.
One of the most common examples today of an electronic
marketplace is eBay, where the gross merchandise volume
(i.e., the sum of all successfully closed listings) during 2005
was $44B. Among items listed on eBay, many are essentially
identical. This is especially true in the Consumer
Electronics category [9], which accounted for roughly $3.5B of eBay's
gross merchandise volume in 2005. This presence of
essentially identical items can expose bidders, and sellers, to risks
because of the sequential auction problem.
For example, Alice may want an LCD monitor, and could
potentially bid in either a 1 o'clock or 3 o'clock eBay
auction. While Alice would prefer to participate in whichever
auction will have the lower winning price, she cannot
determine beforehand which auction that may be, and could
end up winning the wrong auction. This is a problem of
multiple copies.
Another problem bidders may face is the exposure
problem. As investigated by Bykowsky et al. [6], exposure
problems exist when buyers desire a bundle of goods but may
only participate in single-item auctions.1
For example, if
Alice values a video game console by itself for $200, a video
game by itself for $30, and both a console and game for $250,
Alice must determine how much of the $20 of synergy value
she might include in her bid for the console alone. Both
problems arise in eBay as a result of sequential auctions of
single items coupled with patient bidders with substitutes
or complementary valuations.
Why might the sequential auction problem be bad?
Complex games may lead to bidders employing costly strategies
and making mistakes. Potential bidders who do not wish
to bear such costs may choose not to participate in the
market, inhibiting seller revenue opportunities.
Additionally, among those bidders who do choose to participate, the
mistakes made may lead to inefficient allocations, further
limiting revenue opportunities.

1 The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.
We are interested in creating modifications to eBay-style
markets that simplify the bidder problem, leading to simple
equilibrium strategies, and preferably better efficiency and
revenue properties.
1.1 Options + Proxies: A Proposed Solution
Retail stores have developed policies to assist their
customers in addressing sequential purchasing problems.
Return policies alleviate the exposure problem by allowing
customers to return goods at the purchase price. Price
matching alleviates the multiple copies problem by allowing buyers
to receive from sellers after purchase the difference between
the price paid for a good and a lower price found elsewhere
for the same good [7, 15, 18]. Furthermore, price matching
can reduce the impact of exactly when a seller brings an
item to market, as the price will in part be set by others
selling the same item. These two retail policies provide the
basis for the scheme proposed in this paper.2
We extend the proxy bidding technology currently
employed by eBay. Our super-proxy extension will take
advantage of a new, real options-based, market infrastructure
that enables simple, yet optimal, bidding strategies. The
extensions are computationally simple, handle temporal
issues, and retain seller autonomy in deciding when to enter
the market and conduct individual auctions.
A seller sells an option for a good, which will ultimately
lead to either a sale of the good or the return of the option.
Buyers interact through a proxy agent, defining a value on
all possible bundles of goods in which they have interest
together with the latest time period in which they are
willing to wait to receive the good(s). The proxy agents use
this information to determine how much to bid for options,
and follow a dominant bidding strategy across all relevant
auctions. A proxy agent exercises options held when the
buyer"s patience has expired, choosing options that
maximize a buyer"s payoff given the reported valuation. All other
options are returned to the market and not exercised. The
options-based protocol makes truthful and immediate
revelation to a proxy a dominant strategy for buyers, whatever
the future auction dynamics.
We conduct an empirical analysis of eBay, collecting data
on over four months of bids for Dell LCD screens (model
E193FP) starting in the Summer of 2005. LCD screens are
a high-ticket item, for which we demonstrate evidence of
the sequential bidding problem. We first infer a
conservative model for the arrival time, departure time and value of
bidders on eBay for LCD screens during this period. This
model is used to simulate the performance of the
optionsbased infrastructure, in order to make direct comparisons to
the actual performance of eBay in this market.
We also extend the work of Haile and Tamer [11] to
estimate an upper bound on the distribution of value of eBay
bidders, taking into account the sequential auction
problem when making the adjustments. Using this estimate, one
can approximate how much greater a bidder's true value is
than the maximum bid they were observed to have placed
on eBay. Based on this approximation, revenue generated
in a simulation of the options-based scheme exceeds
revenue on eBay for the comparable population and sequence of
auctions by 14.8%, while the options-based scheme
demonstrates itself as being 7.5% more efficient.

2 Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices are matched, and these are not explicitly set by sellers but rather by buyers' bids.
1.2 Related Work
A number of authors [27, 13, 28, 29] have analyzed the
multiple copies problem, often times in the context of
categorizing or modeling sniping behavior for reasons other
than those first brought forward by Ockenfels and Roth
[20]. These papers perform equilibrium analysis in simpler
settings, assuming bidders can participate in at most two
auctions. Peters & Severinov [21] extend these models to
allow buyers to consider an arbitrary number of auctions, and
characterize a perfect Bayesian equilibrium. However, their
model does not allow auctions to close at distinct times and
does not consider the arrival and departure of bidders.
Previous work has developed a data-driven approach
toward developing a taxonomy of strategies employed by
bidders in practice when facing multi-unit auctions, but has
not considered the sequential bidding problem [26, 2].
Previous work has also sought to provide agents with smarter
bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard
to design artificial agents with equilibrium bidding
strategies, even for a simple simultaneous ascending price auction.
Iwasaki et al. [14] have considered the role of options in
the context of a single, monolithic, auction design to help
bidders with marginal-increasing values avoid exposure in
a multi-unit, homogeneous item auction problem. In other
contexts, options have been discussed for selling coal mine
leases [23], or as leveled commitment contracts for use in a
decentralized market place [24]. Most similar to our work,
Gopal et al. [9] use options for reducing the risks of buyers
and sellers in the sequential auction problem. However, their
work uses costly options and does not remove the sequential
bidding problem completely.
Work on online mechanisms and online auctions [17, 12,
22] considers agents that can dynamically arrive and depart
across time. We leverage a recent price-based
characterization by Hajiaghayi et al. [12] to provide a dominant strategy
equilibrium for buyers within our options-based protocol.
The special case for single-unit buyers is equivalent to the
protocol of Hajiaghayi et al., albeit with an options-based
interpretation.
Jiang and Leyton-Brown [16] use machine learning
techniques for bid identification in online auctions.
2. EBAY AND THE DELL E193FP
The most common type of auction held on eBay is a
single-item proxy auction. Auctions open at a given time and
remain open for a set period of time (usually one week).
Bidders bid for the item by giving a proxy a value ceiling. The
proxy will bid on behalf of the bidder only as much as is
necessary to maintain a winning position in the auction, up to
the ceiling received from the bidder. Bidders may
communicate with the proxy multiple times before an auction closes.
In the event that a bidder's proxy has been outbid, a bidder
may give the proxy a higher ceiling to use in the auction.
eBay's proxy auction implements an incremental version of
a Vickrey auction, with the item sold to the highest bidder
for the second-highest bid plus a small increment.
[Figure 1: Histogram of the number of LCD auctions available to each bidder and the number of LCD auctions in which a bidder participates (log-log axes: number of auctions vs. number of bidders; series: auctions available, auctions in which bid).]
The market analyzed in this paper is that of a specific
model of LCD monitor, the 19-inch Dell E193FP. This market
was selected for a variety of reasons, including:

• The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.3
• The volume transacted is fairly high, at approximately 500 units sold per month.
• The item is not usually bundled with other items.
• The item is typically sold as new, and so suitable for the price-matching of the options-based scheme.

Raw auction information was acquired via a PERL script.
The script accesses the eBay search engine,4 and returns all
auctions containing the terms 'Dell' and 'LCD' that have
closed within the past month.5 Data was stored in a text
file for post-processing. To isolate the auctions in the
domain of interest, queries were made against the titles of eBay
auctions that closed between 27 May, 2005 and 1 October, 2005.6
Figure 1 provides a general sense of how many LCD
auctions occur while a bidder is interested in pursuing a
monitor.7 8,746 bidders (86%) had more than one auction
available between when they first placed a bid on eBay and the
latest closing time of an auction in which they bid (with an
average of 78 auctions available). Figure 1 also illustrates
the number of auctions in which each bidder participates.
Only 32.3% of bidders who had more than one auction
available are observed to bid in more than one auction (bidding
in 3.6 auctions on average). A simple regression analysis
shows that bidders tend to submit maximal bids to an
auction that are $1.22 higher after spending twice as much time
in the system, as well as bids that are $0.27 higher in each
subsequent auction.

3 For reference, Dell's October 2005 mail order catalogue quotes the price of the monitor as $379 without a desktop purchase, and $240 as part of a desktop purchase upgrade.
4 http://search.ebay.com
5 The search is not case-sensitive.
6 Specifically, the query found all auctions where the title contained all of the following strings: 'Dell', 'LCD' and 'E193FP', while excluding all auctions that contained any of the following strings: 'Dimension', 'GHZ', 'desktop', 'p4' and 'GB'. The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded.
7 As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days).
Among the 508 bidders that won exactly one monitor and
participated in multiple auctions, 201 (40%) paid more than
$10 more than the closing price of another auction in which
they bid, paying on average $35 more (standard deviation
$21) than the closing price of the cheapest auction in which
they bid but did not win. Furthermore, among the 2,216
bidders that never won an item despite participating in
multiple auctions, 421 (19%) placed a losing bid in one auction
that was more than $10 higher than the closing price of
another auction in which they bid, submitting a losing bid on
average $34 more (standard deviation $23) than the
closing price of the cheapest auction in which they bid but did
not win. Although these measures do not establish that a bidder who
lost could definitively have won (because we only consider
the final winning price and not the bid of the winner to her
proxy), or that a bidder who won could have secured a better
price, they are at least indicative of some bidder mistakes.
3. MODELING THE SEQUENTIAL
AUCTION PROBLEM
While the eBay analysis was for simple bidders who
desire only a single item, let us now consider a more general
scenario where people may desire multiple goods of different
types, possessing general valuations over those goods.
Consider a world with buyers (sometimes called bidders)
B and K different types of goods G1...GK . Let T = {0, 1, ...}
denote time periods. Let L denote a bundle of goods,
represented as a vector of size K, where Lk ∈ {0, 1} denotes
the quantity of good type Gk in the bundle.8
The type of a
buyer i ∈ B is (ai, di, vi), with arrival time ai ∈ T, departure
time di ∈ T, and private valuation vi(L) ≥ 0 for each bundle
of goods L received between ai and di, and zero value
otherwise. The arrival time models the period in which a buyer
first realizes her demand and enters the market, while the
departure time models the period in which a buyer loses
interest in acquiring the good(s). In settings with general
valuations, we need an additional assumption: an upper bound
on the difference between a buyer"s arrival and departure,
denoted ΔMax. Buyers have quasi-linear utilities, so that
the utility of buyer i receiving bundle L and paying p, in
some period no later than di, is ui(L, p) = vi(L) − p. Each
seller j ∈ S brings a single item kj to the market, has no
intrinsic value and wants to maximize revenue. Seller j has
an arrival time, aj, which models the period in which she
is first interested in listing the item, while the departure
time, dj, models the latest period in which she is willing to
consider having an auction for the item close. A seller will
receive payment by the end of the reported departure of the
winning buyer.
8
We extend notation whereby a single item k of type Gk
refers to a vector L : Lk = 1.
We say an individual auction in a sequence is locally
strategyproof (LSP) if truthful bidding is a dominant strategy
for a buyer that can only bid in that auction. Consider the
following example to see that LSP is insufficient for the
existence of a dominant bidding strategy for buyers facing a
sequence of auctions.
Example 1. Alice values one ton of Sand with one ton
of Stone at $2,000. Bob holds a Vickrey auction for one ton
of Sand on Monday and a Vickrey auction for one ton of
Stone on Tuesday. Alice has no dominant bidding strategy
because she needs to know the price for Stone on Tuesday to
know her maximum willingness to pay for Sand on Monday.
Definition 1. The sequential auction problem. Given
a sequence of auctions, despite each auction being locally
strategyproof, a bidder has no dominant bidding strategy.
Consider a sequence of auctions. Generally, auctions
selling the same item will be uncertainly-ordered, because a
buyer will not know the ordering of closing prices among
the auctions. Define the interesting bundles for a buyer as
all bundles that could maximize the buyer"s profit for some
combination of auctions and bids of other buyers.9
Within
the interesting bundles, say that an item has uncertain
marginal value if the marginal value of an item depends on the
other goods held by the buyer.10
Say that an item is
oversupplied if there is more than one auction offering an item
of that type. Say two bundles are substitutes if one of those
bundles has the same value as the union of both bundles.11
Proposition 1. Given locally strategyproof single-item
auctions, the sequential auction problem exists for a bidder if
and only if either of the following two conditions is true: (1)
within the set of interesting bundles (a) there are two
bundles that are substitutes, (b) there is an item with uncertain
marginal value, or (c) there is an item that is over-supplied;
(2) a bidder faces competitors' bids that are conditioned on
the bidder's past bids.
Proof. (Sketch.) (⇐) A bidder does not have a dominant
strategy when (a) she does not know which bundle among
substitutes to pursue, (b) she faces the exposure problem,
or (c) she faces the multiple copies problem. Additionally,
a bidder does not have a dominant strategy when she does
not know how to optimally influence the bids of competitors. (⇒)
By contradiction. A bidder has a dominant strategy, namely to bid
her constant marginal value for a given item in each available
auction, when conditions (1) and (2) are both false.
For example, the following buyers all face the sequential
auction problem as a result of condition (a), (b) and (c)
respectively: a buyer who values one ton of Sand for $1,000,
or one ton of Stone for $2,000, but not both Sand and Stone;
a buyer who values one ton of Sand for $1,000, one ton of
Stone for $300, and one ton of Sand and one ton of Stone for
$1,500, and can participate in an auction for Sand before an
auction for Stone; a buyer who values one ton of Sand for
$1,000 and can participate in many auctions selling Sand.
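Conditions (a)-(c) of Proposition 1 can be checked mechanically for a buyer with single-unit demand per item type; a sketch follows, using the formal definitions from the footnotes below. It assumes the valuation is given as a dictionary over frozensets of item types (defined on every relevant sub-bundle), restricts the substitutes test to non-empty bundles, and all names are illustrative:

```python
from itertools import combinations

def powerset(items):
    s = list(items)
    return (frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r))

def faces_sequential_problem(value, interesting, auction_counts):
    """Check conditions (a)-(c) of Proposition 1.
    value:          dict from frozensets of item types to valuations
    interesting:    iterable of frozensets (the interesting bundles)
    auction_counts: auction_counts[k] = number of auctions selling k
    """
    bundles = [b for b in interesting if b]   # skip the empty bundle
    # (a) two interesting bundles that are substitutes
    for A in bundles:
        for B in bundles:
            if A != B and value[A | B] == max(value[A], value[B]):
                return True
    # (b) an item with uncertain marginal value (footnote 10)
    items = set().union(*bundles) if bundles else set()
    for k in items:
        margins = {value[Q] - value[Q - {k}]
                   for L in bundles if k in L
                   for Q in powerset(L) if k in Q}
        if len(margins) > 1:
            return True
    # (c) an over-supplied item
    return any(auction_counts.get(k, 0) > 1 for k in items)
```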
9 Assume that the empty set is an interesting bundle.
10 Formally, an item k has uncertain marginal value if |{m : m = v_i(Q) − v_i(Q − k), Q ⊆ L ∈ InterestingBundles, Q ⊇ k}| > 1.
11 Formally, two bundles A and B are substitutes if v_i(A ∪ B) = max(v_i(A), v_i(B)), where A ∪ B = L with L_k = max(A_k, B_k).
4. SUPER PROXIES AND OPTIONS
The novel solution proposed in this work to resolve the
sequential auction problem consists of two primary
components: richer proxy agents, and options with price matching.
In finance, a real option is a right to acquire a real good at
a certain price, called the exercise price. For instance, Alice
may obtain from Bob the right to buy Sand from him at
an exercise price of $1,000. An option provides the right to
purchase a good at an exercise price but not the obligation.
This flexibility allows buyers to put together a collection of
options on goods and then decide which to exercise.
Options are typically sold at a price called the option
price. However, options obtained at a non-zero option price
cannot generally support a simple, dominant bidding
strategy, as a buyer must compute the expected value of an
option to justify the cost [8]. This computation requires a
model of the future, which in our setting requires a model
of the bidding strategies and the values of other bidders.
This is the very kind of game-theoretic reasoning that we
want to avoid.
Instead, we consider costless options with an option price
of zero. This will require some care as buyers are weakly
better off with a costless option than without one, whatever
its exercise price. However, multiple bidders pursuing
options with no intention of exercising them would cause the
efficiency of an auction for options to unravel. This is the
role of the mandatory proxy agents, which intermediate
between buyers and the market. A proxy agent forces a link
between the valuation function used to acquire options and
the valuation used to exercise options. If a buyer tells her
proxy an inflated value for an item, she runs the risk of
having the proxy exercise options at a price greater than her
value.
4.1 Buyer Proxies
4.1.1 Acquiring Options
After her arrival, a buyer submits her valuation ˆvi
(perhaps untruthfully) to her proxy in some period ˆai ≥ ai,
along with a claim about her departure time ˆdi ≥ ˆai. All
transactions are intermediated via proxy agents. Each
auction is modified to sell an option on that good to the
highest bidding proxy, with an initial exercise price set to the
second-highest bid received.12
When an option in which a buyer is interested becomes
available for the first time, the proxy determines its bid
by computing the buyer's maximum marginal value for the
item, and then submits a bid in this amount. A proxy does
not bid for an item when it already holds an option. The
bid price is:

bid_i^t(k) = max_L [ˆv_i(L + k) − ˆv_i(L)]    (1)

By having a proxy compute a buyer's maximum marginal
value for an item and then bidding only that amount, a
buyer's proxy will win any auction that could possibly be of
benefit to the buyer and only lose those auctions that could
never be of value to the buyer.
12 The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item. Without a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.
Table 1: Three-buyer example with each buyer wanting a single item and one auction occurring on each of Monday and Tuesday. X_Y denotes an option with exercise price X, with bookkeeping that the proxy has prevented Y from currently possessing an option; → is the updating of exercise price and bookkeeping.

Buyer | Type            | Monday  | Tuesday
Molly | (Mon, Tues, $8) | 6_Nancy | 6_Nancy → 4_Polly
Nancy | (Mon, Tues, $6) | -       | 4_Polly
Polly | (Mon, Tues, $4) | -       | -
When a proxy wins an auction for an option, the proxy
will store in its local memory the identity (which may be
a pseudonym) of the proxy not holding an option because
of the proxy's win (i.e., the proxy that it 'bumped' from
winning, if any). This information will be used for price
matching.
4.1.2 Pricing Options
Sellers agree by joining the market to allow the proxy
representing a buyer to adjust the exercise price of an option
that it holds downwards if the proxy discovers that it could
have achieved a better price by waiting to bid in a later
auction for an option on the same good. To assist in the
implementation of the price matching scheme each proxy
tracks future auctions for an option that it has already won
and will determine who would be bidding in that auction
had the proxy delayed its entry into the market until this
later auction. The proxy will request price matching from
the seller that granted it an option if the proxy discovers that
it could have secured a lower price by waiting. To reiterate,
the proxy does not acquire more than one option for any
good. Rather, it reduces the exercise price on its already
issued option if a better deal is found.
The proxy is able to discover these deals by asking each
future auction to report the identities of the bidders in that
auction together with their bids. This needs to be enforced
by eBay, as the central authority. The highest bidder in this
later auction, across those whose identity is not stored in
the proxy's memory for the given item, is exactly the bidder
against whom the proxy would be competing had it delayed
its entry until this auction. If this high bid is lower than the
current option price held, the proxy price matches down
to this high bid price.
After price matching, one of two adjustments will be made
by the proxy for bookkeeping purposes. If the winner of
the auction is the bidder whose identity has been in the
proxy's local memory, the proxy will replace that local
information with the identity of the bidder whose bid it just
price matched, as that is now the bidder the proxy has
prevented from obtaining an option. If the auction winner's
identity is not stored in the proxy's local memory, the
memory may be cleared. In this case, the proxy will simply price
match against the bids of future auction winners on this
item until the proxy departs.
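A sketch of the per-item state and price-matching step just described (illustrative names; auction reports are assumed to arrive as a dictionary from bidder identity to bid):

```python
class ProxyItemState:
    """Per-item bookkeeping kept by a proxy that holds an option
    (a sketch of Section 4.1.2; names are illustrative)."""
    def __init__(self, exercise_price, bumped_bidder=None):
        self.exercise_price = exercise_price  # current option price
        self.bumped = bumped_bidder           # bidder we displaced

    def observe_auction(self, bids):
        """bids: {bidder identity: bid} from a later auction on this
        item, as reported to the proxy by the market."""
        # The relevant rival is the highest bidder we did not displace.
        rivals = {b: amt for b, amt in bids.items() if b != self.bumped}
        if not rivals:
            return
        rival, amount = max(rivals.items(), key=lambda kv: kv[1])
        if amount < self.exercise_price:
            self.exercise_price = amount      # price match down
            winner = max(bids.items(), key=lambda kv: kv[1])[0]
            # Bookkeeping: if the bidder in memory won this auction, we
            # now displace the rival we matched; otherwise clear memory.
            self.bumped = rival if winner == self.bumped else None
```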
Example 2 (Table 1). Molly's proxy wins the
Monday auction, submitting a bid of $8 and receiving an option
for $6. Molly's proxy adds Nancy to its local memory as
Nancy's proxy would have won had Molly's proxy not bid.
On Tuesday, only Nancy's and Polly's proxies bid (as Molly's
proxy holds an option), with Nancy's proxy winning an
option for $4 and noting that it bumped Polly's proxy. At this
time, Molly's proxy will price match its option down to $4
and replace Nancy with Polly in its local memory as per the
price match algorithm, as Polly would be holding an option
had Molly never bid.

Table 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.

Buyer | Type            | Monday  | Tuesday
Truth:
Molly | (Mon, Mon, $8)  | 6_Nancy | -
Nancy | (Mon, Tues, $6) | -       | 4_Polly
Polly | (Mon, Tues, $4) | -       | -
Misreport:
Molly | (Mon, Mon, $8)  | -       | -
Nancy | (Mon, Tues, $10)| 8_Molly | 8_Molly → 4_φ
Polly | (Mon, Tues, $4) | -       | 0_φ
Misreport & match low:
Molly | (Mon, Mon, $8)  | -       | -
Nancy | (Mon, Tues, $10)| 8       | 8 → 0
Polly | (Mon, Tues, $4) | -       | 0
4.1.3 Exercising Options
At the reported departure time the proxy chooses which
options to exercise. Therefore, a seller of an option must
wait until period ˆdw for the option to be exercised and
receive payment, where w was the winner of the option.13
For
bidder i, in period ˆdi, the proxy chooses the option(s) that
maximize the (reported) utility of the buyer:
θ∗
t = argmax
θ⊆Θ
(ˆvi(γ(θ)) − π(θ)) (2)
where Θ is the set of all options held, γ(θ) are the goods
corresponding to a set of options, and π(θ) is the sum of
exercise prices for a set of options. All other options are
returned.14
No options are exercised when no combination
of options have positive utility.
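A sketch of the exercise decision in Equation (2), enumerating subsets of held options (illustrative names; the reported valuation is assumed to be given as a dictionary over frozensets of goods):

```python
from itertools import chain, combinations

def choose_options_to_exercise(options, reported_value):
    """Equation (2): at the reported departure, exercise the subset of
    held options maximizing reported utility; everything else is
    returned. options: list of (good, exercise_price) pairs;
    reported_value: dict from frozensets of goods to values."""
    best, best_utility = frozenset(), 0.0   # exercising nothing is OK
    index_subsets = chain.from_iterable(
        combinations(range(len(options)), r)
        for r in range(1, len(options) + 1))
    for subset in index_subsets:
        goods = frozenset(options[i][0] for i in subset)
        cost = sum(options[i][1] for i in subset)
        utility = reported_value.get(goods, 0.0) - cost
        if utility > best_utility:
            best, best_utility = frozenset(subset), utility
    return best   # indices of options to exercise
```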
4.1.4 Why bookkeep and not match winning price?
One may believe that an alternative method for
implementing a price matching scheme could be to simply have
proxies match the lowest winning price they observe after
winning an option. However, as demonstrated in Table 2,
such a simple price matching scheme will not lead to a
truthful system.
13 While this appears restrictive on the seller, we believe it is not significantly different from what sellers on eBay currently endure in practice. An auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment).
14 Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses. However, the system will not allow a seller to re-auction an option until ΔMax after the option had first been issued, in order to maintain a truthful mechanism.

The first scenario in Table 2 demonstrates the outcome
if all agents were to truthfully report their types. Molly
would win the Monday auction and receive an option with
an exercise price of $6 (subsequently exercising that option
at the end of Monday), and Nancy would win the Tuesday
auction and receive an option with an exercise price of $4
(subsequently exercising that option at the end of Tuesday).
The second scenario in Table 2 demonstrates the outcome
if Nancy were to misreport her value for the good by
reporting an inflated value of $10, using the proposed bookkeeping
method. Nancy would win the Monday auction and receive
an option with an exercise price of $8. On Tuesday, Polly
would win the auction and receive an option with an exercise
price of $0. Nancy's proxy would observe that the highest
bid submitted on Tuesday among those proxies not stored
in local memory is Polly's bid of $4, and so Nancy's proxy
would price match the exercise price of its option down to
$4. Note that the exercise price Nancy's proxy has obtained
at the end of Tuesday is the same as when she truthfully
revealed her type to her proxy.
The third scenario in Table 2 demonstrates the outcome if
Nancy were to misreport her value for the good by reporting
an inflated value of $10, if the price matching scheme were
for proxies to simply match their option price to the
lowest winning price at any time while they are in the system.
Nancy would win the Monday auction and receive an option
with an exercise price of $8. On Tuesday, Polly would win
the auction and receive an option with an exercise price of
$0. Nancy's proxy would observe that the lowest price on
Tuesday was $0, and so Nancy's proxy would price match
the exercise price of its option down to $0. Note that the
exercise price Nancy's proxy has obtained at the end of
Tuesday is lower than when she truthfully revealed her type to
the proxy.
Therefore, a price matching policy of simply matching the
lowest price paid may not elicit truthful information from
buyers.
4.2 Complexity of Algorithm
An XOR-valuation of size M for buyer i is a set of M
terms, ⟨L^1, v_i^1⟩, . . . , ⟨L^M, v_i^M⟩, that maps distinct
bundles to values, where i is interested in acquiring at most one
such bundle. For any bundle S, v_i(S) = max_{L^m ⊆ S} (v_i^m).
Theorem 1. Given an XOR-valuation which possesses
M terms, there is an O(KM^2) algorithm for computing all
maximum marginal values, where K is the number of
different item types in which a buyer may be interested.
Proof. For each item type, recall Equation 1, which
defines the maximum marginal value of an item. For each
bundle L in the M-term valuation, v_i(L + k) may be found
by iterating over the M terms. Therefore, the number of
terms explored to determine the maximum marginal value
for any item is O(M^2), and so the total number of
bundle comparisons to be performed to calculate all maximum
marginal values is O(KM^2).
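A sketch of this O(KM^2) computation (illustrative names): for each item type k, it suffices to consider candidate bundles of the form L = L^m \ {k} for each XOR term L^m, since any maximizer of Equation (1) can be replaced by such a bundle without decreasing the marginal value.

```python
def max_marginal_values(xor_terms, item_types):
    """Maximum marginal values (Equation (1)) from an M-term
    XOR-valuation, in O(K * M^2) bundle comparisons.
    xor_terms: list of (frozenset bundle, value) pairs."""
    def v(S):
        # Value of any set S: the best XOR term contained in S.
        return max((val for bundle, val in xor_terms if bundle <= S),
                   default=0.0)

    bids = {}
    for k in item_types:
        best = 0.0
        for bundle, _ in xor_terms:           # candidate bundles L
            L = bundle - {k}
            best = max(best, v(L | {k}) - v(L))
        bids[k] = best
    return bids

# Example: v({m}) = 10, v({m, s}) = 25 (an XOR over two bundles)
terms = [(frozenset({'m'}), 10.0), (frozenset({'m', 's'}), 25.0)]
print(max_marginal_values(terms, ['m', 's']))
# -> {'m': 25.0, 's': 15.0}
```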
Theorem 2. The total memory required by a proxy for
implementing price matching is O(K), where K is the
number of distinct item types. The total work performed by a
proxy to conduct price matching in each auction is O(1).
Proof. By construction of the algorithm, the proxy stores
one maximum marginal value for each item for bidding, of
which there are O(K); at most one buyer's identity for each
item, of which there are O(K); and one current option
exercise price for each item, of which there are O(K). For
each auction, the proxy either submits a precomputed bid
or price matches, both of which take O(1) work.
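A minimal sketch of the O(K) proxy state this proof describes; the field and method names are our own and the update logic is only schematic, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class ProxyState:
    max_marginal_value: dict = field(default_factory=dict)  # item -> precomputed bid
    bumped_bidder: dict = field(default_factory=dict)       # item -> at most one identity
    option_price: dict = field(default_factory=dict)        # item -> current exercise price

    def on_auction(self, item, holds_option, observed_match_price):
        # O(1) work per auction: either price match an option already
        # held or submit the precomputed bid.
        if holds_option:
            p = self.option_price[item]
            self.option_price[item] = min(p, observed_match_price)
        else:
            return self.max_marginal_value.get(item, 0)  # bid to submit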
4.3 Truthful Bidding to the Proxy Agent
Proxies transform the market into a direct revelation
mechanism, where each buyer i interacts with the proxy only
once,15
and does so by declaring a bid, bi, which is
defined as an announcement of her type, (âi, d̂i, v̂i), where
the announcement may or may not be truthful. We
denote all received bids other than i's as b−i. Given bids,
b = (bi, b−i), the market determines allocations, xi(b), and
payments, pi(b) ≥ 0, to each buyer (using an online
algorithm).
15 For analysis purposes, we view the mechanism as an opaque
market so that the buyer cannot condition her bid on bids
placed by others.
A dominant strategy equilibrium for buyers requires that
vi(xi(bi, b−i)) − pi(bi, b−i) ≥ vi(xi(b̂i, b−i)) − pi(b̂i, b−i),
for all b̂i ≠ bi and all b−i.
We now establish that it is a dominant strategy for a buyer
to reveal her true valuation and true departure time to her
proxy agent immediately upon arrival to the system. The
proof builds on the price-based characterization of
strategyproof single-item online auctions in Hajiaghayi et al. [12].
Define a monotonic and value-independent price function
ps_i(ai, di, L, v−i) which can depend on the values of other
agents v−i. Price ps_i(ai, di, L, v−i) will represent the price
available to agent i for bundle L in the mechanism if it
announces arrival time ai and departure time di. The price
is independent of the value vi of agent i, but can depend on
ai, di and L as long as it satisfies a monotonicity condition.
Definition 2. Price function ps_i(ai, di, L, v−i) is
monotonic if ps_i(a'i, d'i, L', v−i) ≤ ps_i(ai, di, L, v−i) for all
a'i ≤ ai, all d'i ≥ di, all bundles L' ⊆ L and all v−i.
Lemma 1. An online combinatorial auction will be
strategyproof (with truthful reports of arrival, departure and value
a dominant strategy) when there exists a monotonic and
value-independent price function, ps_i(ai, di, L, v−i), such that
for all i, all ai, di ∈ T and all vi, agent i is allocated bundle
L* = argmax_L [vi(L) − ps_i(ai, di, L, v−i)] in period di
and makes payment ps_i(ai, di, L*, v−i).
Proof. Agent i cannot benefit from reporting a later
departure d̂i because the allocation is made in period d̂i and
the agent would have no value for this allocation. Agent
i cannot benefit from reporting a later arrival âi ≥ ai or
earlier departure d̂i ≤ di because of price monotonicity.
Finally, the agent cannot benefit from reporting some v̂i ≠ vi
because its reported valuation does not change the prices
it faces and the mechanism maximizes its utility given its
reported valuation and given the prices.
Lemma 2. At any given time, there is at most one buyer
in the system whose proxy does not hold an option for a
given item type because of buyer i's presence in the system,
and the identity of that buyer will be stored in i's proxy's
local memory at that time if such a buyer exists.
Proof. By induction. Consider the first proxy that a
buyer prevents from winning an option. Either (a) the
bumped proxy will leave the system having never won an
option, or (b) the bumped proxy will win an auction in the
future. If (a), the buyer's presence prevented exactly that
one buyer from winning an option, but will have not
prevented any other proxies from winning an option (as the
buyer's proxy will not bid on additional options upon
securing one), and will have had that bumped proxy's identity
in its local memory by definition of the algorithm. If (b),
the buyer has not prevented the bumped proxy from
winning an option after all, but rather has prevented only the
proxy that lost to the bumped proxy from winning (if any),
whose identity will now be stored in the proxy's local
memory by definition of the algorithm. For this new identity in
the buyer's proxy's local memory, either scenario (a) or (b)
will be true, ad infinitum.
Given this, we show that the options-based
infrastructure implements a price-based auction with a monotonic and
value-independent price schedule to every agent.
Theorem 3. Truthful revelation of valuation, arrival and
departure is a dominant strategy for a buyer in the
options-based market.
Proof. First, define a simple agent-independent price
function pk_i(t, v−i) as the highest bid by the proxies not
holding an option on an item of type Gk at time t, not
including the proxy representing i herself and not
including any proxies that would have already won an option had
i never entered the system (i.e., whose identity is stored
in i's proxy's local memory) (∞ if no supply at t). This
set of proxies is independent of any declaration i makes to
its proxy (as the set explicitly excludes the at most one
proxy (see Lemma 2) that i has prevented from holding
an option), and each bid submitted by a proxy within this
set is only a function of their own buyer's declared
valuation (see Equation 1). Furthermore, i cannot influence
the supply she faces as any options returned by bidders
due to a price set by i's proxy's bid will be re-auctioned
after i has departed the system. Therefore, pk_i(t, v−i) is
independent of i's declaration to its proxy. Next, define
psk_i(âi, d̂i, v−i) = min_{âi ≤ τ ≤ d̂i} [pk_i(τ, v−i)] (possibly ∞)
as the minimum price over pk_i(t, v−i), which is clearly
monotonic. By construction of price matching, this is exactly
the price obtained by a proxy on any option that it holds at
departure. Define ps_i(âi, d̂i, L, v−i) = Σ_{k=1}^{K} psk_i(âi, d̂i, v−i)·Lk,
which is monotonic in âi, d̂i and L since psk_i(âi, d̂i, v−i) is
monotonic in âi and d̂i and (weakly) greater than zero for
each k. Given the set of options held at d̂i, which may be
a subset of those items with non-infinite prices, the proxy
exercises options to maximize the reported utility. Left to
show is that all bundles that could not be obtained with
options held are priced sufficiently high as to not be
preferred. For each such bundle, either there is an item priced
at ∞ (in which case the bundle would not be desired) or
there must be an item in that bundle for which the proxy
does not hold an option that was available. In all auctions
for such an item there must have been a distinct bidder
with a bid greater than bidt_i(k), which subsequently results
in psk_i(âi, d̂i, v−i) > bidt_i(k), and so the bundle without k
would be preferred to the bundle.
Theorem 4. The super proxy, options-based scheme is
individually-rational for both buyers and sellers.
         Price     σ(Price)  Value  Surplus
eBay     $240.24   $32       $244   $4
Options  $239.66   $12       $263   $23
Table 3: Average price paid, standard deviation of
prices paid, average bidder value among winners, and
average winning bidder surplus on eBay for Dell E193FP
LCD screens as well as the simulated options-based
market using worst-case estimates of bidders' true value.
Proof. By construction, the proxy exercises the profit
maximizing set of options obtained, or no options if no set
of options derives non-negative surplus. Therefore, buyers
are guaranteed non-negative surplus by participating in the
scheme. For sellers, the price of each option is based on a
non-negative bid or zero.
5. EVALUATING THE OPTIONS / PROXY
INFRASTRUCTURE
A goal of the empirical benchmarking and a reason to
collect data from eBay is to try and build a realistic model
of buyers from which to estimate seller revenue and other
market effects under the options-based scheme.
We simulate a sequence of auctions that match the timing
of the Dell LCD auctions on eBay.16
When an auction
successfully closes on eBay, we simulate a Vickrey auction for
an option on the item sold in that period. Auctions that do
not successfully close on eBay are not simulated. We
estimate the arrival, departure and value of each bidder on eBay
from their observed behavior.17
Arrival is estimated as the
first time that a bidder interacts with the eBay proxy, while
departure is estimated as the latest closing time among eBay
auctions in which a bidder participates.
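As a sketch, the estimation just described reduces to a single pass over bid records; the record layout (bidder, bid time, bid amount, auction close time) is an assumption of ours, as is the function name.

def estimate_types(bid_records):
    # Arrival: a bidder's first interaction. Departure: latest close
    # among auctions she bid in. Value (conservative): her highest bid.
    types = {}
    for bidder, t, amount, close in bid_records:
        a, d, v = types.get(bidder, (t, close, amount))
        types[bidder] = (min(a, t), max(d, close), max(v, amount))
    return types  # bidder -> (arrival, departure, value)

records = [("b1", 0.0, 180.0, 1.0), ("b1", 2.5, 230.0, 3.0),
           ("b2", 1.0, 240.0, 1.0)]              # hypothetical bid log
print(estimate_types(records))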
We initially adopt a particularly conservative estimate for
bidder value, estimating it as the highest bid a bidder was
observed to make on eBay. Table 3 compares the
distribution of closing prices on eBay and in the simulated options
scheme. While the average revenue in both schemes is
virtually the same ($239.66 in the options scheme vs. $240.24
on eBay), the winners in the options scheme tend to value
the item won 7% more than the winners on eBay ($263 in
the options scheme vs. $244 on eBay).
5.1 Bid Identification
We extend the work of Haile and Tamer [11] to sequential
auctions to get a better view of underlying bidder values.
Rather than assume for bidders an equilibrium behavior as
in standard econometric techniques, Haile and Tamer do not
attempt to model how bidders' true values get mapped into a
bid in any given auction. Rather, in the context of repeated
16 When running the simulations, the results of the first and
final ten days of auctions are not recorded to reduce edge
effects that come from viewing a discrete time window of a
continuous process.
17 For the 100 bidders that won multiple times on eBay, we
have each one bid a constant marginal value for each
additional item in each auction until the number of options
held equals the total number of LCDs won on eBay, with
each option available for price matching independently. This
bidding strategy is not a dominant strategy (falling outside
the type space possible for buyers on which the proof of
truthfulness has been built), but is believed to be the most
appropriate first order action for simulation.
[Figure 2: plot of CDF (y-axis) against Value ($) (x-axis);
curves: Observed Max Bids and Upper Bound of True Value.]
Figure 2: CDF of maximum bids observed and upper
bound estimate of the bidding population's distribution
for maximum willingness to pay. The true population
distribution lies below the estimated upper bound.
single-item auctions with distinct bidder populations, Haile
and Tamer make only the following two assumptions when
estimating the distribution of true bidder values:
1. Bidders do not bid more than they are willing to pay.
2. Bidders do not allow an opponent to win at a price
they are willing to beat.
From the first of their two assumptions, given the bids placed
by each bidder in each auction, Haile and Tamer derive a
method for estimating an upper bound of the bidding
population's true value distribution (i.e., the bound that lies
above the true value distribution). From the second of their
two assumptions, given the winning price of each auction,
Haile and Tamer derive a method for estimating a lower
bound of the bidding population's true value distribution.
It is only the upper-bound of the distribution that we
utilize in our work.
Haile and Tamer assume that bidders only participate in
a single auction, and require independence of the bidding
population from auction to auction. Neither assumption is
valid here: the former because bidders are known to bid
in more than one auction, and the latter because the set
of bidders in an auction is in all likelihood not a true i.i.d.
sampling of the overall bidding population. In particular,
those who win auctions are less likely to bid in successive
auctions, while those who lose auctions are more likely to
remain bidders in future auctions.
In applying their methods we make the following
adjustments:
• Within a given auction, each individual bidder's true
willingness to pay is assumed weakly greater than the
maximum bid that bidder submits across all auctions
for that item (either past or future).
• When estimating the upper bound of the value
distribution, if a bidder bids in more than one auction,
randomly select one of the auctions in which the
bidder bid, and only utilize that one observation during
the estimation.18
18 In current work, we assume that removing duplicate
bidders is sufficient to make the buying populations
independent i.i.d. draws from auction to auction. If one believes
that certain portions of the population are drawn to certain
auctions, then further adjustments would be required in
order to utilize these techniques.
         Price     σ(Price)  Value  Surplus
eBay     $240.24   $32       $281   $40
Options  $275.80   $14       $302   $26
Table 4: Average price paid, standard deviation of
prices paid, average bidder value among winners, and
average winning bidder surplus on eBay for Dell E193FP
LCD screens as well as in the simulated options-based
market using an adjusted Haile and Tamer estimate of
bidders' true values being 15% higher than their
maximum observed bid.
Figure 2 provides the distribution of maximum bids placed
by bidders on eBay as well as the estimated upper bound of
the true value distribution of bidders based on the extended
Haile and Tamer method.19
As can be seen, the smallest
relative gap between the two curves meaningfully occurs near
the 80th percentile, where the upper bound is 1.17 times
the maximum bid. Therefore, a uniform scaling factor of
1.15 is adopted as a less conservative model of bidder
values.
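A sketch of two ingredients of this estimation: de-duplicating repeat bidders by keeping one randomly chosen auction per bidder (per the adjustments above), and the smoothed minimum suggested in footnote 19, which approaches the true minimum as ρ → −∞ (ρ = −1000 in the text). All function names are ours.

import math, random

def soft_min(ys, rho=-1000.0):
    # Weighted average sum_i y_i * exp(y_i*rho) / sum_j exp(y_j*rho);
    # shifted by m to keep the exponentials numerically stable.
    m = max(y * rho for y in ys)
    w = [math.exp(y * rho - m) for y in ys]
    return sum(y * wi for y, wi in zip(ys, w)) / sum(w)

def dedup_one_auction_per_bidder(observations, rng=random.Random(0)):
    # observations: list of (bidder, auction, max_bid); keep one per bidder.
    by_bidder = {}
    for obs in observations:
        by_bidder.setdefault(obs[0], []).append(obs)
    return [rng.choice(group) for group in by_bidder.values()]

print(soft_min([0.31, 0.30, 0.42]))  # close to 0.30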
We now present results from this less conservative
analysis. Table 4 shows the distribution of closing prices in
auctions on eBay and in the simulated options scheme. The
mean price in the options scheme is now significantly higher,
15% greater, than the prices on eBay ($276 in the options
scheme vs. $240 on eBay), while the standard deviation
of closing prices is lower among the options scheme auctions
($14 in the options scheme vs. $32 on eBay). Therefore, not
only is the expected revenue stream higher, but the lower
variance provides sellers a greater likelihood of realizing that
higher revenue.
The efficiency of the options scheme remains higher than
on eBay. The winners in the options scheme now have an
average estimated value 7.5% higher at $302.
In an effort to better understand this efficiency, we
formulated a mixed integer program (MIP) to determine a
simple estimate of the allocative efficiency of eBay. The
MIP computes the efficient value of the offline problem with
full hindsight on all bids and all supply.20
Using a scaling
of 1.15, the total value allocated to eBay winners is
estimated at $551,242, while the optimal value (from the MIP)
is $593,301. This suggests an allocative efficiency of 92.9%:
while the typical value of a winner on eBay is $281, an
average value of $303 was possible.21
Note the options-based
scheme comes very close to achieving this level of efficiency
[at 99.7% efficient in this estimate] even though it operates
without the benefit of hindsight.
19 The estimation of the points in the curve is a
minimization over many variables, many of which can have
small-numbers bias. Consequently, Haile and Tamer suggest using
a weighted average over all terms yi of
Σ_i yi exp(yi ρ) / Σ_j exp(yj ρ)
to approximate the minimum while reducing the small
number effects. We used ρ = −1000 and removed observations
of auctions with 17 bidders or more as they occurred very
infrequently. However, some small numbers bias still
demonstrated itself with the plateau in our upper bound estimate
around a value of $300.
20 Buyers who won more than one item on eBay are cloned
so that they appear to be multiple bidders of identical type.
21 As long as one believes that every bidder's true value is a
constant factor α away from their observed maximum bid,
the 92.9% efficiency calculation holds for any value of α. In
practice, this belief may not be reasonable. For example, if
losing bidders tend to have true values close to their observed
maximum bids while eBay winners have true values much
greater than their observed maximum bids, then downward
bias is introduced in the efficiency calculation at present.
Finally, although the typical winning bidder surplus
decreases between eBay and the options-based scheme, some
surplus redistribution would be possible because the total
market efficiency is improved.22
22 The increase in eBay winner surplus between Tables 3 and
4 is to be expected as the α scaling strictly increases the
estimated value of the eBay winners while holding the prices
at which they won constant.
6. DISCUSSION
The biggest concern with our scheme is that proxy agents
who may be interested in many different items may acquire
many more options than they finally exercise. This can lead
to efficiency loss. Notice that this is not an issue when
bidders are only interested in a single item (as in our empirical
study), or have linear-additive values on items.
To fix this, we would prefer to have proxy agents use more
caution in acquiring options and use a more adaptive bidding
strategy than that in Equation 1. For instance, if a proxy
is already holding an option with an exercise price of $3 on
some item for which it has value of $10, and it values some
substitute item at $5, the proxy could reason that in no
circumstance will it be useful to acquire an option on the
second item.
We formulate a more sophisticated bidding strategy along
these lines. Let Θt be the set of all options a proxy for bidder
i already possesses at time t. Let θt ⊆ Θt be a subset of
those options, the sum of whose exercise prices is π(θt),
and the goods corresponding to those options γ(θt).
Let Π(θt) = v̂i(γ(θt)) − π(θt) be the (reported) available
surplus associated with a set of options. Let θ*t be the set
of options currently held that would maximize the buyer's
surplus; i.e., θ*t = argmax_{θt⊆Θt} Π(θt).
Let the maximal willingness to pay for an item k represent
a price above which the agent knows it would never exercise
an option on the item given the current options held. This
can be computed as follows:
bidt_i(k) = max_L [0, min[v̂i(L + k) − Π(θ*t), v̂i(L + k) − v̂i(L)]]   (3)
where v̂i(L + k) − Π(θ*t) considers surplus already held,
v̂i(L + k) − v̂i(L) considers the marginal value of a good, and
taking the max[0, ·] considers the overall use of pursuing
the good.
However, and somewhat counterintuitively, we are not
able to implement this bidding scheme without forfeiting
truthfulness. The Π(θ*t) term in Equation 3 (i.e., the amount
of guaranteed surplus bidder i has already obtained) can be
influenced by proxy j's bid. Therefore, bidder j may have
the incentive to misrepresent her valuation to her proxy if
she believes doing so will cause i to bid differently in the
future in a manner beneficial to j. Consider the following
example where the proxy scheme is refined to bid the
maximum willingness to pay.
Example 3. Alice values either one ton of Sand or one
ton of Stone for $2,000. Bob values either one ton of Sand
or one ton of Stone for $1,500. All bidders have a patience
of 2 days. On day one, a Sand auction is held, where Alice's
proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins
an option to purchase Sand for $1,500. On day two, a Stone
auction is held, where Alice's proxy bids $1,500 [as she has
already obtained a guaranteed $500 of surplus from winning
a Sand option, and so reduces her Stone bid by this amount],
and Bob's bids $1,500. Either Alice's proxy or Bob's proxy
will win the Stone option. At the end of the second day,
Alice's proxy holds an option with an exercise price of $1,500
to obtain a good valued for $2,000, and so obtains $500 in
surplus.
Now, consider what would have happened had Alice
declared that she valued only Stone.
Example 4. Alice declares valuing only Stone for $2,000.
Bob values either one ton of Sand or one ton of Stone for
$1,500. All bidders have a patience of 2 days. On day one, a
Sand auction is held, where Bob's proxy bids $1,500. Bob's
proxy wins an option to purchase Sand for $0. On day two,
a Stone auction is held, where Alice's proxy bids $2,000, and
Bob's bids $0 [as he has already obtained a guaranteed $1,500
of surplus from winning a Sand option, and so reduces his
Stone bid by this amount]. Alice's proxy wins the Stone
option for $0. At the end of the second day, Alice's proxy
holds an option with an exercise price of $0 to obtain a good
valued for $2,000, and so obtains $2,000 in surplus.
By misrepresenting her valuation (i.e., excluding her value
of Sand), Alice was able to secure higher surplus by guiding
Bob's bid for Stone to $0. An area of immediate further
work by the authors is to develop a more sophisticated proxy
agent that can allow for bidding of maximum willingness to
pay (Equation 3) while maintaining truthfulness.
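Examples 3 and 4 can be replayed in a few lines. This sketch assumes second-price (Vickrey) auctions for options and proxies that bid declared value minus surplus already guaranteed; for simplicity a proxy stops bidding once it holds an option, which preserves Alice's outcomes in both examples (though not Bob's Stone price in Example 3). All names are ours.

def run(days, declared):
    # declared: bidder -> (set of acceptable goods, declared value).
    surplus = {b: 0 for b in declared}
    options = {}
    for good in days:
        bids = {b: max(0, v - surplus[b])
                for b, (goods, v) in declared.items()
                if good in goods and b not in options}
        if not bids:
            continue
        ranked = sorted(bids, key=bids.get, reverse=True)
        winner = ranked[0]
        price = bids[ranked[1]] if len(ranked) > 1 else 0  # second price
        options[winner] = (good, price)
        surplus[winner] = declared[winner][1] - price
    return options

truthful = {"alice": ({"sand", "stone"}, 2000), "bob": ({"sand", "stone"}, 1500)}
print(run(["sand", "stone"], truthful))  # alice: ('sand', 1500) -> $500 surplus
hiding = {"alice": ({"stone"}, 2000), "bob": ({"sand", "stone"}, 1500)}
print(run(["sand", "stone"], hiding))    # alice: ('stone', 0) -> $2,000 surplus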
An additional, practical, concern with our proxy scheme
is that we assume an available, trusted, and well understood
method to characterize goods (and presumably the quality
of goods). We envision this happening in practice by
sellers defining a classification for their item upon entering the
market, for instance via a UPC code. Just as in eBay, this
would allow an opportunity for sellers to improve revenue by
overstating the quality of their item (new vs. like new),
and raises the issue of how well a reputation scheme could
address this.
7. CONCLUSIONS
We introduced a new sales channel, consisting of an
options-based and proxied auction protocol, to address the
sequential auction problem that exists when bidders face
multiple auctions for substitute and complementary goods. Our
scheme provides bidders with a simple, dominant and
truthful bidding strategy even though the market remains open
and dynamic.
In addition to exploring more sophisticated proxies that
bid in terms of maximum willingness to pay, future work
should aim to better model seller incentives and resolve the
strategic problems facing sellers. For instance, does the
options scheme change seller incentives from what they
currently are on eBay?
Acknowledgments
We would like to thank Pai-Ling Yin. Helpful comments
have been received from William Simpson, attendees at
Harvard University's EconCS and ITM seminars, and
anonymous reviewers. Thank you to Aaron L. Roth and
Kang-Xing Jin for technical support. All errors and omissions
remain our own.
8. REFERENCES
[1] P. Anthony and N. R. Jennings. Developing a bidding
agent for multiple heterogeneous auctions. ACM
Trans. On Internet Technology, 2003.
[2] R. Bapna, P. Goes, A. Gupta, and Y. Jin. User
heterogeneity and its impact on electronic auction
market design: An empirical exploration. MIS
Quarterly, 28(1):21-43, 2004.
[3] D. Bertsimas, J. Hawkins, and G. Perakis. Optimal
bidding in on-line auctions. Working Paper, 2002.
[4] C. Boutilier, M. Goldszmidt, and B. Sabata.
Sequential auctions for the allocation of resources with
complementarities. In Proc. 16th International Joint
Conference on Artificial Intelligence (IJCAI-99),
pages 527-534, 1999.
[5] A. Byde, C. Preist, and N. R. Jennings. Decision
procedures for multiple auctions. In Proc. 1st Int.
Joint Conf. on Autonomous Agents and Multiagent
Systems (AAMAS-02), 2002.
[6] M. M. Bykowsky, R. J. Cull, and J. O. Ledyard.
Mutually destructive bidding: The FCC auction
design problem. Journal of Regulatory Economics,
17(3):205-228, 2000.
[7] Y. Chen, C. Narasimhan, and Z. J. Zhang. Consumer
heterogeneity and competitive price-matching
guarantees. Marketing Science, 20(3):300-314, 2001.
[8] A. K. Dixit and R. S. Pindyck. Investment under
Uncertainty. Princeton University Press, 1994.
[9] R. Gopal, S. Thompson, Y. A. Tung, and A. B.
Whinston. Managing risks in multiple online auctions:
An options approach. Decision Sciences,
36(3):397-425, 2005.
[10] A. Greenwald and J. O. Kephart. Shopbots and
pricebots. In Proc. 16th International Joint
Conference on Artificial Intelligence (IJCAI-99),
pages 506-511, 1999.
[11] P. A. Haile and E. Tamer. Inference with an
incomplete model of English auctions. Journal of
Political Economy, 11(1), 2003.
[12] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and
D. C. Parkes. Online auctions with re-usable goods. In
Proc. ACM Conf. on Electronic Commerce, 2005.
[13] K. Hendricks, I. Onur, and T. Wiseman. Preemption
and delay in eBay auctions. University of Texas at
Austin Working Paper, 2005.
[14] A. Iwasaki, M. Yokoo, and K. Terada. A robust open
ascending-price multi-unit auction protocol against
false-name bids. Decision Support Systems, 39:23-40,
2005.
[15] J. D. Hess and E. Gerstner. Price-matching policies: An
empirical case. Managerial and Decision Economics,
12(4):305-315, 1991.
[16] A. X. Jiang and K. Leyton-Brown. Estimating
bidders" valuation distributions in online auctions. In
Workshop on Game Theory and Decision Theory
(GTDT) at IJCAI, 2005.
[17] R. Lavi and N. Nisan. Competitive analysis of
incentive compatible on-line auctions. In Proc. 2nd
ACM Conf. on Electronic Commerce (EC-00), 2000.
[18] Y. J. Lin. Price matching in a model of equilibrium
price dispersion. Southern Economic Journal,
55(1):57-69, 1988.
[19] D. Lucking-Reiley and D. F. Spulber.
Business-to-business electronic commerce. Journal of
Economic Perspectives, 15(1):55-68, 2001.
[20] A. Ockenfels and A. Roth. Last-minute bidding and
the rules for ending second-price auctions: Evidence
from eBay and Amazon auctions on the Internet.
American Economic Review, 92(4):1093-1103, 2002.
[21] M. Peters and S. Severinov. Internet auctions with
many traders. Journal of Economic Theory
(Forthcoming), 2005.
[22] R. Porter. Mechanism design for online real-time
scheduling. In Proceedings of the 5th ACM conference
on Electronic commerce, pages 61-70. ACM Press,
2004.
[23] M. H. Rothkopf and R. Engelbrecht-Wiggans.
Innovative approaches to competitive mineral leasing.
Resources and Energy, 14:233-248, 1992.
[24] T. Sandholm and V. Lesser. Leveled commitment
contracts and strategic breach. Games and Economic
Behavior, 35:212-270, 2001.
[25] T. W. Sandholm and V. R. Lesser. Issues in
automated negotiation and electronic commerce:
Extending the Contract Net framework. In Proc. 1st
International Conference on Multi-Agent Systems
(ICMAS-95), pages 328-335, 1995.
[26] H. S. Shah, N. R. Joshi, A. Sureka, and P. R.
Wurman. Mining for bidding strategies on eBay.
Lecture Notes on Artificial Intelligence, 2003.
[27] M. Stryszowska. Late and multiple bidding in
competing second price Internet auctions.
EuroConference on Auctions and Market Design:
Theory, Evidence and Applications, 2003.
[28] J. T.-Y. Wang. Is last minute bidding bad? UCLA
Working Paper, 2003.
[29] R. Zeithammer. An equilibrium model of a dynamic
auction marketplace. Working Paper, University of
Chicago, 2005.
| business-to-consumer auction;empirical analysis;automated trading agent;electronic marketplace;computer simulation;market effect;option;trading opportunity;options-based extension;ebay;sequential auction problem;bidding strategy;strategic behavior;commoditized market;proxy-bidding system;multiple auction;proxy bid;online auction |
train_J-40 | Networks Preserving Evolutionary Equilibria and the Power of Randomization | We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. | 1. INTRODUCTION
In this paper, we introduce and examine a natural
extension of classical evolutionary game theory (EGT) to a setting
in which pairwise interactions are restricted to the edges of
an undirected graph or network. This extension generalizes
the classical setting, in which all pairs of organisms in an
infinite population are equally likely to interact. The
classical setting can be viewed as the special case in which the
underlying network is a clique.
There are many obvious reasons why one would like to
examine more general graphs, the primary one being in that
many scenarios considered in evolutionary game theory, all
interactions are in fact not possible. For example,
geographical restrictions may limit interactions to physically
proximate pairs of organisms. More generally, as evolutionary
game theory has become a plausible model not only for
biological interaction, but also economic and other kinds of
interaction in which certain dynamics are more imitative than
optimizing (see [2, 16] and chapter 4 of [19]), the network
constraints may come from similarly more general sources.
Evolutionary game theory on networks has been considered
before, but not in the generality we consider here (see
Section 4).
We generalize the definition of an evolutionary stable
strategy (ESS) to networks, and show a pair of complementary
results that exhibit the power of randomization in our
setting: subject to degree or edge density conditions, the
classical ESS of any game are preserved when the graph is
chosen randomly and the mutation set is chosen adversarially,
or when the graph is chosen adversarially and the mutation
set is chosen randomly. We examine natural strengthenings
of our generalized ESS definition, and show that similarly
strong results are not possible for them.
The work described here is part of recent efforts
examining the relationship between graph topology or structure
and properties of equilibrium outcomes. Previous works in
this line include studies of the relationship of topology to
properties of correlated equilibria in graphical games [11],
and studies of price variation in graph-theoretic market
exchange models [12]. More generally, this work contributes
to the line of graph-theoretic models for game theory
investigated in both computer science [13] and economics [10].
2. CLASSICAL EGT
The fundamental concept of evolutionary game theory is
the evolutionarily stable strategy (ESS). Intuitively, an ESS
is a strategy such that if all the members of a population
adopt it, then no mutant strategy could invade the
population [17]. To make this more precise, we describe the basic
model of evolutionary game theory, in which the notion of
an ESS resides.
The standard model of evolutionary game theory
considers an infinite population of organisms, each of which plays
a strategy in a fixed, 2-player, symmetric game. The game
is defined by a fitness function F. All pairs of members of
the infinite population are equally likely to interact with one
another. If two organisms interact, one playing strategy s
and the other playing strategy t, the s-player earns a fitness
of F(s|t) while the t-player earns a fitness of F(t|s).
In this infinite population of organisms, suppose there is a
1 − ε fraction who play strategy s, and call these organisms
incumbents; and suppose there is an ε fraction who play t,
and call these organisms mutants. Assume two organisms
are chosen uniformly at random to play each other. The
strategy s is an ESS if the expected fitness of an
organism playing s is higher than that of an organism playing t,
for all t ≠ s and all sufficiently small ε. Since an
incumbent will meet another incumbent with probability 1 − ε
and it will meet a mutant with probability ε, we can
calculate the expected fitness of an incumbent, which is simply
(1 − ε)F(s|s) + εF(s|t). Similarly, the expected fitness of
a mutant is (1 − ε)F(t|s) + εF(t|t). Thus we come to the
formal definition of an ESS [19].
Definition 2.1. A strategy s is an evolutionarily stable
strategy (ESS) for the 2-player, symmetric game given by
fitness function F, if for every strategy t ≠ s, there exists
an εt such that for all 0 < ε < εt, (1 − ε)F(s|s) + εF(s|t) >
(1 − ε)F(t|s) + εF(t|t).
A consequence of this definition is that for s to be an ESS,
it must be the case that F(s|s) ≥ F(t|s), for all strategies
t. This inequality means that s must be a best response
to itself, and thus any ESS strategy s must also be a Nash
equilibrium. In general the notion of ESS is more
restrictive than Nash equilibrium, and not all 2-player, symmetric
games have an ESS.
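For games with finitely many pure strategies, Definition 2.1 can be checked via the standard equivalent conditions: s is an ESS iff for every t ≠ s, either F(s|s) > F(t|s), or F(s|s) = F(t|s) and F(s|t) > F(t|t). A minimal sketch, restricted to pure strategies; the Hawk-Dove payoffs are a standard illustration, not taken from this paper.

def is_classical_ess(F, s, strategies):
    for t in strategies:
        if t == s:
            continue
        if F[(s, s)] < F[(t, s)]:
            return False  # s is not a best response to itself
        if F[(s, s)] == F[(t, s)] and F[(s, t)] <= F[(t, t)]:
            return False  # tie, but t does at least as well against t
    return True

# Hawk-Dove with V=2, C=3: F[(row, col)] is the payoff to the row strategy.
F = {("H", "H"): -0.5, ("H", "D"): 2, ("D", "H"): 0, ("D", "D"): 1}
print(is_classical_ess(F, "H", ["H", "D"]))  # False: no pure ESS here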
In this paper our interest is to examine what kinds of
network structure preserve the ESS strategies for those games
that do have a standard ESS. First we must of course
generalize the definition of ESS to a network setting.
3. EGT ON GRAPHS
In our setting, we will no longer assume that two
organisms are chosen uniformly at random to interact. Instead,
we assume that organisms interact only with those in their
local neighborhood, as defined by an undirected graph or
network. As in the classical setting (which can be viewed
as the special case of the complete network or clique), we
shall assume an infinite population, by which we mean we
examine limiting behavior in a family of graphs of increasing
size.
Before giving formal definitions, some comments are in
order on what to expect in moving from the classical to the
graph-theoretic setting. In the classical (complete graph)
setting, there exist many symmetries that may be broken in
moving to the the network setting, at both the group and
individual level. Indeed, such asymmetries are the primary
interest in examining a graph-theoretic generalization.
For example, at the group level, in the standard ESS
definition, one need not discuss any particular set of mutants of
population fraction ε. Since all organisms are equally likely
to interact, the survival or fate of any specific mutant set is
identical to that of any other. In the network setting, this
may not be true: some mutant sets may be better able to
survive than others due to the specific topologies of their
interactions in the network. For instance, foreshadowing some
of our analysis, if s is an ESS but F(t|t) is much larger than
F(s|s) and F(s|t), a mutant set with a great deal of
internal interaction (that is, edges between mutants) may be
able to survive, whereas one without this may suffer. At
the level of individuals, in the classical setting, the assertion
that one mutant dies implies that all mutants die, again by
symmetry. In the network setting, individual fates may
differ within a group all playing a common strategy. These
observations imply that in examining ESS on networks we
face definitional choices that were obscured in the classical
model.
If G is a graph representing the allowed pairwise
interactions between organisms (vertices), and u is a vertex of G
playing strategy su, then the fitness of u is given by
F(u) = ( Σ_{v∈Γ(u)} F(su|sv) ) / |Γ(u)|.
Here sv is the strategy being played by the neighbor v, and
Γ(u) = {v ∈ V : (u, v) ∈ E}. One can view the fitness of
u as the average fitness u would obtain if it played each of
its neighbors, or the expected fitness u would obtain if it
were assigned to play one of its neighbors chosen uniformly
at random.
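This fitness definition translates directly into code. The sketch below uses an adjacency-list dictionary and the same Hawk-Dove payoffs as the earlier snippet, purely for illustration.

def fitness(u, graph, strategy, F):
    # Average payoff u obtains against its neighbors Gamma(u).
    nbrs = graph[u]
    return sum(F[(strategy[u], strategy[v])] for v in nbrs) / len(nbrs)

graph = {0: [1, 2], 1: [0], 2: [0]}
strategy = {0: "H", 1: "D", 2: "H"}
F = {("H", "H"): -0.5, ("H", "D"): 2, ("D", "H"): 0, ("D", "D"): 1}
print(fitness(0, graph, strategy, F))  # (F(H|D) + F(H|H)) / 2 = 0.75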
Classical evolutionary game theory examines an infinite,
symmetric population. Graphs or networks are inherently
finite objects, and we are specifically interested in their
asymmetries, as discussed above. Thus all of our definitions shall
revolve around an infinite family G = {Gn}∞n=0 of finite
graphs Gn over n vertices, but we shall examine asymptotic
(large n) properties of such families.
We first give a definition for a family of mutant vertex
sets in such an infinite graph family to contract.
Definition 3.1. Let G = {Gn}∞n=0 be an infinite family
of graphs, where Gn has n vertices. Let M = {Mn}∞n=0
be any family of subsets of vertices of the Gn such that
|Mn| ≥ εn for some constant ε > 0. Suppose all the vertices
of Mn play a common (mutant) strategy t, and suppose the
remaining vertices in Gn play a common (incumbent)
strategy s. We say that Mn contracts if for sufficiently large n,
for all but o(n) of the j ∈ Mn, j has an incumbent neighbor
i such that F(j) < F(i).
A reasonable alternative would be to ask that the
condition above hold for all mutants rather than all but o(n).
Note also that we only require that a mutant have one
incumbent neighbor of higher fitness in order to die; one might
considering requiring more. In Sections 6.1 and 6.2 we
consider these stronger conditions and demonstrate that our
results can no longer hold.
In order to properly define an ESS for an infinite family of
finite graphs in a way that recovers the classical definition
asymptotically in the case of the family of complete graphs,
we first must give a definition that restricts attention to
families of mutant vertices that are smaller than some invasion
threshold n, yet remain some constant fraction of the
population. This prevents invasions that survive merely by
constituting a vanishing fraction of the population.
Definition 3.2. Let ε > 0, and let G = {Gn}∞n=0 be
an infinite family of graphs, where Gn has n vertices. Let
M = {Mn}∞n=0 be any family of (mutant) vertices in Gn.
We say that M is ε-linear if there exists an ε', ε > ε' > 0,
such that for all sufficiently large n, εn > |Mn| > ε'n.
We can now give our definition for a strategy to be
evolutionarily stable when employed by organisms interacting
with their neighborhood in a graph.
Definition 3.3. Let G = {Gn}∞n=0 be an infinite family
of graphs, where Gn has n vertices. Let F be any 2-player,
symmetric game for which s is a strategy. We say that s is
an ESS with respect to F and G if for all mutant strategies
t ≠ s, there exists an εt > 0 such that for any εt-linear
family of mutant vertices M = {Mn}∞n=0 all playing t, for n
sufficiently large, Mn contracts.
Thus, to violate the ESS property for G, one must witness
a family of mutations M in which each Mn is an arbitrarily
small but nonzero constant fraction of the population of Gn,
but does not contract (i.e. every mutant set has a subset of
linear size that survives all of its incumbent interactions).
In Section A.1 we show that the definition given coincides
with the classical one in the case where G is the family of
complete graphs, in the limit of large n. We note that even
in the classical model, small sets of mutants were allowed to
have greater fitness than the incumbents, as long as the size
of the set was o(n) [18].
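For a finite graph, the contraction condition of Definition 3.1 can be checked directly; since the o(n) slack is asymptotic, a concrete survival threshold must be chosen by the experimenter. A sketch reusing the fitness computation from the earlier snippet:

def fitness(u, graph, strategy, F):
    return sum(F[(strategy[u], strategy[v])] for v in graph[u]) / len(graph[u])

def surviving_mutants(graph, strategy, F, mutants):
    # Mutants with no incumbent neighbor of strictly higher fitness;
    # Mn contracts iff this list has size o(n) as n grows.
    alive = []
    for j in mutants:
        fj = fitness(j, graph, strategy, F)
        if not any(fitness(i, graph, strategy, F) > fj
                   for i in graph[j] if i not in mutants):
            alive.append(j)
    return alive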
In the definition above there are three parameters: the
game F, the graph family G and the mutation family M.
Our main results will hold for any 2-player, symmetric game
F. We will also study two rather general settings for G and
M: that in which G is a family of random graphs and M is
arbitrary, and that in which G is nearly arbitrary and M is
randomly chosen. In both cases, we will see that, subject to
conditions on degree or edge density (essentially forcing
connectivity of G but not much more), for any 2-player,
symmetric game, the ESS of the classical settings, and only those
strategies, are always preserved. Thus a common theme of
these results is the power of randomization: as long as either
the network itself is chosen randomly, or the mutation set is
chosen randomly, classical ESS are preserved.
4. RELATED WORK
There has been previous work that analyzes which
strategies are resilient to mutant invasions with respect to various
types of graphs. What sets our work apart is that the model
we consider encompasses a significantly more general class of
games and graph topologies. We will briefly survey this
literature and point out the differences in the previous models
and ours.
In [8], [3], and [4], the authors consider specific families
of graphs, such as cycles and lattices, where players play
specific games, such as 2 × 2-games or k × k-coordination
games. In these papers the authors specify a simple, local
dynamic for players to improve their payoffs by changing
strategies, and analyze what type of strategies will grow to
dominate the population. The model we propose is more
general than both of these, as it encompasses a larger class
of graphs as well as a richer set of games.
Also related to our work is that of [14], where the authors
propose two models. The first assumes organisms interact
according to a weighted, undirected graph. However, the
fitness of each organism is simply assigned and does not
depend on the actions of each organism's neighborhood. The
second model has organisms arranged around a directed
cycle, where neighbors play a 2 × 2-game. With probability
proportional to its fitness, an organism is chosen to
reproduce by placing a replica of itself in its neighbors position,
thereby killing the neighbor. We consider more general
games than the first model and more general graphs than
the second.
Finally, the works most closely related to ours are [7], [15],
and [6]. The authors consider 2-action, coordination games
played by players in a general undirected graph. In these
three works, the authors specify a dynamic for a strategy to
reproduce, and analyze properties of the graph that allow a
strategy to overrun the population. Here again, one can see
that our model is more general than these, as it allows for
organisms to play any 2-player, symmetric game.
5. NETWORKS PRESERVING ESS
We now proceed to state and prove two complementary
results in the network ESS model defined in Section 3. First,
we consider a setting where the graphs are generated via the
Gn,p model of Erdős and Rényi [5]. In this model, every
pair of vertices are joined by an edge independently and
with probability p (where p may depend on n). The mutant
set, however, will be constructed adversarially (subject to
the linear size constraint given by Definition 3.3). For these
settings, we show that for any 2-player, symmetric game, s
is a classical ESS of that game, if and only if s is an ESS
for {Gn,p}∞n=0, where p = Ω(1/n^c) and 0 ≤ c < 1, and any
mutant family {Mn}∞n=0, where each Mn has linear size. We
note that under these settings, if we let c = 1 − γ for small
γ > 0, the expected number of edges in Gn is n^{1+γ} or larger
- that is, just superlinear in the number of vertices and
potentially far smaller than O(n^2). It is easy to convince
oneself that once the graphs have only a linear number of
edges, we are flirting with disconnectedness, and there may
simply be large mutant sets that can survive in isolation due
to the lack of any incumbent interactions in certain games.
Thus in some sense we examine the minimum plausible edge
density.
The second result is a kind of dual to the first, considering
a setting where the graphs are chosen arbitrarily (subject to
conditions) but the mutant sets are chosen randomly. It
states that for any 2-player, symmetric game, s is a
classical ESS for that game, if and only if s is an ESS for any
{Gn = (Vn, En)}∞n=0 in which for all v ∈ Vn, deg(v) = Ω(n^γ)
(for any constant γ > 0), and a family of mutant sets
{Mn}∞n=0 that is chosen randomly (that is, in which each
organism is labeled a mutant with constant probability ε > 0).
Thus, in this setting we again find that classical ESS are
preserved subject to edge density restrictions. Since the degree
assumption is somewhat strong, we also prove another result
which only assumes that |En| ≥ n^{1+γ}, and shows that there
must exist at least 1 mutant with an incumbent neighbor of
higher fitness (as opposed to showing that all but o(n)
mutants have an incumbent neighbor of higher fitness). As will
be discussed, this rules out stationary mutant invasions.
5.1 Random Graphs, Adversarial Mutations
Now we state and prove a theorem which shows that if s
is a classical ESS, then s will be an ESS for random graphs,
where a linear sized set of mutants is chosen by an adversary.
Theorem 5.1. Let F be any 2-player, symmetric game,
and suppose s is a classical ESS of F. Let the infinite
graph family {Gn}∞n=0 be drawn according to Gn,p, where
p = Ω(1/n^c) and 0 ≤ c < 1. Then with probability 1, s is
an ESS.
The main idea of the proof is to divide mutants into 2
categories, those with normal fitness and those with
abnormal fitness. First, we show all but o(n) of the
population (incumbent or mutant) have an incumbent neighbor of
normal fitness. This will imply that all but o(n) of the
mutants of normal fitness have an incumbent neighbor of higher
fitness. The vehicle for proving this is Theorem 2.15 of [5],
which gives an upper bound on the number of vertices not
connected to a sufficiently large set. This theorem assumes
that the size of this large set is known with equality, which
necessitates the union bound argument below. Secondly, we
show that there can be at most o(n) mutants with abnormal
fitness. Since there are so few of them, even if none of them
have an incumbent neighbor of higher fitness, s will still be
an ESS with respect to F and G.
Proof. (Sketch) Let t ≠ s be the mutant strategy. Since
s is a classical ESS, there exists an εt such that (1 − ε)F(s|s) +
εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all 0 < ε < εt. Let M
be any mutant family that is εt-linear. Thus for any fixed
value of n that is sufficiently large, there exists an ε such
that |Mn| = εn and εt > ε > 0. Also, let In = Vn \ Mn and
let I' ⊆ In be the set of incumbents that have fitness in the
range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)] for some constant τ,
0 < τ < 1/6. Lemma 5.1 below shows (1 − ε)n ≥ |I'| ≥
(1 − ε)n − 24 log n / (τ^2 p). Finally, let
TI' = {x ∈ V \ I' : Γ(x) ∩ I' = ∅}.
(For the sake of clarity we suppress the subscript n on the
sets I' and T.) The union bound gives us
Pr(|TI'| ≥ δn) ≤ Σ_{i=(1−ε)n − 24 log n/(τ^2 p)}^{(1−ε)n} Pr(|TI'| ≥ δn and |I'| = i)   (1)
Letting δ = n^{−γ} for some γ > 0 gives δn = o(n). We will
apply Theorem 2.15 of [5] to the summand on the right hand
side of Equation 1. If we let γ = (1−c)/2, and combine this
with the fact that 0 ≤ c < 1, all of the requirements of this
theorem will be satisfied (details omitted). Now when we
apply this theorem to Equation 1, we get
Pr(|TI'| ≥ δn) ≤ Σ_{i=(1−ε)n − 24 log n/(τ^2 p)}^{(1−ε)n} exp(−(1/6)Cδn) = o(1)   (2)
This is because Equation 2 has only 24 log n / (τ^2 p) terms, and
Theorem 2.15 of [5] gives us that C ≥ (1 − ε)n^{1−c} − 24 log n / τ^2.
Thus we have shown, with probability tending to 1 as n →
∞, at most o(n) individuals are not attached to an
incumbent which has fitness in the range (1 ± τ)[(1 − ε)F(s|s) +
εF(s|t)]. This implies that the number of mutants of
approximately normal fitness, not attached to an incumbent
of approximately normal fitness, is also o(n).
Now those mutants of approximately normal fitness that
are attached to an incumbent of approximately normal
fitness have fitness in the range (1±τ)[(1−ε)F(t|s)+εF(t|t)].
The incumbents that they are attached to have fitness in the
range (1±τ)[(1−ε)F(s|s)+εF(s|t)]. Since s is an ESS of F,
we know (1−ε)F(s|s)+εF(s|t) > (1−ε)F(t|s)+εF(t|t), thus
if we choose τ small enough, we can ensure that all but o(n)
mutants of normal fitness have a neighboring incumbent of
higher fitness.
Finally by Lemma 5.1, we know there are at most o(n)
mutants of abnormal fitness. So even if all of them are more fit
than their respective incumbent neighbors, we have shown
all but o(n) of the mutants have an incumbent neighbor of
higher fitness.
We now state and prove the lemma used in the proof
above.
Lemma 5.1. For almost every graph Gn,p with (1 − ε)n
incumbents, all but 24 log n / (δ^2 p) incumbents have fitness in the
range (1±δ)[(1−ε)F(s|s)+εF(s|t)], where p = Ω(1/n^c) and
ε, δ and c are constants satisfying 0 < ε < 1, 0 < δ < 1/6,
0 ≤ c < 1. Similarly, under the same assumptions, all
but 24 log n / (δ^2 p) mutants have fitness in the range
(1 ± δ)[(1 − ε)F(t|s) + εF(t|t)].
Proof. We define the mutant degree of a vertex to be
the number of mutant neighbors of that vertex, and
incumbent degree analogously. Observe that the only way for
an incumbent to have fitness far from its expected value of
(1−ε)F(s|s)+εF(s|t) is if it has a fraction of mutant
neighbors either much higher or much lower than ε. Theorem
2.14 of [5] gives us a bound on the number of such
incumbents. It states that the number of incumbents with mutant
degree outside the range (1 ± δ)p|M| is at most 12 log n / (δ^2 p). By
the same theorem, the number of incumbents with
incumbent degree outside the range (1 ± δ)p|I| is at most 12 log n / (δ^2 p).
From the linearity of fitness as a function of the fraction
of mutant or incumbent neighbors, one can show that for
those incumbents with mutant and incumbent degree in the
expected range, their fitness is within a constant factor of
(1 − ε)F(s|s) + εF(s|t), where that constant goes to 1 as n
tends to infinity and δ tends to 0. The proof for the mutant
case is analogous.
We note that if in the statement of Theorem 5.1 we let
c = 0, then p = 1. This, in turn, makes G = {Kn}∞n=0,
where Kn is a clique of n vertices. Then for any Kn all
of the incumbents will have identical fitness and all of the
mutants will have identical fitness. Furthermore, since s was
an ESS for G, the incumbent fitness will be higher than the
mutant fitness. Finally, one can show that as n → ∞, the
incumbent fitness converges to (1 − ε)F(s|s) + εF(s|t), and
the mutant fitness converges to (1 − ε)F(t|s) + εF(t|t). In
other words, s must be a classical ESS, providing a converse
to Theorem 5.1. We rigorously present this argument in
Section A.1.
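The statements of this section can also be explored empirically. The sketch below draws Gn,p, lets the adversary cluster the mutants as a breadth-first ball (a plausibly hard case, since it maximizes mutant-mutant edges, the situation flagged in Section 3 when F(t|t) is large relative to F(s|t)), and counts mutants with no fitter incumbent neighbor. The payoff numbers are hypothetical; s is a classical ESS of this game, while F(t|t) > F(s|t).

import random

F = {("s", "s"): 3, ("s", "t"): 1, ("t", "s"): 0, ("t", "t"): 2}

def fitness(u, g, strat):
    return sum(F[(strat[u], strat[v])] for v in g[u]) / len(g[u])

def experiment(n=400, p=0.1, eps=0.05, seed=1):
    rng = random.Random(seed)
    g = {u: set() for u in range(n)}          # draw G_{n,p}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                g[u].add(v); g[v].add(u)
    mutants, frontier = set(), [0]            # adversarial BFS ball of mutants
    while frontier and len(mutants) < eps * n:
        u = frontier.pop(0)
        if u not in mutants:
            mutants.add(u); frontier.extend(sorted(g[u]))
    strat = {u: ("t" if u in mutants else "s") for u in range(n)}
    alive = [j for j in mutants
             if not any(fitness(i, g, strat) > fitness(j, g, strat)
                        for i in g[j] - mutants)]
    return len(mutants), len(alive)           # contraction: few survivors

print(experiment())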
5.2 Adversarial Graphs, Random Mutations
We now move on to our second main result. Here we show
that if the graph family, rather than being chosen randomly,
is arbitrary subject to a minimum degree requirement, and
the mutation sets are randomly chosen, classical ESS are
again preserved. A modified notion of ESS allows us to
considerably weaken the degree requirement to a minimum
edge density requirement.
Theorem 5.2. Let G = {Gn = (Vn, En)}∞n=0 be an
infinite family of graphs in which for all v ∈ Vn, deg(v) = Ω(n^γ)
(for any constant γ > 0). Let F be any 2-player, symmetric
game, and suppose s is a classical ESS of F. Let t be any
mutant strategy, and let the mutant family M = {Mn}∞n=0 be
chosen randomly by labeling each vertex a mutant with
constant probability ε, where εt > ε > 0. Then with probability
1, s is an ESS with respect to F, G and M.
Proof. Let t ≠ s be the mutant strategy and let X be
the event that every incumbent has fitness within the range
(1 ± τ)[(1 − ε)F(s|s) + εF(s|t)], for some constant τ > 0 to
be specified later. Similarly, let Y be the event that every
mutant has fitness within the range (1 ± τ)[(1 − ε)F(t|s) +
εF(t|t)]. Since Pr(X ∩ Y ) = 1 − Pr(¬X ∪ ¬Y ), we proceed
by showing Pr(¬X ∪ ¬Y ) = o(1).
¬X is the event that there exists an incumbent with fitness
outside the range (1±τ)[(1−ε)F(s|s)+εF(s|t)]. If degM(v)
denotes the number of mutant neighbors of v, and similarly
degI(v) denotes the number of incumbent neighbors of v,
then an incumbent i has fitness
(degI(i)/deg(i)) F(s|s) + (degM(i)/deg(i)) F(s|t).
Since F(s|s) and F(s|t) are fixed quantities, the only
variation in an incumbent's fitness can come from variation in the
terms degI(i)/deg(i) and degM(i)/deg(i). One can use the Chernoff bound
followed by the union bound to show that for any incumbent i,
Pr(F(i) ∉ (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]) < 4 exp(−ε deg(i) τ^2 / 3).
Next one can use the union bound again to bound the
probability of the event ¬X,
Pr(¬X) ≤ 4n exp(−ε di τ^2 / 3)
where di = min_{i∈V\M} deg(i), 0 < ε ≤ 1/2. An analogous
argument can be made to show Pr(¬Y ) < 4n exp(−ε dj τ^2 / 3),
where dj = min_{j∈M} deg(j) and 0 < ε ≤ 1/2. Thus, by the
union bound,
Pr(¬X ∪ ¬Y ) < 8n exp(−ε d τ^2 / 3)
where d = min_{v∈V} deg(v), 0 < ε ≤ 1/2. Since deg(v) =
Ω(n^γ) for all v ∈ V, and ε, τ and γ are all constants greater
than 0,
lim_{n→∞} 8n / exp(ε d τ^2 / 3) = 0,
so Pr(¬X ∪ ¬Y ) = o(1). Thus, we can choose τ small enough
such that (1 + τ)[(1 − ε)F(t|s) + εF(t|t)] < (1 − τ)[(1 −
ε)F(s|s) + εF(s|t)], and then choose n large enough such that
with probability 1 − o(1), every incumbent will have fitness
in the range (1±τ)[(1−ε)F(s|s)+εF(s|t)], and every mutant
will have fitness in the range (1 ± τ)[(1 − ε)F(t|s) + εF(t|t)].
So with high probability, every incumbent will have a higher
fitness than every mutant.
By arguments similar to those following the proof of
Theorem 5.1, if we let G = {Kn}∞n=0, each incumbent will have
the same fitness and each mutant will have the same fitness.
Furthermore, since s is an ESS for G, the incumbent fitness
must be higher than the mutant fitness. Here again, one
has to show that as n → ∞, the incumbent fitness
converges to (1 − ε)F(s|s) + εF(s|t), and the mutant fitness
converges to (1 − ε)F(t|s) + εF(t|t). Observe that the exact
fraction of mutants in Vn is now a random variable. So to prove
this convergence we use an argument similar to one that is
used to prove that a sequence of random variables that
converges in probability also converges in distribution (details
omitted). This in turn establishes that s must be a classical
ESS, and we thus obtain a converse to Theorem 5.2. This
argument is made rigorous in Section A.2.
The assumption on the degree of each vertex of
Theorem 5.2 is rather strong. The following theorem relaxes
this requirement and only necessitates that every graph have
n^{1+γ} edges, for some constant γ > 0, in which case it shows
there will alway be at least 1 mutant with an incumbent
neighbor of higher fitness. A strategy that is an ESS in
this weakened sense will essentially rule out stable, static
sets of mutant invasions, but not more complex invasions.
An example of more complex invasions are mutant sets that
survive, but only by perpetually migrating through the
graph under some natural evolutionary dynamics, akin to
gliders in the well-known Game of Life [1].
Theorem 5.3. Let F be any game, and let s be a classical
ESS of F, and let t ≠ s be a mutant strategy. For any graph
family G = {Gn = (Vn, En)}∞n=0 in which |En| ≥ n^{1+γ} (for
any constant γ > 0), and any mutant family M = {Mn}∞n=0
which is determined by labeling each vertex a mutant with
probability ε, where εt > ε > 0, the probability that there
exists a mutant with an incumbent neighbor of higher fitness
approaches 1 as n → ∞.
Proof. (Sketch) The main idea behind the proof is to
show that with high probability, over only the choice of
mutants, there will be an incumbent-mutant edge in which both
vertices have high degree. If their degree is high enough, we
can show that close to an ε fraction of their neighbors are
mutants, and thus their fitnesses are very close to what we
expect them to be in the classical case. Since s is an ESS,
the fitness of the incumbent will be higher than the mutant.
We call an edge (i, j) ∈ En a g(n)-barbell if deg(i) ≥ g(n)
and deg(j) ≥ g(n). Suppose Gn has at most h(n) edges that
are g(n)-barbells. This means there are at least |En| − h(n)
edges in which at least one vertex has degree at most g(n).
We call these vertices light vertices. Let ℓ(n) be the number
of light vertices in Gn. Observe that |En| − h(n) ≤ ℓ(n)g(n).
This is because each light vertex is incident on at most g(n)
edges. This gives us that
|En| ≤ h(n) + ℓ(n)g(n) ≤ h(n) + n g(n).
So if we choose h(n) and g(n) such that h(n) + n g(n) =
o(n^{1+γ}), then |En| = o(n^{1+γ}). This contradicts the
assumption that |En| = Ω(n^{1+γ}). Thus, subject to the above
constraint on h(n) and g(n), Gn must contain at least h(n)
edges that are g(n)-barbells.
Now let Hn denote the subgraph induced by the barbell
edges of Gn. Note that regardless of the structure of Gn,
there is no reason that Hn should be connected. Thus, let
m be the number of connected components of Hn, and let
c1, c2, . . . , cm be the number of vertices in each of these
connected components. Note that since Hn is an edge-induced
subgraph we have ck ≥ 2 for all components k. Let us choose
the mutant set by first flipping the vertices in Hn only. We
now show that the probability, with respect to the random
mutant set, that none of the components of Hn have an
incumbent-mutant edge is exponentially small in n. Let An
be the event that every component of Hn contains only
mutants or only incumbents. Then algebraic manipulations can
establish that
Pr[An] = Π_{k=1}^{m} (ε^{ck} + (1 − ε)^{ck}) ≤ (1 − ε)^{(1 − βε^2/2) Σ_{k=1}^{m} ck}
where β is a constant. Thus for sufficiently small ε the bound
decreases exponentially with Σ_{k=1}^{m} ck. Furthermore, since
Σ_{k=1}^{m} (ck choose 2) ≥ h(n) (with equality achieved by making each
component a clique), one can show that Σ_{k=1}^{m} ck ≥ √h(n).
Thus, as long as h(n) → ∞ with n, the probability that all
components are uniformly labeled will go to 0.
Now assuming that there exists a non-uniformly labeled
component, by construction that component contains an
edge (i, j) where i is an incumbent and j is a mutant, that
is a g(n)-barbell. We also assume that the h(n) vertices
already labeled have been done so arbitrarily, but that the
remaining g(n) − h(n) vertices neighboring i and j are
labeled mutants independently with probability . Then via
a standard Chernoff bound argument, one can show that
with high probability, the fraction of mutants neighboring
i and the fraction of mutants neighboring j is in the range
(1 ± τ)ε(g(n) − h(n))/g(n). Similarly, one can show that the
fraction of incumbents neighboring i and the fraction of
incumbents neighboring j is in the range 1 − (1 ± τ)ε(g(n) − h(n))/g(n).
Since $s$ is an ESS, there exists a $\zeta > 0$ such that $(1-\epsilon)F(s|s) + \epsilon F(s|t) = (1-\epsilon)F(t|s) + \epsilon F(t|t) + \zeta$. If we choose $g(n) = n^{\gamma}$, and $h(n) = o(g(n))$, we can choose $n$ large enough and $\tau$ small enough to force $F(i) > F(j)$, as desired.
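The statement of Theorem 5.3 can also be checked empirically. The following Monte Carlo sketch is our own illustration: the payoff table is hypothetical, chosen so that $s$ is a strict Nash equilibrium (hence a classical ESS), and the graph is a dense $G_{n,p}$ so that the edge-density condition holds for fixed $p$:

```python
import random

# Hypothetical 2-player symmetric game in which s is a strict Nash
# equilibrium (so a classical ESS): F(s|s)=3, F(s|t)=1, F(t|s)=2, F(t|t)=0.
F = {('s', 's'): 3.0, ('s', 't'): 1.0, ('t', 's'): 2.0, ('t', 't'): 0.0}

def mutant_with_fitter_incumbent_neighbor(n, p, eps, rng):
    # Dense Erdos-Renyi graph: |E_n| is about p * n^2 / 2, which satisfies
    # the theorem's edge-density condition for any fixed p.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    label = ['t' if rng.random() < eps else 's' for _ in range(n)]
    # Fitness of a vertex: average payoff against its neighbors.
    fit = [sum(F[label[i], label[j]] for j in adj[i]) / max(1, len(adj[i]))
           for i in range(n)]
    return any(label[i] == 't' and
               any(label[j] == 's' and fit[j] > fit[i] for j in adj[i])
               for i in range(n))

rng = random.Random(0)
hits = sum(mutant_with_fitter_incumbent_neighbor(200, 0.2, 0.1, rng)
           for _ in range(20))
print(f"{hits} of 20 trials had a mutant with a fitter incumbent neighbor")
```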
6. LIMITATIONS OF STRONGER MODELS
In this section we show that if one tried to strengthen
the model described in Section 3 in two natural ways, one
would not be able to prove results as strong as Theorems 5.1 and 5.3, which hold for every 2-player, symmetric game.
6.1 Stronger Contraction for the Mutant Set
In Section 3 we alluded to the fact that we made certain
design decisions in arriving at Definitions 3.1, 3.2 and 3.3.
One such decision was to require that all but o(n) mutants
have incumbent neighbors of higher fitness. Instead, we
could have required that all mutants have an incumbent
neighbor of higher fitness. The two theorems in this
subsection show that if one were to strengthen our notion of
contraction for the mutant set, given by Definition 3.1, in
this way, it would be impossible to prove theorems analogous
to Theorems 5.1 and 5.3.
Recall that Definition 3.1 gave the notion of contraction
for a linear sized subset of mutants. In what follows, we
will say an edge (i, j) contracts if i is an incumbent, j is a
mutant, and F(i) > F(j). Also, recall that Theorem 5.1
stated that if s is a classical ESS, then it is an ESS for
random graphs with adversarial mutations. Next, we prove
that if we instead required every incumbent-mutant edge to
contract, this need not be the case.
Theorem 6.1. Let $F$ be a 2-player, symmetric game that has a classical ESS $s$ for which there exists a mutant strategy $t \neq s$ with $F(t|t) > F(s|s)$ and $F(t|t) > F(s|t)$. Let $\mathcal{G} = \{G_n\}_{n=0}^{\infty}$ be an infinite family of random graphs drawn according to $G_{n,p}$, where $p = \Omega(1/n^c)$ for any constant $0 \leq c < 1$. Then with probability approaching 1 as $n \to \infty$, there exists a mutant family $\mathcal{M} = \{M_n\}_{n=0}^{\infty}$, where $\epsilon_t n > |M_n| > \epsilon n$ and $\epsilon_t, \epsilon > 0$, in which there is an edge that does not contract.
Proof. (Sketch) With probability approaching 1 as $n \to \infty$, there exists a vertex $j$ where $\deg(j)$ is arbitrarily close to $n$. So label $j$ mutant, label one of its neighbors incumbent, denoted $i$, and label the rest of $j$'s neighborhood mutant. Also, label all of $i$'s neighbors incumbent, with the exception of $j$ and $j$'s neighbors (which were already labeled mutant). In this setting, one can show that $F(j)$ will be arbitrarily close to $F(t|t)$ and $F(i)$ will be a convex combination of $F(s|s)$ and $F(s|t)$, which are both strictly less than $F(t|t)$.
Theorem 5.3 stated that if $s$ is a classical ESS, then for graphs where $|E_n| \geq n^{1+\gamma}$, for some $\gamma > 0$, and where each organism is labeled a mutant with probability $\epsilon$, one edge must contract. Below we show that, for certain graphs and certain games, there will always exist one edge that will not contract.
Theorem 6.2. Let $F$ be a 2-player, symmetric game that has a classical ESS $s$, such that there exists a mutant strategy $t \neq s$ where $F(t|s) > F(s|t)$. There exists an infinite family of graphs $\{G_n = (V_n, E_n)\}_{n=0}^{\infty}$, where $|E_n| = \Theta(n^2)$, such that for a mutant family $\mathcal{M} = \{M_n\}_{n=0}^{\infty}$, which is determined by labeling each vertex a mutant with probability $\epsilon > 0$, the probability there exists an edge in $E_n$ that does not contract approaches 1 as $n \to \infty$.
Proof. (Sketch) Construct $G_n$ as follows. Pick $n/4$ vertices $u_1, u_2, \ldots, u_{n/4}$ and add edges such that they form a clique. Then, for each $u_i$, $i \in [n/4]$, add edges $(u_i, v_i)$, $(v_i, w_i)$ and $(w_i, x_i)$. With probability approaching 1 as $n \to \infty$, there exists an $i$ such that $u_i$ and $w_i$ are mutants and $v_i$ and $x_i$ are incumbents. Observe that $F(v_i) = F(x_i) = F(s|t)$ and $F(w_i) = F(t|s)$.
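This construction is simple enough to sample directly. The sketch below is ours (the values $\epsilon = 0.2$ and $n = 4000$ are hypothetical); it checks how often the bad labeling pattern appears:

```python
import random

# Sketch of the Theorem 6.2 construction: an n/4-clique on u_1..u_{n/4},
# with a pendant path u_i - v_i - w_i - x_i attached to each u_i.
# We look for an i where u_i, w_i are mutants and v_i, x_i are incumbents;
# then F(v_i) = F(x_i) = F(s|t) while F(w_i) = F(t|s), and when
# F(t|s) > F(s|t) the edges (v_i, w_i) and (x_i, w_i) do not contract.

def bad_gadget_exists(n, eps, rng):
    for _ in range(n // 4):
        u, v, w, x = (rng.random() < eps for _ in range(4))
        if u and w and not v and not x:
            return True
    return False

rng = random.Random(1)
print(sum(bad_gadget_exists(4000, 0.2, rng) for _ in range(20)), "of 20 trials")
```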
6.2 Stronger Contraction for Individuals
The model of Section 3 requires that for an edge (i, j) to
contract, the fitness of i must be greater than the fitness of j.
One way to strengthen this notion of contraction would be
to require that the maximum fitness incumbent in the
neighborhood of j be more fit than the maximum fitness mutant
in the neighborhood of j. This models the idea that each
organism is trying to take over each place in its
neighborhood, but only the most fit organism in the neighborhood
of a vertex gets the privilege of taking it. If we assume that
we adopt this notion of contraction for individual mutants,
and require that all incumbent-mutant edges contract, we
will next show that Theorems 6.1 and 6.2 still hold, and
thus it is still impossible to get results such as Theorems 5.1
and 5.3 which hold for every 2-player, symmetric game.
In the proof of Theorem 6.1 we proved that F(i) is strictly
less than F(j). Observe that the maximum fitness mutant in
the neighborhood of j must have fitness at least F(j). Also
observe that there is only 1 incumbent in the neighborhood
of j, namely i. So under this stronger notion of contraction,
the edge (i, j) will not contract.
Similarly, in the proof of Theorem 6.2, observe that the
only mutant in the neighborhood of wi is wi itself, which
has fitness F(t|s). Furthermore, the only incumbents in the
neighborhood of wi are vi and xi, both of which have fitness
F(s|t). By assumption, F(t|s) > F(s|t), thus, under this
stronger notion of contraction, neither of the
incumbent-mutant edges, (vi, wi) and (xi, wi), will contract.
7. REFERENCES
[1] Elwyn R. Berlekamp, John Horton Conway, and Richard K. Guy. Winning Ways for Your Mathematical Plays, volume 4. AK Peters, Ltd, March 2004.
[2] Jonas Björnerstedt and Karl H. Schlag. On the
evolution of imitative behavior. Discussion Paper
B-378, University of Bonn, 1996.
[3] L. E. Blume. The statistical mechanics of strategic
interaction. Games and Economic Behavior,
5:387-424, 1993.
[4] L. E. Blume. The statistical mechanics of
best-response strategy revision. Games and Economic
Behavior, 11(2):111-145, November 1995.
[5] B. Bollobás. Random Graphs. Cambridge University
Press, 2001.
[6] Michael Suk-Young Chwe. Communication and
coordination in social networks. Review of Economic
Studies, 67:1-16, 2000.
[7] Glenn Ellison. Learning, local interaction, and
coordination. Econometrica, 61(5):1047-1071, Sept.
1993.
[8] I. Eshel, L. Samuelson, and A. Shaked. Altruists,
egoists, and hooligans in a local interaction model.
The American Economic Review, 88(1), 1998.
[9] Geoffrey R. Grimmett and David R. Stirzaker.
Probability and Random Processes. Oxford University
Press, 3rd edition, 2001.
[10] M. Jackson. A survey of models of network formation:
Stability and efficiency. In Group Formation in
Economics; Networks, Clubs and Coalitions.
Cambridge University Press, 2004.
[11] S. Kakade, M. Kearns, J. Langford, and L. Ortiz.
Correlated equilibria in graphical games. ACM
Conference on Electronic Commerce, 2003.
[12] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, and
S. Suri. Economic properties of social networks.
Neural Information Processing Systems, 2004.
[13] M. Kearns, M. Littman, and S. Singh. Graphical
models for game theory. Conference on Uncertainty in
Artificial Intelligence, pages 253-260, 2001.
[14] E. Lieberman, C. Hauert, and M. A. Nowak.
Evolutionary dynamics on graphs. Nature,
433:312-316, 2005.
[15] S. Morris. Contagion. Review of Economic Studies,
67(1):57-78, 2000.
[16] Karl H. Schlag. Why imitate and if so, how? Journal
of Economic Theory, 78:130-156, 1998.
[17] J. M. Smith. Evolution and the Theory of Games.
Cambridge University Press, 1982.
[18] William L. Vickery. How to cheat against a simple
mixed strategy ESS. Journal of Theoretical Biology,
127:133-139, 1987.
[19] Jörgen W. Weibull. Evolutionary Game Theory. The
MIT Press, 1995.
APPENDIX
A. GRAPHICAL AND CLASSICAL ESS
In this section we explore the conditions under which a
graphical ESS is also a classical ESS. To do so, we state and
prove two theorems which provide converses to each of the
major theorems in Section 5.
A.1 Random Graphs, Adversarial Mutations
Theorem 5.1 states that if $s$ is a classical ESS and $\mathcal{G} = \{G_{n,p}\}$, where $p = \Omega(1/n^c)$ and $0 \leq c < 1$, then with probability 1 as $n \to \infty$, $s$ is an ESS with respect to $\mathcal{G}$. Here we show that if $s$ is an ESS with respect to $\mathcal{G}$, then $s$ is a classical ESS. In order to prove this theorem, we do not need the full generality of $s$ being an ESS for $\mathcal{G}$ when $p = \Omega(1/n^c)$ where $0 \leq c < 1$. All we need is $s$ to be an ESS for $\mathcal{G}$ when $p = 1$. In this case there are no more probabilistic events in the theorem statement. Also, since $p = 1$ each graph in $\mathcal{G}$ is a clique, so if one incumbent has a higher fitness than one mutant, then all incumbents have higher fitness than all mutants. This gives rise to the following theorem.
Theorem A.1. Let $F$ be any 2-player, symmetric game, and suppose $s$ is a strategy for $F$ and $t \neq s$ is a mutant strategy. Let $\mathcal{G} = \{K_n\}_{n=0}^{\infty}$. If, as $n \to \infty$, for any $\epsilon_t$-linear family of mutants $\mathcal{M} = \{M_n\}_{n=0}^{\infty}$, there exists an incumbent $i$ and a mutant $j$ such that $F(i) > F(j)$, then $s$ is a classical ESS of $F$.
The proof of this theorem analyzes the limiting behavior
of the mutant population as the size of the cliques in G tends
to infinity. It also shows how the definition of ESS given in
Section 5 recovers the classical definition of ESS.
Proof. Since each graph in $\mathcal{G}$ is a clique, every incumbent will have the same number of incumbent and mutant neighbors, and every mutant will have the same number of incumbent and mutant neighbors. Thus, all incumbents will have identical fitness and all mutants will have identical fitness. Next, one can construct an $\epsilon_t$-linear mutant family $\mathcal{M}$, where the fraction of mutants converges to $\epsilon$ for any $\epsilon$, where $\epsilon_t > \epsilon > 0$. So for $n$ large enough, the number of mutants in $K_n$ will be arbitrarily close to $\epsilon n$. Thus, any mutant subset of size $\epsilon n$ will result in all incumbents having fitness $(1 - \frac{\epsilon n}{n-1})F(s|s) + \frac{\epsilon n}{n-1}F(s|t)$, and all mutants having fitness $(1 - \frac{\epsilon n - 1}{n-1})F(t|s) + \frac{\epsilon n - 1}{n-1}F(t|t)$. Furthermore, by assumption the incumbent fitness must be higher than the mutant fitness. This implies
$$\lim_{n\to\infty}\left(\left(1 - \frac{\epsilon n}{n-1}\right)F(s|s) + \frac{\epsilon n}{n-1}F(s|t) > \left(1 - \frac{\epsilon n - 1}{n-1}\right)F(t|s) + \frac{\epsilon n - 1}{n-1}F(t|t)\right) = 1.$$
This implies $(1-\epsilon)F(s|s) + \epsilon F(s|t) > (1-\epsilon)F(t|s) + \epsilon F(t|t)$, for all $\epsilon$ where $\epsilon_t > \epsilon > 0$.
A.2 Adversarial Graphs, Random Mutations
Theorem 5.2 states that if $s$ is a classical ESS for a 2-player, symmetric game $F$, where $G$ is chosen adversarially subject to the constraint that the degree of each vertex is $\Omega(n^{\gamma})$ (for any constant $\gamma > 0$), and mutants are chosen with probability $\epsilon$, then $s$ is an ESS with respect to $F$, $G$, and $\mathcal{M}$. Here we show that if $s$ is an ESS with respect to $F$, $G$, and $\mathcal{M}$, then $s$ is a classical ESS.
All we will need to prove this is that $s$ is an ESS with respect to $\mathcal{G} = \{K_n\}_{n=0}^{\infty}$, that is, when each vertex has degree $n - 1$. As in Theorem A.1, since the graphs are cliques, if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants. Thus, the theorem below is also a converse to Theorem 5.3. (Recall that Theorem 5.3 uses a weaker notion of contraction that requires only one incumbent to have higher fitness than one mutant.)
Theorem A.2. Let $F$ be any 2-player symmetric game, and suppose $s$ is an incumbent strategy for $F$ and $t \neq s$ is a mutant strategy. Let $\mathcal{G} = \{K_n\}_{n=0}^{\infty}$. If with probability 1 as $n \to \infty$, $s$ is an ESS for $\mathcal{G}$ and a mutant family $\mathcal{M} = \{M_n\}_{n=0}^{\infty}$, which is determined by labeling each vertex a mutant with probability $\epsilon$, where $\epsilon_t > \epsilon > 0$, then $s$ is a classical ESS of $F$.
This proof also analyzes the limiting behavior of the mutant population as the size of the cliques in $\mathcal{G}$ tends to infinity. Since the mutants are chosen randomly, we will use an argument similar to the proof that a sequence of random variables that converges in probability also converges in distribution. In this case the sequence of random variables will be the actual fraction of mutants in each $K_n$.
Proof. Fix any value of $\epsilon$, where $\epsilon_t > \epsilon > 0$, and construct each $M_n$ by labeling a vertex a mutant with probability $\epsilon$. By the same argument as in the proof of Theorem A.1, if the actual fraction of mutants in $K_n$ is denoted by $\epsilon_n$, any mutant subset of size $\epsilon_n n$ will result in all incumbents having fitness $(1 - \frac{\epsilon_n n}{n-1})F(s|s) + \frac{\epsilon_n n}{n-1}F(s|t)$, and in all mutants having fitness $(1 - \frac{\epsilon_n n - 1}{n-1})F(t|s) + \frac{\epsilon_n n - 1}{n-1}F(t|t)$. This implies
$$\lim_{n\to\infty} \Pr(s \text{ is an ESS for } G_n \text{ w.r.t. } \epsilon_n n \text{ mutants}) = 1 \;\Rightarrow$$
$$\lim_{n\to\infty} \Pr\left(\left(1 - \frac{\epsilon_n n}{n-1}\right)F(s|s) + \frac{\epsilon_n n}{n-1}F(s|t) > \left(1 - \frac{\epsilon_n n - 1}{n-1}\right)F(t|s) + \frac{\epsilon_n n - 1}{n-1}F(t|t)\right) = 1 \;\Leftrightarrow$$
$$\lim_{n\to\infty} \Pr\left(\epsilon_n > \frac{F(t|s) - F(s|s)}{F(s|t) - F(s|s) - F(t|t) + F(t|s)} + \frac{F(s|s) - F(t|t)}{n}\right) = 1 \qquad (3)$$
By two simple applications of the Chernoff bound and an application of the union bound, one can show that the sequence of random variables $\{\epsilon_n\}_{n=0}^{\infty}$ converges to $\epsilon$ in probability. Next, if we let $X_n = -\epsilon_n$, $X = -\epsilon$, $b = -F(s|s) + F(t|t)$, and $a = -\frac{F(t|s) - F(s|s)}{F(s|t) - F(s|s) - F(t|t) + F(t|s)}$, by Theorem A.3 below, we get that $\lim_{n\to\infty} \Pr(X_n < a + b/n) = \Pr(X < a)$. Combining this with equation (3), $\Pr(\epsilon > -a) = 1$.
The proof of the following theorem is very similar to the
proof that a sequence of random variables that converges in probability also converges in distribution. A good
explanation of this can be found in [9], which is the basis for the
argument below.
Theorem A.3. If $\{X_n\}_{n=0}^{\infty}$ is a sequence of random variables that converges in probability to the random variable $X$, and $a$ and $b$ are constants, then $\lim_{n\to\infty} \Pr(X_n < a + b/n) = \Pr(X < a)$.
Proof. By Lemma A.1 (see below) we have the following
two inequalities,
Pr(X < a + b/n − τ)
≤ Pr(Xn < a + b/n) + Pr(|X − Xn| > τ),
Pr(Xn < a + b/n)
≤ Pr(X < a + b/n + τ) + Pr(|X − Xn| > τ).
Combining these gives,
Pr(X < a + b/n − τ) − Pr(|X − Xn| > τ)
≤ Pr(Xn < a + b/n)
≤ Pr(X < a + b/n + τ) + Pr(|X − Xn| > τ).
There exists an n0 such that for all n > n0, |b/n| < τ, so
the following statement holds for all n > n0.
Pr(X < a − 2τ) − Pr(|X − Xn| > τ)
≤ Pr(Xn < a + b/n)
≤ Pr(X < a + 2τ) + Pr(|X − Xn| > τ).
Taking the limit as $n \to \infty$ of both sides of both inequalities, and since $X_n$ converges in probability to $X$,
$$\Pr(X < a - 2\tau) \leq \lim_{n\to\infty} \Pr(X_n < a + b/n) \qquad (4)$$
$$\lim_{n\to\infty} \Pr(X_n < a + b/n) \leq \Pr(X < a + 2\tau). \qquad (5)$$
Recall that X is a continuous random variable representing
the fraction of mutants in an infinite sized graph. So if we
let FX (a) = Pr(X < a), we see that FX (a) is a cumulative
distribution function of a continuous random variable, and
is therefore continuous from the right. So
$$\lim_{\tau \downarrow 0} F_X(a - \tau) = \lim_{\tau \downarrow 0} F_X(a + \tau) = F_X(a).$$
Thus if we take the limit as $\tau \downarrow 0$ of inequalities (4) and (5) we get
$$\Pr(X < a) = \lim_{n\to\infty} \Pr(X_n < a + b/n).$$
The following lemma is quite useful, as it expresses the
cumulative distribution of one random variable Y , in terms
of the cumulative distribution of another random variable
X and the difference between X and Y .
Lemma A.1. If $X$ and $Y$ are random variables, $c \in \mathbb{R}$, and $\tau > 0$, then
$$\Pr(Y < c) \leq \Pr(X < c + \tau) + \Pr(|Y - X| > \tau).$$
Proof.
$$\begin{aligned} \Pr(Y < c) &= \Pr(Y < c,\, X < c + \tau) + \Pr(Y < c,\, X \geq c + \tau) \\ &\leq \Pr(Y < c \mid X < c + \tau)\Pr(X < c + \tau) + \Pr(|Y - X| > \tau) \\ &\leq \Pr(X < c + \tau) + \Pr(|Y - X| > \tau) \end{aligned}$$
Keywords: nash equilibrium; game theory; randomization power; evolutionary stable strategy; natural strengthening; geographical restriction; equilibrium outcome; power of randomization; undirected graph; edge density condition; graph topology; mutation set; evolutionary game theory; graph-theoretic model; relationship of topology; network; topology relationship; pairwise interaction

train_J-41 | An Analysis of Alternative Slot Auction Designs for Sponsored Search

Abstract: Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: rank by bid (RBB) and rank by revenue (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the short-run incomplete information setting and the long-run complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.

1. INTRODUCTION
Today, Internet giants Google and Yahoo! boast a
combined market capitalization of over $150 billion, largely on
the strength of sponsored search, the fastest growing
component of a resurgent online advertising industry.
PricewaterhouseCoopers estimates that 2004 industry-wide sponsored
search revenues were $3.9 billion, or 40% of total
Internet advertising revenues.1
Industry watchers expect 2005
revenues to reach or exceed $7 billion.2
Roughly 80% of
Google"s estimated $4 billion in 2005 revenue and roughly
45% of Yahoo!"s estimated $3.7 billion in 2005 revenue will
likely be attributable to sponsored search.3
A number of
other companies-including LookSmart, FindWhat,
InterActiveCorp (Ask Jeeves), and eBay (Shopping.com)-earn
hundreds of millions of dollars of sponsored search revenue
annually.
Sponsored search is a form of advertising where merchants
pay to appear alongside web search results. For example,
when a user searches for used honda accord san diego in
a web search engine, a variety of commercial entities (San
Diego car dealers, Honda Corp, automobile information
portals, classified ad aggregators, eBay, etc.) may bid to
have their listings featured alongside the standard
algorithmic search listings. Advertisers bid for placement on the
page in an auction-style format where the higher they bid
the more likely their listing will appear above other ads on
the page. By convention, sponsored search advertisers
generally pay per click, meaning that they pay only when a user
clicks on their ad, and do not pay if their ad is displayed but
not clicked. Though many people claim to systematically
ignore sponsored search ads, Majestic Research reports that as many as 17% of Google searches result in a paid click, and that Google earns roughly nine cents on average for every search query they process.4

1. www.iab.net/resources/adrevenue/pdf/IAB PwC 2004full.pdf
2. battellemedia.com/archives/002032.php
3. These are rough back-of-the-envelope estimates. Google and Yahoo! 2005 revenue estimates were obtained from Yahoo! Finance. We assumed $7 billion in 2005 industry-wide sponsored search revenues. We used Nielsen/NetRatings estimates of search engine market share in the US, the most monetized market: wired-vig.wired.com/news/technology/0,1282,69291,00.html. Using comScore's international search engine market share estimates would yield different estimates: www.comscore.com/press/release.asp?press=622
Usually, sponsored search results appear in a separate
section of the page designated as sponsored above or to the
right of the algorithmic results. Sponsored search results
are displayed in a format similar to algorithmic results: as
a list of items each containing a title, a text description,
and a hyperlink to a corresponding web page. We call each
position in the list a slot. Generally, advertisements that
appear in a higher ranked slot (higher on the page) garner
more attention and more clicks from users. Thus, all else
being equal, merchants generally prefer higher ranked slots
to lower ranked slots.
Merchants bid for placement next to particular search
queries; for example, Orbitz and Travelocity may bid for
las vegas hotel while Dell and HP bid for laptop
computer. As mentioned, bids are expressed as a maximum
willingness to pay per click. For example, a forty-cent bid
by HostRocket for web hosting means HostRocket is
willing to pay up to forty cents every time a user clicks on their
ad.5
The auctioneer (the search engine6) evaluates the bids
and allocates slots to advertisers. In principle, the
allocation decision can be altered with each new incoming search
query, so in effect new auctions clear continuously over time
as search queries arrive.
Many allocation rules are plausible. In this paper, we
investigate two allocation rules, roughly corresponding to the
two allocation rules used by Yahoo! and Google. The rank
by bid (RBB) allocation assigns slots in order of bids, with
higher ranked slots going to higher bidders. The rank by
revenue (RBR) allocation assigns slots in order of the
product of bid times expected relevance, where relevance is the
proportion of users that click on the merchant"s ad after
viewing it. In our model, we assume that an ad"s expected
relevance is known to the auctioneer and the advertiser (but
not necessarily to other advertisers), and that clickthrough
rate decays monotonically with lower ranked slots. In
practice, the expected clickthrough rate depends on a number
of factors, including the position on the page, the ad text
(which in turn depends on the identity of the bidder), the
nature and intent of the user, and the context of other ads
and algorithmic results on the page, and must be learned
over time by both the auctioneer and the bidder [13]. As of
this writing, to a rough first-order approximation, Yahoo!
employs a RBB allocation and Google employs a RBR
allocation, though numerous caveats apply in both cases when
it comes to the vagaries of real-world implementations.7
4. battellemedia.com/archives/001102.php
5. Usually advertisers also set daily or monthly budget caps; in this paper we do not model budget constraints.
6. In the sponsored search industry, the auctioneer and search engine are not always the same entity. For example Google runs the sponsored search ads for AOL web search, with revenue being shared. Similarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.
7. Here are two among many exceptions to the Yahoo! = RBB and Google = RBR assertion: (1) Yahoo! excludes ads deemed insufficiently relevant either by a human editor or due to poor historical click rate; (2) Google sets differing reserve prices depending on Google's estimate of ad quality.

Even when examining a one-shot version of a slot auction, the mechanism differs from a standard multi-item auction in subtle ways. First, a single bid per merchant is used
to allocate multiple non-identical slots. Second, the bid is
communicated not as a direct preference over slots, but as
a preference for clicks that depend stochastically on slot
allocation.
We investigate a number of economic properties of RBB
and RBR slot auctions. We consider the short-run
incomplete information case in Section 3, adapting and
extending standard analyses of single-item auctions. In Section 4
we turn to the long-run complete information case; our
characterization results here draw on techniques from
linear programming. Throughout, important observations are
highlighted as claims supported by examples. Our
contributions are as follows:
• We show that with multiple slots, bidders do not reveal
their true values with either RBB or RBR, and with
either first- or second-pricing.
• With incomplete information, we find that the
informational requirements of playing the equilibrium bid
are much weaker for RBB than for RBR, because
bidders need not know any information about each others'
relevance (or even their own) with RBB.
• With incomplete information, we prove that RBR is
efficient but that RBB is not.
• We show via a simple example that no general revenue
ranking of RBB and RBR is possible.
• We prove that in a complete-information setting,
first-price slot auctions have no pure strategy Nash
equilibrium, but that there always exists a pure-strategy
equilibrium with second pricing.
• We provide a constant-factor bound on the deviation
from efficiency that can occur in the equilibrium of a
second-price slot auction.
In Section 2 we specify our model of bidders and the
various slot auction formats.
In Section 3.1 we study the incentive properties of each
format, asking in which cases agents would bid truthfully.
There is possible confusion here because the second-price
design for slot auctions is reminiscent of the Vickrey auction
for a single item; we note that for slot auctions the Vickrey
mechanism is in fact very different from the second-price
mechanism, and so they have different incentive properties.8
In Section 3.2 we derive the Bayes-Nash equilibrium bids
for the various auction formats. This is useful for the
efficiency and revenue results in later sections. It should
become clear in this section that slot auctions in our model
are a straightforward generalization of single-item auctions.
Sections 3.3 and 3.4 address questions of efficiency and
revenue under incomplete information, respectively.
In Section 4.1 we determine whether pure-strategy
equilibria exist for the various auction formats, under complete
information. In Section 4.2 we derive bounds on the
deviation from efficiency in the pure-strategy equilibria of
second-price slot auctions.
Our approach is positive rather than normative. We aim
to clarify the incentive, efficiency, and revenue properties of
two slot auction designs currently in use, under settings of incomplete and complete information. We do not attempt to derive the optimal mechanism for a slot auction.

8. Other authors have also made this observation [5, 6].
Related work. Feng et al. [7] compare the revenue
performance of various ranking mechanisms for slot auctions
in a model with incomplete information, much as we do in
Section 3.4, but they obtain their results via simulations
whereas we perform an equilibrium analysis.
Liu and Chen [12] study properties of slot auctions under
incomplete information. Their setting is essentially the same
as ours, except they restrict their attention to a model with
a single slot and a binary type for bidder relevance (high or
low). They find that RBR is efficient, but that no general
revenue ranking of RBB and RBR is possible, which agrees
with our results. They also take a design approach and
show how the auctioneer should assign relevance scores to
optimize its revenue.
Edelman et al. [6] model the slot auction problem both as
a static game of complete information and a dynamic game
of incomplete information. They study the locally
envy-free equilibria of the static game of complete information;
this is a solution concept motivated by certain bidding
behaviors that arise due to the presence of budget constraints.
They do not view slot auctions as static games of
incomplete information as we do, but do study them as dynamic
games of incomplete information and derive results on the
uniqueness and revenue properties of the resulting
equilibria. They also provide a nice description of the evolution of
the market for sponsored search.
Varian [18] also studies slot auctions under a setting of
complete information. He focuses on symmetric
equilibria, which are a refinement of Nash equilibria appropriate for
slot auctions. He provides bounds on the revenue obtained
in equilibrium. He also gives bounds that can be used to
infer bidder values given their bids, and performs some
empirical analysis using these results. In contrast, we focus
instead on efficiency and provide bounds on the deviation
from efficiency in complete-information equilibria.
2. PRELIMINARIES
We focus on a slot auction for a single keyword. In a
setting of incomplete information, a bidder knows only
distributions over others" private information (value per click
and relevance). With complete information, a bidder knows
others" private information, and so does not need to rely on
distributions to strategize. We first describe the model for
the case with incomplete information, and drop the
distributional information from the model when we come to the
complete-information case in Section 4.
2.1 The Model
There is a fixed number K of slots to be allocated among
N bidders. We assume without loss of generality that K ≤
N, since superfluous slots can remain blank. Bidder i assigns
a value of Xi to each click received on its advertisement,
regardless of this advertisement's rank.9

9. Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.

The probability that i's advertisement will be clicked if viewed is $A_i \in [0, 1]$. We refer to $A_i$ as bidder i's relevance. We refer to $R_i = A_i X_i$ as bidder i's revenue. The $X_i$, $A_i$, and $R_i$ are random variables and we denote their realizations by $x_i$, $\alpha_i$, and $r_i$
respectively. The probability that an advertisement will be
viewed if placed in slot j is γj ∈ [0, 1]. We assume γ1 >
γ2 > . . . > γK. Hence bidder i"s advertisement will have a
clickthrough rate of γjαi if placed in slot j. Of course, an
advertisement does not receive any clicks if it is not allocated
a slot.
Each bidder"s value and relevance pair (Xi, Ai) is
independently and identically distributed on [0, ¯x] × [0, 1] according
to a continuous density function f that has full support on
its domain. The density f and slot probabilities γ1, . . . , γK
are common knowledge. Only bidder i knows the
realization xi of its value per click Xi. Both bidder i and the seller
know the realization αi of Ai, but this realization remains
unobservable to the other bidders.
We assume that bidders have quasi-linear utility
functions. That is, the expected utility to bidder i of obtaining
the slot of rank j at a price of b per click is
$$u_i(j, b) = \gamma_j \alpha_i (x_i - b)$$
If the advertising firms bidding in the slot auction are risk-neutral and have ample liquidity, quasi-linearity is a
reasonable assumption.
The assumptions of independence, symmetry, and risk-neutrality made above are all quite standard in single-item auction theory [11, 19]. The assumption that clickthrough rate decays monotonically with lower slots (by the same factors for each agent) is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where $\gamma_k = \frac{1}{\delta^{k-1}}$ with decay $\delta > 1$. Feng et al. [7] state that their actual clickthrough data is fitted extremely well by an exponential decay model with $\delta = 1.428$.
Our model lacks budget constraints, which are an
important feature of real slot auctions. With budget constraints
keyword auctions cannot be considered independently of one
another, because the budget must be allocated across
multiple keywords (a single advertiser typically bids on multiple keywords relevant to his business). Introducing this element
into the model is an important next step for future work.10
2.2 Auction Formats
In a slot auction a bidder provides to the seller a declared value per click $\tilde{x}_i(x_i, \alpha_i)$ which depends on his true value and relevance. We often denote this declared value (bid) by $\tilde{x}_i$ for short. Since a bidder's relevance $\alpha_i$ is observable to the seller, the bidder cannot misrepresent it. We denote the $k$th highest of the $N$ declared values by $\tilde{x}^{(k)}$, and the $k$th highest of the $N$ declared revenues by $\tilde{r}^{(k)}$, where the declared revenue of bidder $i$ is $\tilde{r}_i = \alpha_i \tilde{x}_i$. We consider two types of allocation rules, rank by bid (RBB) and rank by revenue (RBR):

RBB. Slot $k$ goes to bidder $i$ if and only if $\tilde{x}_i = \tilde{x}^{(k)}$.

RBR. Slot $k$ goes to bidder $i$ if and only if $\tilde{r}_i = \tilde{r}^{(k)}$.

10. Models with budget constraints have begun to appear in this research area. Abrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness. Mehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.
We will commonly represent an allocation by a one-to-one
function σ : [K] → [N], where [n] is the set of integers
{1, 2, . . . , n}. Hence slot k goes to bidder σ(k).
We also consider two different types of payment rules.
Note that no matter what the payment rule, a bidder that
is not allocated a slot will pay 0 since his listing cannot
receive any clicks.
First-price. The bidder allocated slot k, namely σ(k), pays
˜xσ(k) per click under both the RBB and RBR
allocation rules.
Second-price. If k < N, bidder σ(k) pays ˜xσ(k+1) per click
under the RBB rule, and pays ˜rσ(k+1)/ασ(k) per click
under the RBR rule. If k = N, bidder σ(k) pays 0 per
click.11
Intuitively, a second-price payment rule sets a bidder's
payment to the lowest bid it could have declared while
maintaining the same ranking, given the allocation rule used.
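These rules are compact enough to state in code. The following Python sketch is our own illustration (not an implementation from the paper) of RBB/RBR allocation under first and second pricing; ties are broken by bidder index, standing in for the fixed permutation κ described below:

```python
def run_slot_auction(bids, alpha, K, rule="RBR", pricing="second"):
    """Allocate K slots among bidders; return [(bidder, price_per_click)].

    bids[i] is bidder i's declared value per click; alpha[i] is his
    relevance, observed by the seller. Under RBR bidders are ranked by
    declared revenue alpha[i] * bids[i]; under RBB, by bid alone.
    """
    N = len(bids)
    score = [bids[i] * (alpha[i] if rule == "RBR" else 1.0) for i in range(N)]
    # Rank by score; ties broken by index (a fixed permutation kappa).
    order = sorted(range(N), key=lambda i: (-score[i], i))
    outcome = []
    for k in range(min(K, N)):
        i = order[k]
        if pricing == "first":
            price = bids[i]
        elif k + 1 < N:  # second pricing: lowest bid preserving the rank
            j = order[k + 1]
            price = score[j] / alpha[i] if rule == "RBR" else bids[j]
        else:
            price = 0.0  # the bidder ranked last pays 0 per click
        outcome.append((i, price))
    return outcome
```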
Overture introduced the first slot auction design in 1997,
using a first-price RBB scheme. Google then followed in
2000 with a second-price RBR scheme. In 2002, Overture
(at this point acquired by Yahoo!) then switched to second
pricing but still allocates using RBB. One possible reason
for the switch is given in Section 4.
We assume that ties are broken as follows in the event
that two agents make the exact same bid or declare the
same revenue. There is a permutation of the agents κ :
[N] → [N] that is fixed beforehand. If the bids of agents i
and j are tied, then agent i obtains a higher slot if and only
if κ(i) < κ(j). This is consistent with the practice in real
slot auctions where ties are broken by the bidders' order of
arrival.
3. INCOMPLETE INFORMATION
3.1 Incentives
It should be clear that with a first-price payment rule,
truthful bidding is neither a dominant strategy nor an ex
post Nash equilibrium using either RBB or RBR, because
this guarantees a payoff of 0. There is always an incentive
to shade true values with first pricing.
The second-price payment rule is reminiscent of the
second-price (Vickrey) auction used for selling a single item, and in
a Vickrey auction it is a dominant strategy for a bidder to
reveal his true value for the item [19]. However, using a
second-price rule in a slot auction together with either
allocation rule above does not yield an incentive-compatible
mechanism, either in dominant strategies or ex post Nash
equilibrium.12
With a second-price rule there is no
incentive for a bidder to bid higher than his true value per click
using either RBB or RBR: this either leads to no change in the outcome, or a situation in which he will have to pay more than his value per click for each click received, resulting in a negative payoff.13

11. We are effectively assuming a reserve price of zero, but in practice search engines charge a non-zero reserve price per click.
12. Unless of course there is only a single slot available, since this is the single-item case. With a single slot both RBB and RBR with a second-price payment rule are dominant-strategy incentive-compatible.
However, with either allocation
rule there may be an incentive to shade true values with
second pricing.
Claim 1. With second pricing and K ≥ 2, truthful
bidding is not a dominant strategy nor an ex post Nash
equilibrium for either RBB or RBR.
Example. There are two agents and two slots. The
agents have relevance α1 = α2 = 1, whereas γ1 = 1 and
γ2 = 1/2. Agent 1 has a value of x1 = 6 per click, and agent
2 has a value of x2 = 4 per click. Let us first consider the
RBB rule. Suppose agent 2 bids truthfully. If agent 1 also
bids truthfully, he wins the first slot and obtains a payoff of
2. However, if he shades his bid down below 4, he obtains
the second slot at a cost of 0 per click yielding a payoff of
3. Since the agents have equal relevance, the exact same
situation holds with the RBR rule. Hence truthful bidding
is not a dominant strategy in either format, and neither is
it an ex post Nash equilibrium.
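For concreteness, the example can be replayed with the run_slot_auction sketch given in Section 2.2 (a hypothetical helper, not the paper's code):

```python
# Claim 1 example: two slots with gamma = (1, 1/2), both relevances 1,
# values x = (6, 4). Agent 1's payoff is 2 when truthful but 3 when
# shading below 4, under RBB (equivalently RBR here) with second pricing.
gamma = [1.0, 0.5]
alpha = [1.0, 1.0]
values = [6.0, 4.0]

def payoff_of_agent(bids, agent):
    alloc = run_slot_auction(bids, alpha, K=2, rule="RBB", pricing="second")
    for slot, (i, price) in enumerate(alloc):
        if i == agent:
            return gamma[slot] * alpha[i] * (values[i] - price)
    return 0.0

print(payoff_of_agent([6.0, 4.0], 0))  # truthful: wins slot 1, pays 4 -> 2.0
print(payoff_of_agent([3.0, 4.0], 0))  # shaded: wins slot 2, pays 0 -> 3.0
```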
To find payments that make RBB and RBR dominant-strategy incentive-compatible, we can apply Holmstrom's lemma [9] (see also chapter 3 in Milgrom [15]). Under the restriction that a bidder with value 0 per click does not pay anything (even if he obtains a slot, which can occur if there are as many slots as bidders), this lemma implies that there is a unique payment rule that achieves dominant-strategy incentive compatibility for either allocation rule. For RBB, the bidder allocated slot k is charged per click
$$\sum_{i=k+1}^{K} (\gamma_{i-1} - \gamma_i)\,\tilde{x}^{(i)} + \gamma_K\,\tilde{x}^{(K+1)} \qquad (1)$$
Note that if $K = N$, $\tilde{x}^{(K+1)} = 0$ since there is no $(K+1)$th bidder. For RBR, the bidder allocated slot $k$ is charged per click
$$\frac{1}{\alpha_{\sigma(k)}}\left(\sum_{i=k+1}^{K} (\gamma_{i-1} - \gamma_i)\,\tilde{r}^{(i)} + \gamma_K\,\tilde{r}^{(K+1)}\right) \qquad (2)$$
Using payment rule (2) and RBR, the auctioneer is aware
of the true revenues of the bidders (since they reveal their
values truthfully), and hence ranks them according to their
true revenues. We show in Section 3.3 that this allocation
is in fact efficient. Since the VCG mechanism is the unique
mechanism that is efficient, truthful, and ensures bidders
with value 0 pay nothing (by the Green-Laffont theorem [8]),
the RBR rule and payment scheme (2) constitute exactly the
VCG mechanism.
In the VCG mechanism an agent pays the externality he
imposes on others. To understand payment (2) in this sense,
note that the first term is the added utility (due to an
increased clickthrough rate) agents in slots k + 1 to K would
receive if they were all to move up a slot; the last term is the
utility that the agent with the (K+1)st highest revenue would receive by obtaining the last slot as opposed to nothing. The leading coefficient simply reduces the agent's expected payment to a payment per click.
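A transcription of the per-click charges into code may clarify the indexing. The sketch below is ours; it implements formulas (1) and (2) exactly as printed above, with declared values and revenues sorted in decreasing order and missing entries treated as 0:

```python
def truthful_price_rbb(k, gammas, x_sorted):
    """Per-click charge (1) for the bidder in slot k (1-indexed).

    gammas = [gamma_1, ..., gamma_K]; x_sorted lists the declared values
    in decreasing order, so x_sorted[i-1] is x^(i).
    """
    K = len(gammas)
    x = lambda i: x_sorted[i - 1] if i <= len(x_sorted) else 0.0
    tail = sum((gammas[i - 2] - gammas[i - 1]) * x(i)
               for i in range(k + 1, K + 1))
    return tail + gammas[K - 1] * x(K + 1)

def truthful_price_rbr(k, gammas, r_sorted, alpha_k):
    """Per-click charge (2) for the slot-k bidder with relevance alpha_k."""
    K = len(gammas)
    r = lambda i: r_sorted[i - 1] if i <= len(r_sorted) else 0.0
    tail = sum((gammas[i - 2] - gammas[i - 1]) * r(i)
               for i in range(k + 1, K + 1))
    return (tail + gammas[K - 1] * r(K + 1)) / alpha_k
```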
13. In a dynamic setting with second pricing, there may be an incentive to bid higher than one's true value in order to exhaust competitors' budgets. This phenomenon is commonly called bid jamming or antisocial bidding [4].
3.2 Equilibrium Analysis
To understand the efficiency and revenue properties of
the various auction formats, we must first understand which
rankings of the bidders occur in equilibrium with different
allocation and payment rule combinations. The following
lemma essentially follows from the Monotonic Selection
Theorem by Milgrom and Shannon [16].
Lemma 1. In a RBB (RBR) auction with either a first- or second-price payment rule, the symmetric Bayes-Nash equilibrium bid is strictly increasing with value (revenue).
As a consequence of this lemma, we find that RBB and
RBR auctions allocate the slots greedily by the true values
and revenues of the agents, respectively (whether using
first- or second-price payment rules). This will be relevant in
Section 3.3 below. For a first-price payment rule, we can
explicitly derive the symmetric Bayes-Nash equilibrium bid
functions for RBB and RBR auctions. The purpose of this
exercise is to lend qualitative insights into the parameters
that influence an agent"s bidding, and to derive formulae for
the expected revenue in RBB and RBR auctions in order
to make a revenue ranking of these two allocation rules (in
Section 3.4).
Let G(y) be the expected resulting clickthrough rate, in
a symmetric equilibrium of the RBB auction (with either
payment rule), to a bidder with value y and relevance α =
1. Let H(y) be the analogous quantity for a bidder with
revenue y and relevance 1 in a RBR auction. By Lemma 1,
a bidder with value $y$ will obtain slot $k$ in a RBB auction if $y$ is the $k$th highest of the true realized values. The same applies in a RBR auction when $y$ is the $k$th highest of the true realized revenues. Let $F_X(y)$ be the distribution function for value, and let $F_R(y)$ be the distribution function for revenue. The probability that $y$ is the $k$th highest out of $N$ values is
$$\binom{N-1}{k-1} (1 - F_X(y))^{k-1} F_X(y)^{N-k}$$
whereas the probability that $y$ is the $k$th highest out of $N$ revenues is the same formula with $F_R$ replacing $F_X$. Hence we have
$$G(y) = \sum_{k=1}^{K} \gamma_k \binom{N-1}{k-1} (1 - F_X(y))^{k-1} F_X(y)^{N-k}$$
The $H$ function is analogous to $G$ with $F_R$ replacing $F_X$.
In the two propositions that follow, g and h are the
derivatives of G and H respectively. We omit the proof of the
next proposition, because it is almost identical to the
derivation of the equilibrium bid in the single-item case (see
Krishna [11], Proposition 2.2).
Proposition 1. The symmetric Bayes-Nash equilibrium
strategies in a first-price RBB auction are given by
$$\tilde{x}^B(x, \alpha) = \frac{1}{G(x)} \int_0^x y\, g(y)\, dy$$
The first-price equilibrium above closely parallels the
first-price equilibrium in the single-item model. With a single
item g is the density of the second highest value among all
N agent values, whereas in a slot auction it is a weighted
combination of the densities for the second, third, etc.
highest values.
Note that the symmetric Bayes-Nash equilibrium bid in
a first-price RBB auction does not depend on a bidder's relevance α. To see clearly why, note that a bidder chooses a bid $b$ so as to maximize the objective
$$\alpha\, G(\tilde{x}^{-1}(b))\,(x - b)$$
and here α is just a leading constant factor. So dropping
it does not change the set of optimal solutions. Hence the
equilibrium bid depends only on the value x and function
G, and G in turn depends only on the marginal cumulative
distribution of value FX . So really only the latter needs to
be common knowledge to the bidders. On the other hand,
we will now see that information about relevance is needed
for bidders to play the equilibrium in the first-price RBR
auction. So the informational requirements for a first-price
RBB auction are much weaker than for a first-price RBR
auction: in the RBB auction a bidder need not know his own
relevance, and need not know any distributional information
over others' relevance in order to play the equilibrium.
Again we omit the next proposition's proof since it is so
similar to the one above.
Proposition 2. The symmetric Bayes-Nash equilibrium
strategies in a first-price RBR auction are given by
$$\tilde{x}^R(x, \alpha) = \frac{1}{\alpha H(\alpha x)} \int_0^{\alpha x} y\, h(y)\, dy$$
Here it can be seen that the equilibrium bid is increasing
with x, but not necessarily with α. This should not be much
of a concern to the auctioneer, however, because in any case
the declared revenue in equilibrium is always increasing in
the true revenue.
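As a numerical illustration, the equilibrium bid of Proposition 1 can be evaluated directly. The sketch below is our own; it assumes values uniform on [0, 1] (so $F_X(y) = y$) and uses integration by parts, $\int_0^x y\,g(y)\,dy = xG(x) - \int_0^x G(y)\,dy$, to avoid differentiating $G$:

```python
from math import comb

def G(y, gammas, N):
    # G(y) = sum_k gamma_k * C(N-1, k-1) * (1-F(y))^(k-1) * F(y)^(N-k),
    # specialized to F_X(y) = y (values uniform on [0, 1]).
    return sum(g * comb(N - 1, k) * (1.0 - y)**k * y**(N - 1 - k)
               for k, g in enumerate(gammas))

def rbb_equilibrium_bid(x, gammas, N, steps=2000):
    if x == 0.0:
        return 0.0
    h = x / steps
    Gs = [G(i * h, gammas, N) for i in range(steps + 1)]
    integral = h * (sum(Gs) - 0.5 * (Gs[0] + Gs[-1]))  # trapezoid rule
    return x - integral / Gs[-1]

# Two bidders, two slots with gamma = (1, 1/2): the bid sits well below the
# true value, since even the losing rank retains half the clickthrough.
print(rbb_equilibrium_bid(0.8, [1.0, 0.5], N=2))  # ~0.178
```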
It would be interesting to obtain the equilibrium bids
when using a second-price payment rule, but it appears that
the resulting differential equations for this case do not have a
neat analytical solution. Nonetheless, the same conclusions
about the informational requirements of the RBB and RBR
rules still hold, as can be seen simply by inspecting the
objective function associated with an agent"s bidding problem
for the second-price case.
3.3 Efficiency
A slot auction is efficient if in equilibrium the sum of the
bidders" revenues from their allocated slots is maximized.
Using symmetry as our equilibrium selection criterion, we
find that the RBB auction is not efficient with either
payment rule.
Claim 2. The RBB auction is not efficient with either
first or second pricing.
Example. There are two agents and one slot, with γ1 =
1. Agent 1 has a value of x1 = 6 per click and relevance
α1 = 1/2. Agent 2 has a value of x2 = 4 per click and
relevance α2 = 1. By Lemma 1, agents are ranked greedily
by value. Hence agent 1 obtains the lone slot, for a total
revenue of 3 to the agents. However, it is most efficient to
allocate the slot to agent 2, for a total revenue of 4.
Examples with more agents or more slots are simple to
construct along the same lines. On the other hand, under
our assumptions on how clickthrough rate decreases with
lower rank, the RBR auction is efficient with either payment
rule.
222
Theorem 1. The RBR auction is efficient with either
first- or second-price payments rules.
Proof. Since by Lemma 1 the agents' equilibrium bids
are increasing functions of their revenues in the RBR
auction, slots are allocated greedily according to true revenues.
Let σ be a non-greedy allocation. Then there are slots s, t
with s < t and rσ(s) < rσ(t). We can switch the agents in
slots s and t to obtain a new allocation, and the difference
between the total revenue in this new allocation and the
original allocation"s total revenue is
`
γtrσ(s) + γsrσ(t)
´
−
`
γsrσ(s) + γtrσ(t)
´
= (γs − γt)
`
rσ(t) − rσ(s)
´
Both parenthesized terms above are positive. Hence the
switch has increased the total revenue to the bidders. If we
continue to perform such switches, we will eventually reach
a greedy allocation of greater revenue than the initial
allocation. Since the initial allocation was arbitrary, it follows
that a greedy allocation is always efficient, and hence the
RBR auction"s allocation is efficient.
Note that the assumption that clickthrough rate decays
monotonically by the same factors γ1, . . . , γK for all agents is
crucial to this result. A greedy allocation scheme does not
necessarily find an efficient solution if the clickthrough rates
are monotonically decreasing in an independent fashion for
each agent.
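The exchange argument can also be checked mechanically against brute force; the sketch below is ours, with hypothetical revenues and an exponential decay sequence:

```python
from itertools import permutations

def total_value(sigma, r, gammas):
    # Value of allocation sigma: sum over slots k of gamma_k * r_{sigma(k)}.
    return sum(g * r[i] for g, i in zip(gammas, sigma))

r = [0.9, 2.4, 1.5, 0.3]            # hypothetical revenues r_i = alpha_i * x_i
gammas = [1.0, 0.7, 0.49, 0.343]    # exponential decay, delta ~ 1.43
greedy = tuple(sorted(range(len(r)), key=lambda i: -r[i]))
best = max(permutations(range(len(r))),
           key=lambda s: total_value(s, r, gammas))
assert abs(total_value(greedy, r, gammas) - total_value(best, r, gammas)) < 1e-12
```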
3.4 Revenue
To obtain possible revenue rankings for the different
auction formats, we first note that when the allocation rule is
fixed to RBB, then using either a first-price, second-price, or
truthful payment rule leads to the same expected revenue in
a symmetric, increasing Bayes-Nash equilibrium. Because
a RBB auction ranks agents by their true values in
equilibrium for any of these payment rules (by Lemma 1), it
follows that expected revenue is the same for all these
payment rules, following arguments that are virtually identical
to those used to establish revenue equivalence in the
single-item case (see e.g. Proposition 3.1 in Krishna [11]). The
same holds for RBR auctions; however, the revenue ranking
of the RBB and RBR allocation rules is still unclear.
Because of this revenue equivalence principle, we can choose
whichever payment rule is most convenient for the purpose
of making revenue comparisons.
Using Propositions 1 and 2, it is a simple matter to
derive formulae for the expected revenue under both allocation
rules. The payment of an agent in a RBB auction is
$$m^B(x, \alpha) = \alpha\, G(x)\, \tilde{x}^B(x, \alpha)$$
The expected revenue is then $N \cdot E[m^B(X, A)]$, where the expectation is taken with respect to the joint density of value and relevance. The expected revenue formula for RBR auctions is entirely analogous, using $\tilde{x}^R(x, \alpha)$ and the $H$
function. With these in hand we can obtain revenue rankings
for specific numbers of bidders and slots, and specific
distributions over values and relevance.
Claim 3. For fixed K, N, and fixed γ1, . . . , γK, no
revenue ranking of RBB and RBR is possible for an arbitrary
density f.
Example. Assume there are 2 bidders, 2 slots, and that
γ1 = 1, γ2 = 1/2. Assume that value-relevance pairs are
uniformly distributed over [0, 1]× [0, 1]. For such a
distribution with a closed-form formula, it is most convenient to use
the revenue formulae just derived. RBB dominates RBR in
terms of revenue for these parameters. The formula for the
expected revenue in a RBB auction yields 1/12, whereas for
RBR auctions we have 7/108.
Assume instead that with probability 1/2 an agent's value-relevance pair is (1, 1/2), and that with probability 1/2 it
is (1/2, 1). In this scenario it is more convenient to appeal
to formulae (1) and (2). In a truthful auction the second
agent will always pay 0. According to (1), in a truthful
RBB auction the first agent makes an expected payment of
$$E\left[(\gamma_1 - \gamma_2) A_{\sigma(1)} X_{\sigma(2)}\right] = \frac{1}{2}\, E\left[A_{\sigma(1)}\right] E\left[X_{\sigma(2)}\right]$$
where we have used the fact that value and relevance are
independently distributed for different agents. The expected
relevance of the agent with the highest value is $E[A_{\sigma(1)}] = 5/8$. The expected second highest value is also $E[X_{\sigma(2)}] = 5/8$. The expected revenue for a RBB auction here is then 25/128. According to (2), in a truthful RBR auction the
first agent makes an expected payment of
$$E\left[(\gamma_1 - \gamma_2) R_{\sigma(2)}\right] = \frac{1}{2}\, E\left[R_{\sigma(2)}\right]$$
In expectation the second highest revenue is $E[R_{\sigma(2)}] = 1/2$, so the expected revenue for a RBR auction is 1/4.
Hence in this case the RBR auction yields higher expected
revenue.14,15
This example suggests the following conjecture: when
value and relevance are either uncorrelated or positively
correlated, RBB dominates RBR in terms of revenue. When
value and relevance are negatively correlated, RBR
dominates.
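The uniform case of the example is easy to reproduce by simulation. The sketch below is ours; it relies on the revenue equivalence argument of this section and charges the truthful payments, under which only the top-ranked of the two bidders pays:

```python
import random

# Two bidders, two slots, gamma = (1, 1/2), (X, A) uniform on [0,1]^2.
# Under truthful payments only the slot-1 bidder pays: the total RBB
# payment is alpha_sigma(1) * (gamma_1 - gamma_2) * X_sigma(2), and the
# total RBR payment is (gamma_1 - gamma_2) * R_sigma(2).
rng = random.Random(0)
g1, g2 = 1.0, 0.5
n = 200_000
rbb_total = rbr_total = 0.0
for _ in range(n):
    x = [rng.random(), rng.random()]
    a = [rng.random(), rng.random()]
    rev = [x[0] * a[0], x[1] * a[1]]
    top = 0 if x[0] >= x[1] else 1            # RBB ranks by value
    rbb_total += a[top] * (g1 - g2) * min(x)  # x^(2) = min of two values
    rbr_total += (g1 - g2) * min(rev)         # r^(2) = min of two revenues
print(rbb_total / n, "vs", 1 / 12)   # ~0.0833
print(rbr_total / n, "vs", 7 / 108)  # ~0.0648
```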
4. COMPLETE INFORMATION
In typical slot auctions such as those run by Yahoo! and
Google, bidders can adjust their bids up or down at any
time. As Börgers et al. [2] and Edelman et al. [6] have noted, this can be viewed as a continuous-time process in which bidders learn each other's bids. If the process stabilizes, the result can then be modeled as a Nash equilibrium in pure strategies of the static one-shot game of complete information, since each bidder will be playing a best-response to the others' bids.16
This argument seems especially
appropriate for Yahoo!'s slot auction design where all bids are made public. Google keeps bids private, but experimentation can allow one to discover other bids, especially since second pricing automatically reveals to an agent the bid of the agent ranked directly below him.

14. To be entirely rigorous and consistent with our initial assumptions, we should have constructed a continuous probability density with full support over an appropriate domain. Taking the domain to be e.g. [0, 1] × [0, 1] and a continuous density with full support that is sufficiently concentrated around (1, 1/2) and (1/2, 1), with roughly equal mass around both, would yield the same conclusion.
15. Claim 3 should serve as a word of caution, because Feng et al. [7] find through their simulations that with a bivariate normal distribution over value-relevance pairs, and with 5 slots, 15 bidders, and δ = 2, RBR dominates RBB in terms of revenue for any level of correlation between value and relevance. However, they assume that bidding behavior in a second-price slot auction can be well approximated by truthful bidding.
16. We do not claim that bidders will actually learn each others' private information (value and relevance), just that for a stable set of bids there is a corresponding equilibrium of the complete information game.
4.1 Equilibrium Analysis
In this section we ask whether a pure-strategy Nash
equilibrium exists in a RBB or RBR slot auction, with either
first or second pricing.
Before dealing with the first-price case there is a technical
issue involving ties. In our model we allow bids to be
nonnegative real numbers for mathematical convenience, but
this can become problematic because there is then no bid
that is just higher than another. We brush over such
issues by assuming that an agent can bid infinitesimally
higher than another. This is imprecise but allows us to
focus on the intuition behind the result that follows. See
Reny [17] for a full treatment of such issues.
For the remainder of the paper, we assume that there are
as many slots as bidders. The following result shows that
there can be no pure-strategy Nash equilibrium with first
pricing.17
Note that the argument holds for both RBB and
RBR allocation rules. For RBB, bids should be interpreted
as declared values, and for RBR as declared revenues.
Theorem 2. There exists no complete information Nash
equilibrium in pure strategies in the first-price slot auction,
for any possible values of the agents, whether using a RBB
or RBR allocation rule.
Proof. Let $\sigma : [K] \to [N]$ be the allocation of slots to the agents resulting from their bids. Let $r_i$ and $b_i$ be the revenue and bid of the agent ranked $i$th, respectively. Note that we cannot have $b_i > b_{i+1}$, or else the agent in slot $i$ can make a profitable deviation by instead bidding $b_i - \epsilon > b_{i+1}$ for small enough $\epsilon > 0$. This does not change its allocation, but increases its profit. Hence we must have $b_i = b_{i+1}$ (i.e. with one bidder bidding infinitesimally higher than the other). Since this holds for any two consecutive bidders, it follows that in a Nash equilibrium all bidders must be bidding 0 (since the bidder ranked last matches the bid directly below him, which is 0 by default because there is no such bid). But this is impossible: consider the bidder ranked last. The identity of this bidder is always clear given the deterministic tie-breaking rule. This bidder can obtain the top spot and increase his revenue by $(\gamma_1 - \gamma_K) r_K > 0$ by bidding some $\epsilon > 0$, and for small enough $\epsilon$ this is necessarily a profitable deviation. Hence there is no Nash equilibrium in pure strategies.
On the other hand, we find that in a second-price slot
auction there can be a multitude of pure strategy Nash
equilibria. The next two lemmas give conditions that characterize
the allocations that can occur as a result of an equilibrium
profile of bids, given fixed agent values and revenues. Then
if we can exhibit an allocation that satisfies these conditions,
there must exist at least one equilibrium. We first consider
the RBR case.
17. Börgers et al. [2] have proven this result in a model with three bidders and three slots, and we generalize their argument. Edelman et al. [6] also point out this non-existence phenomenon. They only illustrate the fact with an example because the result is quite immediate.
Lemma 2. Given an allocation σ, there exists a Nash
equilibrium profile of bids b leading to σ in a second-price RBR
slot auction if and only if
$$\left(1 - \frac{\gamma_i}{\gamma_{j+1}}\right) r_{\sigma(i)} \leq r_{\sigma(j)}$$
for $1 \leq j \leq N - 2$ and $i \geq j + 2$.
Proof. There exists a desired vector b which constitutes
a Nash equilibrium if and only if the following set of
inequalities can be satisfied (the variables are the πi and bj):
πi ≥ γj(rσ(i) − bj) ∀i, ∀j < i (3)
πi ≥ γj(rσ(i) − bj+1) ∀i, ∀j > i (4)
πi = γi(rσ(i) − bi+1) ∀i (5)
bi ≥ bi+1 1 ≤ i ≤ N − 1 (6)
πi ≥ 0, bi ≥ 0 ∀i
Here rσ(i) is the revenue of the agent allocated slot i, and
πi and bi may be interpreted as this agent"s surplus and
declared revenue, respectively. We first argue that
constraints (6) can be removed, because the inequalities above
can be satisfied if and only if the inequalities without (6) can
be satisfied. The necessary direction is immediate. Assume
we have a vector (π, b) which satisfies all inequalities above
except (6). Then there is some i for which bi < bi+1.
Construct a new vector (π, b ) identical to the original except
with bi+1 = bi. We now have bi = bi+1. An agent in slot
k < i sees the price of slot i decrease from bi+1 to bi+1 = bi,
but this does not make i more preferred than k to this agent
because we have πk ≥ γi−1(rσ(k) − bi) ≥ γi(rσ(k) − bi) =
γi(rσ(k) −bi+1) (i.e. because the agent in slot k did not
originally prefer slot i − 1 at price bi, he will not prefer slot i
at price bi). A similar argument applies for agents in slots
k > i + 1. The agent in slot i sees the price of this slot
go down, which only makes it more preferred. Finally, the
agent in slot i + 1 sees no change in the price of any slot, so
his slot remains most preferred. Hence inequalities (3)-(5)
remain valid at (π, b ). We first make this change to the bi+1
where bi < bi+1 and index i is smallest. We then recursively
apply the change until we eventually obtain a vector that
satisfies all inequalities.
We safely ignore inequalities (6) from now on. By the
Farkas lemma, the remaining inequalities can be satisfied if
and only if there is no vector z such that
X
i,j
(γj rσ(i)) zσ(i)j > 0
X
i>j
γjzσ(i)j +
X
i<j
γj−1zσ(i)j−1 ≤ 0 ∀j (7)
X
j
zσ(i)j ≤ 0 ∀i (8)
zσ(i)j ≥ 0 ∀i, ∀j = i
zσ(i)i free ∀i
Note that a variable of the form zσ(i)i appears at most once
in a constraint of type (8), so such a variable can never be
positive. Also, zσ(i)1 = 0 for all i = 1 by constraint (7),
since such variables never appear with another of the form
zσ(i)i.
Now if we wish to raise $z_{\sigma(i)j}$ above 0 by one unit for $j \neq i$,
we must lower zσ(i)i by one unit because of the constraint
of type (8). Because γjrσ(i) ≤ γirσ(i) for i < j, raising
zσ(i)j with i < j while adjusting other variables to maintain
feasibility cannot make the objective $\sum_{i,j}(\gamma_j r_{\sigma(i)})\, z_{\sigma(i)j}$
positive. If this objective is positive, then this is due to some
component zσ(i)j with i > j being positive.
Now for the constraints of type (7), if i > j then zσ(i)j
appears with zσ(j−1)j−1 (for 1 < j < N). So to raise the
former variable $\gamma_j^{-1}$ units and maintain feasibility, we must (I) lower $z_{\sigma(i)i}$ by $\gamma_j^{-1}$ units, and (II) lower $z_{\sigma(j-1)j-1}$ by $\gamma_{j-1}^{-1}$ units. Hence if the following inequalities hold:
$$r_{\sigma(i)} \leq \left(\frac{\gamma_i}{\gamma_j}\right) r_{\sigma(i)} + r_{\sigma(j-1)} \qquad (9)$$
for 2 ≤ j ≤ N − 1 and i > j, raising some zσ(i)j with
i > j cannot make the objective positive, and there is no
z that satisfies all inequalities above. Conversely, if some
inequality (9) does not hold, the objective can be made
positive by raising the corresponding zσ(i)j and adjusting other
variables so that feasibility is just maintained. By a slight
reindexing, inequalities (9) yield the statement of the
lemma.
The RBB case is entirely analogous.
Lemma 3. Given an allocation σ, there exists a Nash
equilibrium profile of bids b leading to σ in a second-price RBB
slot auction if and only if
(1 − γi/γj+1) xσ(i) ≤ xσ(j)
for 1 ≤ j ≤ N − 2 and i ≥ j + 2.
Proof Sketch. The proof technique is the same as in the
previous lemma. The desired Nash equilibrium exists if and
only if a related set of inequalities can be satisfied; by the
Farkas lemma, this occurs if and only if an alternate set of
inequalities cannot be satisfied. The conditions that
determine whether the latter holds are given in the statement of
the lemma.
The two lemmas above immediately lead to the following
result.
Theorem 3. There always exists a complete information
Nash equilibrium in pure strategies in the second-price RBB
slot auction. There always exists an efficient complete
information Nash equilibrium in pure strategies in the
second-price RBR slot auction.
Proof. First consider RBB. Suppose agents are ranked
according to their true values. Since xσ(i) ≤ xσ(j) for i > j,
the system of inequalities in Lemma 3 is satisfied, and the
allocation is the result of some Nash equilibrium bid profile.
By the same type of argument but appealing to Lemma 2
for RBR, there exists a Nash equilibrium bid profile such
that bidders are ranked according to their true revenues.
By Theorem 1, this latter allocation is efficient.
This theorem establishes existence but not uniqueness.
Indeed we expect that in many cases there will be multiple
allocations (and hence equilibria) which satisfy the
conditions of Lemmas 2 and 3. In particular, not all equilibria of
a second-price RBR auction will be efficient. For instance,
according to Lemma 2, with two agents and two slots any
allocation can arise in an RBR equilibrium because no
constraints apply.
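To make the multiplicity concrete, here is a minimal Python sketch (our illustration, not code from the paper; all names are ours) that enumerates the allocations of a small instance satisfying the condition of Lemma 2. For the hypothetical numbers below, every ranking passes the test, echoing the two-agent observation above:

```python
from itertools import permutations

def rbr_equilibrium_allocations(gammas, revenues):
    """Enumerate allocations sigma (sigma[slot] = agent, 0-indexed) that
    satisfy the Lemma 2 condition
        (1 - gamma_i / gamma_{j+1}) * r_sigma(i) <= r_sigma(j)
    for 1 <= j <= N-2 and i >= j+2 (1-based indices in this comment)."""
    N = len(gammas)
    feasible = []
    for sigma in permutations(range(N)):
        ok = all(
            (1 - gammas[i] / gammas[j + 1]) * revenues[sigma[i]]
            <= revenues[sigma[j]] + 1e-12
            for j in range(N - 2) for i in range(j + 2, N))
        if ok:
            feasible.append(sigma)
    return feasible

# Hypothetical example: 3 slots, exponential decay with delta = 1.428.
delta = 1.428
gammas = [delta ** -i for i in range(3)]
revenues = [1.0, 0.8, 0.5]
print(rbr_equilibrium_allocations(gammas, revenues))  # all 6 permutations
```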
Theorems 2 and 3 taken together provide a possible
explanation for Yahoo!'s switch from first to second pricing.
We saw in Section 3.1 that this does not induce truthfulness
from bidders. With first pricing, there will always be some
bidder that feels compelled to adjust his bid. Second pricing
is more convenient because an equilibrium can be reached,
and this reduces the cost of bid management.
4.2 Efficiency
For a given allocation rule, we call the allocation that
would result if the bidders reported their values truthfully
the standard allocation. Hence in the standard RBB
allocation bidders are ranked by true values, and in the standard
RBR allocation they are ranked by true revenues.
According to Lemmas 2 and 3, a ranking that results from a Nash
equilibrium profile can only deviate from the standard
allocation by having agents with relatively similar values or
revenues switch places. That is, if ri > rj then with RBR
agent j can be ranked higher than i only if the ratio rj/ri is
sufficiently large; similarly for RBB. This suggests that the
value of an equilibrium allocation cannot differ too much
from the value obtained in the standard allocation, and the
following theorems confirm this.
For an allocation σ of slots to agents, we denote its total
value by f(σ) = Σi=1..N γi rσ(i). We denote by
g(σ) = Σi=1..N γi xσ(i) allocation σ's value when assuming all agents
have identical relevance, normalized to 1. Let

L = min over i = 1, . . . , N − 1 of min{ γi+1/γi , 1 − γi+2/γi+1 }
(where by default γN+1 = 0). Let ηx and ηr be the standard
allocations when using RBB and RBR, respectively.
Theorem 4. For an allocation σ that results from a
pure-strategy Nash equilibrium of a second-price RBR slot
auction, we have f(σ) ≥ L f(ηr).
Proof. We number the agents so that agent i has the
ith highest revenue, so r1 ≥ r2 ≥ . . . ≥ rN . Hence the
standard allocation has value f(ηr) = Σi=1..N γi ri. To prove
the theorem, we will make repeated use of the fact that
(Σk ak)/(Σk bk) ≥ mink (ak/bk) when the ak and bk are positive. Note
that according to Lemma 2, if agent i lies at least two slots
below slot j, then rσ(j) ≥ ri (1 − γj+2/γj+1).
It may be the case that for some slot i, we have σ(i) > i
and for slots k > i + 1 we have σ(k) > i. We then say that
slot i is inverted. Let S be the set of agents with indices at
least i + 1; there are N − i of these. If slot i is inverted, it is
occupied by some agent from S. Also all slots strictly lower
than i + 1 must be occupied by the remaining agents from
S, since σ(k) > i for k ≥ i + 2. The agent in slot i + 1 must
then have an index σ(i + 1) ≤ i (note this means slot i + 1
cannot be inverted). Now there are two cases. In the first
case we have σ(i) = i + 1. Then
(γi rσ(i) + γi+1 rσ(i+1)) / (γi ri + γi+1 ri+1)
≥ (γi+1 ri + γi ri+1) / (γi ri + γi+1 ri+1)
≥ min{ γi+1/γi , γi/γi+1 } = γi+1/γi.
In the second case we have σ(i) > i + 1. Then since all agents
in S except the one in slot i lie strictly below slot i + 1, and
the agent in slot i is not agent i + 1, it must be that agent
i + 1 is in a slot strictly below slot i + 1. This means that it is
at least two slots below the agent that actually occupies slot
i, and by Lemma 2 we then have rσ(i) ≥ ri+1 (1 − γi+2/γi+1).
Thus,

(γi rσ(i) + γi+1 rσ(i+1)) / (γi ri + γi+1 ri+1)
≥ (γi+1 ri + γi rσ(i)) / (γi ri + γi+1 ri+1)
≥ min{ γi+1/γi , 1 − γi+2/γi+1 }.
If slot i is not inverted, then on one hand we may have
σ(i) ≤ i, in which case rσ(i)/ri ≥ 1. On the other hand we
may have σ(i) > i but there is some agent with index j ≤ i
that lies at least two slots below slot i. Then by Lemma 2,
rσ(i) ≥ rj (1 − γi+2/γi+1) ≥ ri (1 − γi+2/γi+1).
We write i ∈ I if slot i is inverted, and i ∈ Ī if neither i nor
i − 1 is inverted. By our arguments above two consecutive
slots cannot be inverted, so we can write

f(σ)/f(ηr) = [ Σi∈I (γi rσ(i) + γi+1 rσ(i+1)) + Σi∈Ī γi rσ(i) ]
/ [ Σi∈I (γi ri + γi+1 ri+1) + Σi∈Ī γi ri ]
≥ min{ mini∈I (γi rσ(i) + γi+1 rσ(i+1)) / (γi ri + γi+1 ri+1) ,
mini∈Ī γi rσ(i) / (γi ri) } ≥ L

and this completes the proof.
and this completes the proof.
Note that for RBR, the standard value is also the
efficient value by Theorem 1. Also note that for an exponential
decay model, L = min{1/δ, 1 − 1/δ}. With δ = 1.428 (see
Section 2.1), the factor is L ≈ 1/3.34, so the total value in a
pure-strategy Nash equilibrium of a second-price RBR slot
auction is always within a factor of 3.34 of the efficient value
with such a discount.
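As a sanity check, the factor L is straightforward to evaluate numerically. A minimal Python sketch (ours, with hypothetical names) for the exponential decay model:

```python
def efficiency_factor(gammas):
    """L = min over i = 1..N-1 of min{gamma_{i+1}/gamma_i,
    1 - gamma_{i+2}/gamma_{i+1}}, with gamma_{N+1} = 0 by convention."""
    g = list(gammas) + [0.0]                     # append gamma_{N+1} = 0
    return min(min(g[i + 1] / g[i], 1.0 - g[i + 2] / g[i + 1])
               for i in range(len(gammas) - 1))

delta = 1.428                                    # exponential decay model
gammas = [delta ** -i for i in range(10)]
print(efficiency_factor(gammas), 1 / 3.34)       # ~0.2997 vs ~0.2994
```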
Again for RBB we have an analogous result.
Theorem 5. For an allocation σ that results from a
pure-strategy Nash equilibrium of a second-price RBB slot
auction, we have g(σ) ≥ L g(ηx).
Proof Sketch. Simply substitute bidder values for bidder
revenues in the proof of Theorem 4, and appeal to Lemma 3.
5. CONCLUSIONS
This paper analyzed stylized versions of the slot auction
designs currently used by Yahoo! and Google, namely rank
by bid (RBB) and rank by revenue (RBR), respectively.
We also considered first and second pricing rules together
with each of these allocation rules, since both have been used
historically. We first studied the short-run setting with
incomplete information, corresponding to the case where
agents have just approached the mechanism. Our
equilibrium analysis revealed that RBB has much weaker
informational requirements than RBR, because bidders need not
know any information about relevance (even their own) to
play the Bayes-Nash equilibrium. However, RBR leads to
an efficient allocation in equilibrium, whereas RBB does not.
We showed that for an arbitrary distribution over value and
relevance, no revenue ranking of RBB and RBR is possible.
We hope that the tools we used to establish these results
(revenue equivalence, the form of first-price equilibria, the
truthful payment rules) will help others wanting to pursue
further analyses of slot auctions.
We also studied the long-run case where agents have
experimented with their bids and each settled on one they
find optimal. We argued that a stable set of bids in this
setting can be modeled as a pure-strategy Nash equilibrium
of the static game of complete information. We showed that
no pure-strategy equilibrium exists with either RBB or RBR
using first pricing, but that with second pricing there always
exists such an equilibrium (in the case of RBR, an efficient
equilibrium). In general second pricing allows for multiple
pure-strategy equilibria, but we showed that the value of
such equilibria diverges by only a constant factor from the
value obtained if all agents bid truthfully (which in the case
of RBR is the efficient value).
6. FUTURE WORK
Introducing budget constraints into the model is a
natural next step for future work. The complication here lies
in the fact that budgets are often set for entire campaigns
rather than single keywords. Assuming that the optimal
choice of budget can be made independent of the choice
of bid for a specific keyword, it can be shown that it is a
dominant strategy to report this optimal budget with one's
bid. The problem is then to ascertain that bids and budgets
can indeed be optimized separately, or to find a plausible
model where deriving equilibrium bids and budgets together
is tractable.
Identifying a condition on the distribution over value and
relevance that actually does yield a revenue ranking of RBB
and RBR (such as correlation between value and relevance,
perhaps) would yield a more satisfactory characterization
of their relative revenue properties. Placing bounds on the
revenue obtained in a complete information equilibrium is
also a relevant question.
Because the incomplete information case is such a close
generalization of the most basic single-item auction model,
it would be interesting to see which standard results from
single-item auction theory (e.g. results with risk-averse
bidders, an endogenous number of bidders, asymmetries, etc.)
automatically generalize and which do not, to fully
understand the structural differences between single-item and slot
auctions.
Acknowledgements
David Pennock provided valuable guidance throughout this
project. I would also like to thank David Parkes for helpful
comments.
7. REFERENCES
[1] Z. Abrams. Revenue maximization when bidders have
budgets. In Proc. the ACM-SIAM Symposium on
Discrete Algorithms, 2006.
[2] T. Börgers, I. Cox, and M. Pesendorfer. Personal
Communication.
[3] C. Borgs, J. Chayes, N. Immorlica, M. Mahdian, and
A. Saberi. Multi-unit auctions with
budget-constrained bidders. In Proc. the Sixth ACM
Conference on Electronic Commerce, Vancouver, BC,
2005.
[4] F. Brandt and G. Weiß. Antisocial agents and Vickrey
auctions. In J.-J. C. Meyer and M. Tambe, editors,
Intelligent Agents VIII, volume 2333 of Lecture Notes
in Artificial Intelligence. Springer Verlag, 2001.
[5] B. Edelman and M. Ostrovsky. Strategic bidder
behavior in sponsored search auctions. In Workshop
on Sponsored Search Auctions, ACM Electronic
Commerce, 2005.
[6] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet
advertising and the generalized second price auction:
Selling billions of dollars worth of keywords. NBER
working paper 11765, November 2005.
[7] J. Feng, H. K. Bhargava, and D. M. Pennock.
Implementing sponsored search in web search engines:
Computational evaluation of alternative mechanisms.
INFORMS Journal on Computing, 2005. Forthcoming.
[8] J. Green and J.-J. Laffont. Characterization of
satisfactory mechanisms for the revelation of
preferences for public goods. Econometrica,
45:427-438, 1977.
[9] B. Holmstrom. Groves schemes on restricted domains.
Econometrica, 47(5):1137-1144, 1979.
[10] B. Kitts, P. Laxminarayan, B. LeBlanc, and
R. Meech. A formal analysis of search auctions
including predictions on click fraud and bidding
tactics. In Workshop on Sponsored Search Auctions,
ACM Electronic Commerce, 2005.
[11] V. Krishna. Auction Theory. Academic Press, 2002.
[12] D. Liu and J. Chen. Designing online auctions with
past performance information. Decision Support
Systems, 2005. Forthcoming.
[13] C. Meek, D. M. Chickering, and D. B. Wilson.
Stochastic and contingent payment auctions. In
Workshop on Sponsored Search Auctions, ACM
Electronic Commerce, 2005.
[14] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani.
Adwords and generalized on-line matching. In Proc.
46th IEEE Symposium on Foundations of Computer
Science, 2005.
[15] P. Milgrom. Putting Auction Theory to Work.
Cambridge University Press, 2004.
[16] P. Milgrom and C. Shannon. Monotone comparative
statics. Econometrica, 62(1):157-180, 1994.
[17] P. J. Reny. On the existence of pure and mixed
strategy Nash equilibria in discontinuous games.
Econometrica, 67(5):1029-1056, 1999.
[18] H. R. Varian. Position auctions. Working Paper,
February 2006.
[19] W. Vickrey. Counterspeculation, auctions and
competitive sealed tenders. Journal of Finance,
16:8-37, 1961.
Keywords: rank by bid; second pricing; incomplete information; web search engine; resurgent online advertising industry; second-price payment rule; alternative slot auction design; rank by revenue; pay per click; divergence of value; divergence of economic value; auction theory; combined market capitalization; multitude of equilibrium; sponsor search; sponsored search; slot allocation; auction-style mechanism; search engine; ad listing; equilibrium multitude

The Dynamics of Viral Marketing

ABSTRACT
We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.

1. INTRODUCTION
With consumers showing increasing resistance to
traditional forms of advertising such as TV or newspaper ads,
marketers have turned to alternate strategies, including
viral marketing. Viral marketing exploits existing social
networks by encouraging customers to share product
information with their friends. Previously, a few in-depth
studies have shown that social networks affect the adoption of
individual innovations and products (for a review see [15]
or [16]). But until recently it has been difficult to measure
how influential person-to-person recommendations actually
are over a wide range of products. We were able to directly
measure and model the effectiveness of recommendations by
studying one online retailer's incentivised viral marketing
program. The website gave discounts to customers
recommending any of its products to others, and then tracked the
resulting purchases and additional recommendations.
Although word of mouth can be a powerful factor
influencing purchasing decisions, it can be tricky for advertisers
to tap into. Some services used by individuals to
communicate are natural candidates for viral marketing, because the
product can be observed or advertised as part of the
communication. Email services such as Hotmail and Yahoo had
very fast adoption curves because every email sent through
them contained an advertisement for the service and because
they were free. Hotmail spent a mere $50,000 on traditional
marketing and still grew from zero to 12 million users in 18
months [7]. Google's Gmail captured a significant part of
market share in spite of the fact that the only way to sign
up for the service was through a referral.
Most products cannot be advertised in such a direct way.
At the same time the choice of products available to
consumers has increased manyfold thanks to online retailers
who can supply a much wider variety of products than
traditional brick-and-mortar stores. Not only is the variety
of products larger, but one observes a 'fat tail'
phenomenon, where a large fraction of purchases are of relatively
obscure items. On Amazon.com, somewhere between 20 to
40 percent of unit sales fall outside of its top 100,000 ranked
products [2]. Rhapsody, a streaming-music service, streams
more tracks outside than inside its top 10,000 tunes [1].
Effectively advertising these niche products using traditional
advertising approaches is impractical. Therefore using more
targeted marketing approaches is advantageous both to the
merchant and the consumer, who would benefit from
learning about new products.
The problem is partly addressed by the advent of
online product and merchant reviews, both at retail sites such
as EBay and Amazon, and specialized product comparison
sites such as Epinions and CNET. Quantitative marketing
techniques have been proposed [12], and the rating of
products and merchants has been shown to affect the likelihood
of an item being bought [13, 4]. Of further help to the
consumer are collaborative filtering recommendations of the
form 'people who bought x also bought y' feature [11].
These refinements help consumers discover new products
and receive more accurate evaluations, but they cannot
completely substitute personalized recommendations that one
receives from a friend or relative. It is human nature to be
more interested in what a friend buys than what an
anonymous person buys, to be more likely to trust their opinion,
and to be more influenced by their actions. Our friends are
also acquainted with our needs and tastes, and can make
appropriate recommendations. A Lucid Marketing survey
found that 68% of individuals consulted friends and relatives
before purchasing home electronics - more than the half who
used search engines to find product information [3].
Several studies have attempted to model just this kind
of network influence. Richardson and Domingos [14] used
Epinions" trusted reviewer network to construct an
algorithm to maximize viral marketing efficiency assuming that
individuals' probability of purchasing a product depends on
the opinions on the trusted peers in their network. Kempe,
Kleinberg and Tardos [8] evaluate the efficiency of several
algorithms for maximizing the size of influence set given
various models of adoption. While these models address the
question of maximizing the spread of influence in a network,
they are based on assumed rather than measured influence
effects.
In contrast, in our study we are able to directly observe
the effectiveness of person to person word of mouth
advertising for hundreds of thousands of products for the first time.
We find that most recommendation chains do not grow very
large, often terminating with the initial purchase of a
product. However, occasionally a product will propagate through
a very active recommendation network. We propose a simple
stochastic model that seems to explain the propagation of
recommendations. Moreover, the characteristics of
recommendation networks influence the purchase patterns of their
members. For example, individuals" likelihood of
purchasing a product initially increases as they receive additional
recommendations for it, but a saturation point is quickly
reached. Interestingly, as more recommendations are sent
between the same two individuals, the likelihood that they
will be heeded decreases. We also propose models to identify
products for which viral marketing is effective: We find that
the category and price of product plays a role, with
recommendations of expensive products of interest to small, well
connected communities resulting in a purchase more often.
We also observe patterns in the timing of recommendations
and purchases corresponding to times of day when people
are likely to be shopping online or reading email. We report
on these and other findings in the following sections.
2. THE RECOMMENDATION NETWORK
2.1 Dataset description
Our analysis focuses on the recommendation referral
program run by a large retailer. The program rules were as
follows. Each time a person purchases a book, music, or a
movie he or she is given the option of sending emails
recommending the item to friends. The first person to purchase
the same item through a referral link in the email gets a 10%
discount. When this happens the sender of the
recommendation receives a 10% credit on their purchase.
The recommendation dataset consists of 15,646,121
recommendations made among 3,943,084 distinct users. The
data was collected from June 5 2001 to May 16 2003. In
total, 548,523 products were recommended, 99% of them
belonging to 4 main product groups: Books, DVDs, Music
and Videos. In addition to recommendation data, we also
crawled the retailer's website to obtain product categories,
reviews and ratings for all products. Of the products in our
data set, 5813 (1%) were discontinued (the retailer no longer
provided any information about them).
Although the data gives us a detailed and accurate view
of recommendation dynamics, it does have its limitations.
The only indication of the success of a recommendation
is the observation of the recipient purchasing the product
through the same vendor. We have no way of knowing if
the person had decided instead to purchase elsewhere,
borrow, or otherwise obtain the product. The delivery of the
recommendation is also somewhat different from one person
simply telling another about a product they enjoy, possibly
in the context of a broader discussion of similar products.
The recommendation is received as a form email including
information about the discount program. Someone reading
the email might consider it spam, or at least deem it less
important than a recommendation given in the context of
a conversation. The recipient may also doubt whether the
friend is recommending the product because they think the
recipient might enjoy it, or are simply trying to get a
discount for themselves. Finally, because the recommendation
takes place before the recommender receives the product,
it might not be based on a direct observation of the
product. Nevertheless, we believe that these recommendation
networks are reflective of the nature of word of mouth
advertising, and give us key insights into the influence of social
networks on purchasing decisions.
2.2 Recommendation network statistics
For each recommendation, the dataset included the
product and product price, sender ID, receiver ID, the sent date,
and a buy-bit, indicating whether the recommendation
resulted in a purchase and discount. The sender and receiver
IDs were shadowed. We represent this data set as a
directed multi graph. The nodes represent customers, and
a directed edge contains all the information about the
recommendation. The edge (i, j, p, t) indicates that i
recommended product p to customer j at time t.
The typical process generating edges in the
recommendation network is as follows: a node i first buys a product p at
time t and then it recommends it to nodes j1, . . . , jn. The
j nodes can then buy the product and further recommend
it. The only way for a node to recommend a product is to
first buy it. Note that even if all nodes j buy a product,
only the edge to the node jk that first made the purchase
(within a week after the recommendation) will be marked
by a buy-bit. Because the buy-bit is set only for the first
person who acts on a recommendation, we identify additional
purchases by the presence of outgoing recommendations for
a person, since all recommendations must be preceded by a
purchase. We call this type of evidence of purchase a
buy-edge. Note that buy-edges provide only a lower bound on the
total number of purchases without discounts. It is possible
for a customer to not be the first to act on a
recommendation and also to not recommend the product to others.
Unfortunately, this was not recorded in the data set. We
consider, however, the buy-bits and buy-edges as proxies for
the total number of purchases through recommendations.
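To illustrate, here is a minimal Python sketch of this representation and of the buy-edge proxy (the tuple layout and all names are our own, not the retailer's data format):

```python
from collections import defaultdict

# One recommendation per tuple: (sender, receiver, product, time, buy_bit).
recs = [
    ("u1", "u2", "book42", 5, 1),   # u2 bought first and got the discount
    ("u1", "u3", "book42", 5, 0),
    ("u3", "u4", "book42", 9, 0),   # u3 recommends, so u3 must have bought
]

def buy_edges(recs):
    """A recommendation is a 'buy-edge' when its recipient later sent
    recommendations for the same product: every outgoing recommendation
    must be preceded by a purchase, even if no buy-bit was recorded."""
    senders = {(s, p) for (s, _, p, _, _) in recs}
    return [(s, r, p, t) for (s, r, p, t, bb) in recs
            if bb == 0 and (r, p) in senders]

graph = defaultdict(list)            # directed multigraph: sender -> edges
for s, r, p, t, bb in recs:
    graph[s].append((r, p, t, bb))

print(buy_edges(recs))               # [('u1', 'u3', 'book42', 5)]
```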
For each product group we took recommendations on all
products from the group and created a network.

[Figure 1 here, panels (a) Network growth and (b) Recommending by level.]
Figure 1: (a) The size of the largest connected component of customers over time. The inset shows the linear growth in the number of customers n over time. (b) The number of recommendations sent by a user, with each curve representing a different depth of the user in the recommendation chain. A power law exponent γ is fitted to all but the tail (the fitted exponents range from γ = 2.6 at level 0 to γ = 1.2 at levels 3 and 4).

Table 1
(first 7 columns) shows the sizes of various product group
recommendation networks with p being the total number
of products in the product group, n the total number of
nodes spanned by the group recommendation network and
e the number of edges (recommendations). The column eu
shows the number of unique edges - disregarding multiple
recommendations between the same source and recipient.
In terms of the number of different items, there are by far
the most music CDs, followed by books and videos. There
is a surprisingly small number of DVD titles. On the other
hand, DVDs account for more than half of all recommendations in
the dataset. The DVD network is also the most dense,
having about 10 recommendations per node, while books and
music have about 2 recommendations per node and videos
have only a bit more than 1 recommendation per node.
Music recommendations reached about the same number
of people as DVDs but used more than 5 times fewer
recommendations to achieve the same coverage of the nodes. Book
recommendations reached by far the most people - 2.8
million. Notice that all networks have a very small number
of unique edges. For books, videos and music the number
of unique edges is smaller than the number of nodes - this
suggests that the networks are highly disconnected [5].
Figure 1(a) shows the fraction of nodes in largest weakly
connected component over time. Notice the component is
very small. Even if we compose a network using all the
recommendations in the dataset, the largest connected
component contains less than 2.5% (100,420) of the nodes, and the
second largest component has only 600 nodes. Still, some
smaller communities, numbering in the tens of thousands
of purchasers of DVDs in categories such as westerns,
classics and Japanese animated films (anime), had connected
components spanning about 20% of their members.
The insert in figure 1(a) shows the growth of the
customer base over time. Surprisingly it was linear, adding on
average 165,000 new users each month, which is an
indication that the service itself was not spreading epidemically.
Further evidence of non-viral spread is provided by the
relatively high percentage (94%) of users who made their first
recommendation without having previously received one.
Returning to table 1: given the total number of
recommendations e and purchases (bb + be) influenced by
recommendations we can estimate how many recommendations need
to be independently sent over the network to induce a new
purchase. Using this metric books have the most influential
recommendations followed by DVDs and music. For books
one out of 69 recommendations resulted in a purchase. For
DVDs it increases to 108 recommendations per purchase and
further increases to 136 for music and 203 for video.
Even with these simple counts we can make the first few
observations. It seems that some people got quite
heavily involved in the recommendation program, and that they
tended to recommend a large number of products to the
same set of friends (since the number of unique edges is so
small). This shows that people tend to buy more DVDs and
also like to recommend them to their friends, while they
seem to be more conservative with books. One possible
reason is that a book is a bigger time investment than a DVD:
one usually needs several days to read a book, while a DVD
can be viewed in a single evening.
One external factor which may be affecting the
recommendation patterns for DVDs is the existence of referral websites
(www.dvdtalk.com). On these websites, people who want to
buy a DVD and get a discount would ask for
recommendations. This way there would be recommendations made
between people who don't really know each other but rather
have an economic incentive to cooperate. We were not able
to find similar referral sharing sites for books or CDs.
2.3 Forward recommendations
Not all people who make a purchase also decide to give
recommendations. So we estimate what fraction of people
that purchase also decide to recommend forward. To obtain
this information we can only use the nodes with purchases
that resulted in a discount.
The last 3 columns of table 1 show that only about a
third of the people that purchase also recommend the
product forward. The ratio of forward recommendations is much
higher for DVDs than for other kinds of products. Videos
also have a higher ratio of forward recommendations, while
books have the lowest. This shows that people are most keen
on recommending movies, while more conservative when
recommending books and music.
Figure 1(b) shows the cumulative out-degree distribution,
that is the number of people who sent out at least kp
recommendations, for a product. It shows that the deeper an
individual is in the cascade, if they choose to make
recommendations, they tend to recommend to a greater number
of people on average (the distribution has a higher
variance). This effect is probably due to only very heavily
recommended products producing large enough cascades to
reach a certain depth. We also observe that the probability
of an individual making a recommendation at all (which can
only occur if they make a purchase), declines after an initial
increase as one gets deeper into the cascade.
2.4 Identifying cascades
As customers continue forwarding recommendations, they
contribute to the formation of cascades. In order to
identify cascades, i.e. the causal propagation of
recommendations, we track successful recommendations as they influence
purchases and further recommendations. We define a
recommendation to be successful if it reached a node before its first
purchase. We consider only the first purchase of an item,
because there are many cases when a person made multiple
Group p n e eu bb be Purchases Forward Percent
Book 103,161 2,863,977 5,741,611 2,097,809 65,344 17,769 65,391 15,769 24.2
DVD 19,829 805,285 8,180,393 962,341 17,232 58,189 16,459 7,336 44.6
Music 393,598 794,148 1,443,847 585,738 7,837 2,739 7,843 1,824 23.3
Video 26,131 239,583 280,270 160,683 909 467 909 250 27.6
Total 542,719 3,943,084 15,646,121 3,153,676 91,322 79,164 90,602 25,179 27.8
Table 1: Product group recommendation statistics. p: number of products, n: number of nodes, e: number of
edges (recommendations), eu: number of unique edges, bb: number of buy bits, be: number of buy edges. Last
3 columns of the table: Fraction of people that purchase and also recommend forward. Purchases: number
of nodes that purchased. Forward: nodes that purchased and then also recommended the product.
(a) Medical book (b) Japanese graphic novel
Figure 2: Examples of two product recommendation networks: (a) First aid study guide First Aid for the
USMLE Step, (b) Japanese graphic novel (manga) Oh My Goddess!: Mara Strikes Back.
[Figure 3 here: log-log distributions with power-law fits 3.4e6 x^-2.30 (R² = 0.96) for panel (a) Recommendations and 4.1e6 x^-2.49 (R² = 0.99) for panel (b) Purchases.]
Figure 3: Distribution of the number of
recommendations and number of purchases made by a node.
purchases of the same product, and in between those
purchases she may have received new recommendations. In this
case one cannot conclude that recommendations following
the first purchase influenced the later purchases.
Each cascade is a network consisting of customers (nodes)
who purchased the same product as a result of each other's
recommendations (edges). We delete late recommendations
- all incoming recommendations that happened after the
first purchase of the product. This way we make the
network time increasing or causal - for each node all incoming
edges (recommendations) occurred before all outgoing edges.
Now each connected component represents a time obeying
propagation of recommendations.
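A minimal Python sketch of this cascade-extraction step (ours, assuming the networkx library is available; all names are hypothetical):

```python
import networkx as nx

def extract_cascades(recs, first_purchase):
    """Cascade extraction for one product. recs: iterable of
    (sender, receiver, time) recommendations; first_purchase: dict
    mapping a node to the time of its first purchase of the product.
    'Late' recommendations (received after the recipient's first
    purchase) are dropped, so every kept incoming edge precedes the
    receiver's outgoing edges, and each weakly connected component of
    the resulting graph is one causal cascade."""
    g = nx.DiGraph()
    for sender, receiver, t in recs:
        if receiver in first_purchase and t <= first_purchase[receiver]:
            g.add_edge(sender, receiver, time=t)
    return [g.subgraph(c).copy() for c in nx.weakly_connected_components(g)]

# Tiny hypothetical example: u1 -> u2 -> u3, plus a late edge that is dropped.
cascades = extract_cascades(
    [("u1", "u2", 1), ("u2", "u3", 2), ("u4", "u2", 9)],
    {"u2": 1.5, "u3": 3})
print([sorted(c.nodes()) for c in cascades])   # [['u1', 'u2', 'u3']]
```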
Figure 2 shows two typical product recommendation
networks: (a) a medical study guide and (b) a Japanese graphic
novel. Throughout the dataset we observe very similar
patters. Most product recommendation networks consist of a
large number of small disconnected components where we
do not observe cascades. Then there is usually a small
number of relatively small components with recommendations
successfully propagating.
This observation is reflected in the heavy tailed
distribution of cascade sizes (see figure 4), having a power-law
exponent close to 1 for DVDs in particular.
We also notice bursts of recommendations (figure 2(b)).
Some nodes recommend to many friends, forming a star like
pattern. Figure 3 shows the distribution of the
recommendations and purchases made by a single node in the
recommendation network. Notice the power-law distributions and
long flat tails. The most active person made 83,729
recommendations and purchased 4,416 different items. Finally,
we also sometimes observe 'collisions', where nodes receive
recommendations from two or more sources. A detailed
enumeration and analysis of observed topological cascade
patterns for this dataset is made in [10].
2.5 The recommendation propagation model
A simple model can help explain how the wide variance we
observe in the number of recommendations made by
individuals can lead to power-laws in cascade sizes (figure 4). The
model assumes that each recipient of a recommendation will
forward it to others if its value exceeds an arbitrary
threshold that the individual sets for herself. Since exceeding this
value is a probabilistic event, let's call pt the probability
that at time step t the recommendation exceeds the
threshold.

[Figure 4 here: size distribution of cascades (size of cascade vs. count) on log-log axes; bold lines are power-law fits: (a) Book, 1.8e6 x^-4.98 (R² = 0.99); (b) DVD, 3.4e3 x^-1.56 (R² = 0.83); (c) Music, 4.9e5 x^-6.27 (R² = 0.97); (d) Video, 7.8e4 x^-5.87 (R² = 0.97).]

In that case the number of recommendations Nt+1 at
time (t + 1) is given in terms of the number of
recommendations at an earlier time by
Nt+1 = ptNt (1)
where the probability pt is defined over the unit interval.
Notice that, because of the probabilistic nature of the
threshold being exceeded, one can only compute the final
distribution of recommendation chain lengths, which we now
proceed to do.
Subtracting from both sides of this equation the term Nt
and dividing by it we obtain

(Nt+1 − Nt)/Nt = pt − 1 (2)
Summing both sides from the initial time to some very
large time T and assuming that for long times the numerator
is smaller than the denominator (a reasonable assumption)
we get
∫ dN/N = Σt pt (3)
The left hand integral is just ln(N), and the right hand
side is a sum of random variables, which in the limit of a very
large number of uncorrelated recommendation steps is normally
distributed (central limit theorem).
This means that the logarithm of the number of messages
is normally distributed, or equivalently, that the number of
messages passed is log-normally distributed. In other words
the probability density for N is given by
P(N) = (1 / (N √(2πσ²))) exp(−(ln(N) − μ)² / (2σ²)) (4)
which, for large variances describes a behavior whereby
the typical number of recommendations is small (the mode
of the distribution) but there are unlikely events of large
chains of recommendations which are also observable.
Furthermore, for large variances, the lognormal
distribution can behave like a power law for a range of values. In
order to see this, take the logarithms on both sides of the
equation (equivalent to a log-log plot) and one obtains
ln(P(N)) = −ln(N) − ln(√(2πσ²)) − (ln(N) − μ)² / (2σ²) (5)
So, for large σ, the last term of the right hand side goes
to zero, and since the second term is a constant one
obtains a power law behavior with exponent value of
minus one. There are other models which produce power-law
distributions of cascade sizes, but we present ours for its
simplicity, since it does not depend on network topology [6]
or critical thresholds in the probability of a recommendation
being accepted [18].
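The lognormal limit is easy to reproduce numerically. Below is a minimal Monte Carlo sketch of the model (our illustration, not the authors' code); purely for concreteness it draws pt uniformly from (0, 1]:

```python
import math
import random
import statistics

def simulate_log_sizes(n_runs=50_000, horizon=30, n0=10.0):
    """Monte Carlo for N_{t+1} = p_t * N_t: ln(N_T) = ln(N_0) + sum_t ln(p_t)
    is a sum of i.i.d. random terms, hence approximately normal (central
    limit theorem), so N_T is approximately log-normally distributed."""
    logs = []
    for _ in range(n_runs):
        log_n = math.log(n0)
        for _ in range(horizon):
            log_n += math.log(1.0 - random.random())  # p_t uniform on (0, 1]
        logs.append(log_n)
    return logs

logs = simulate_log_sizes()
# For p_t ~ U(0, 1]: E[ln p] = -1 and Var[ln p] = 1, so we expect
# mean ~ ln(10) - 30 = -27.7 and standard deviation ~ sqrt(30) = 5.48.
print(statistics.mean(logs), statistics.pstdev(logs))
```

For a large variance of ln(N), the resulting lognormal mimics a power law over a wide range of sizes, consistent with the fits in figure 4.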
3. SUCCESS OF RECOMMENDATIONS
So far we only looked into the aggregate statistics of the
recommendation network. Next, we ask questions about
the effectiveness of recommendations in the
recommendation network itself. First, we analyze the probability of
purchasing as one gets more and more recommendations. Next,
we measure recommendation effectiveness as two people
exchange more and more recommendations. Lastly, we
observe the recommendation network from the perspective of
the sender of the recommendation. Does a node that makes
more recommendations also influence more purchases?
3.1 Probability of buying versus number of
incoming recommendations
First, we examine how the probability of purchasing changes
as one gets more and more recommendations. One would
expect that a person is more likely to buy a product if she gets
more recommendations. On the other hand one would also
think that there is a saturation point - if a person hasn't
bought a product after a number of recommendations, they
are not likely to change their minds after receiving even more
of them. So, how many recommendations are too many?
Figure 5 shows the probability of purchasing a product
as a function of the number of incoming recommendations
on the product. As we move to higher numbers of incoming
recommendations, the number of observations drops rapidly.
For example, there were 5 million cases with 1 incoming
recommendation on a book, and only 58 cases where a
person got 20 incoming recommendations on a particular book.
The maximum was 30 incoming recommendations. For these
reasons we cut-off the plot when the number of observations
becomes too small and the error bars too large.
Figure 5(a) shows that, overall, book recommendations
are rarely followed. Even more surprisingly, as more and
more recommendations are received, their success decreases.
We observe a peak in probability of buying at 2 incoming
recommendations and then a slow drop.
For DVDs (figure 5(b)) we observe a saturation around 10
incoming recommendations. This means that after a person
gets 10 recommendations on a particular DVD, they
become immune to them - their probability of buying does
not increase anymore. The number of observations is 2.5
million at 1 incoming recommendation and 100 at 60
incoming recommendations. The maximal number of received
recommendations is 172 (and that person did not buy).
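To illustrate how such a curve can be tabulated, here is a minimal Python sketch (ours; the per-(person, product) observation layout is hypothetical):

```python
from collections import Counter

def buy_prob_by_incoming(observations):
    """observations: one (k, bought) pair per (person, product), where k
    is the number of incoming recommendations the person received for
    the product and bought is 1 if a purchase followed, else 0.
    Returns the empirical probability of buying for each k."""
    seen, bought = Counter(), Counter()
    for k, did_buy in observations:
        seen[k] += 1
        bought[k] += did_buy
    return {k: bought[k] / seen[k] for k in sorted(seen)}

# Hypothetical toy data.
print(buy_prob_by_incoming([(1, 0), (1, 0), (1, 1), (2, 1), (2, 0)]))
# -> {1: 0.3333..., 2: 0.5}
```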
[Figure 5 here: probability of buying vs. number of incoming recommendations, for (a) Books and (b) DVDs.]
Figure 5: Probability of buying a book (DVD) given a number of incoming recommendations.
[Figure 6 here: probability of buying vs. total number of previously exchanged recommendations, for (a) Books and (b) DVDs.]
Figure 6: The effectiveness of recommendations
with the total number of exchanged
recommendations.
3.2 Success of subsequent recommendations
Next, we analyze how the effectiveness of
recommendations changes as two persons exchange more and more
recommendations. A large number of exchanged
recommendations can be a sign of trust and influence, but a sender of too
many recommendations can be perceived as a spammer. A
person who recommends only a few products will have her
friends' attention, but one who floods her friends with all
sorts of recommendations will start to lose her influence.
We measure the effectiveness of recommendations as a
function of the total number of previously exchanged
recommendations between the two nodes. We construct the
experiment in the following way. For every recommendation
r on some product p between nodes u and v, we first
determine how many recommendations were exchanged between
u and v before recommendation r. Then we check whether v,
the recipient of recommendation, purchased p after
recommendation r arrived. For the experiment we consider only
node pairs (u, v), where there were at least a total of 10
recommendations sent from u to v. We perform the experiment
using only recommendations from the same product group.
Figure 6 shows the probability of buying as a function of
the total number of exchanged recommendations between
two persons up to that point. For books we observe that
the effectiveness of recommendation remains about constant
up to 3 exchanged recommendations. As the number of
exchanged recommendations increases, the probability of
buying starts to decrease to about half of the original value and
then levels off. For DVDs we observe an immediate and
consistent drop. This experiment shows that recommendations
start to lose effect after more than two or three are passed
between two people. We performed the experiment also for
video and music, but the number of observations was too
low and the measurements were noisy.
3.3 Success of outgoing recommendations
In previous sections we examined the data from the
viewpoint of the receiver of the recommendation. Now we look
from the viewpoint of the sender. The two interesting
questions are: how does the probability of getting a 10% credit
change with the number of outgoing recommendations; and
given a number of outgoing recommendations, how many
purchases will they influence?
One would expect that recommendations would be the
most effective when recommended to the right subset of
friends. If one is very selective and recommends to too few
friends, then the chances of success are slim. On the other
hand, recommending to everyone and spamming them with
recommendations may have limited returns as well.
The top row of figure 7 shows how the average number
of purchases changes with the number of outgoing
recommendations. For books, music, and videos the number of
purchases soon saturates: it grows fast up to around 10
outgoing recommendations and then the trend either slows or
starts to drop. DVDs exhibit different behavior, with the
expected number of purchases increasing throughout. But
if we plot the probability of getting a 10% credit as a
function of the number of outgoing recommendations, as in the
bottom row of figure 7, we see that the success of DVD
recommendations saturates as well, while books, videos and
music have qualitatively similar trends. The difference in
the curves for DVD recommendations points to the presence
of collisions in the dense DVD network, which has 10
recommendations per node and around 400 per product - an
order of magnitude more than other product groups. This
means that many different individuals are recommending to
the same person, and after that person makes a purchase,
even though all of them made a 'successful recommendation'
by our definition, only one of them receives a credit.

[Figure 7 here, panels (a) Books, (b) DVD, (c) Music, (d) Video. Top row: number of resulting purchases given a number of outgoing recommendations. Bottom row: probability of getting a credit given a number of outgoing recommendations.]

[Figure 8 here, panels (a) Books and (b) DVD: the proportion of purchases as a function of the lag, in days (with lags over 7 days pooled), between the recommendation and the actual purchase. All purchases are used.]
4. TIMING OF RECOMMENDATIONS AND
PURCHASES
The recommendation referral program encourages people
to purchase as soon as possible after they get a
recommendation, since this maximizes the probability of getting a
discount. We study the time lag between the recommendation
and the purchase of different product groups, effectively how
long it takes a person to both receive a recommendation,
consider it, and act on it.
We present the histograms of the thinking time, i.e. the
difference between the time of purchase and the time the last
recommendation was received for the product prior to the
purchase (figure 8). We use a bin size of 1 day. Around
35%-40% of book and DVD purchases occurred within a day after
the last recommendation was received. For DVDs 16%
purchases occur more than a week after last recommendation,
while this drops to 10% for books. In contrast, if we consider
the lag between the purchase and the first recommendation,
only 23% of DVD purchases are made within a day, while the
proportion stays the same for books. This reflects a greater
likelihood for a person to receive multiple recommendations
for a DVD than for a book. At the same time, DVD
recommenders tend to send out many more recommendations,
only one of which can result in a discount. Individuals then
often miss their chance of a discount, which is reflected in
the high ratio (78%) of recommended DVD purchases that
did not get a discount (see table 1, columns bb and be). In
contrast, for books, only 21% of purchases through
recommendations did not receive a discount.
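A minimal Python sketch of the binning behind these 'thinking time' histograms (ours; times are in days and all names are hypothetical):

```python
from collections import Counter

def lag_histogram(purchases, last_rec_time, bin_days=1):
    """purchases: (person, product, purchase_time) triples; last_rec_time:
    dict (person, product) -> time of the last recommendation received
    before that purchase. Lags are binned into 1-day bins, with
    everything past a week pooled into a single '>7' bin."""
    hist = Counter()
    for person, product, t_buy in purchases:
        lag = t_buy - last_rec_time[(person, product)]
        day = int(lag // bin_days) + 1     # day 1, day 2, ...
        hist[min(day, 8)] += 1             # bin 8 stands for '>7 days'
    total = sum(hist.values())
    return {b: c / total for b, c in sorted(hist.items())}

print(lag_histogram([("a", "p", 2.5)], {("a", "p"): 2.2}))  # -> {1: 1.0}
```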
We also measure the variation in intensity by time of day
for three different activities in the recommendation system:
recommendations (figure 9(a)), all purchases (figure 9(b)),
and finally just the purchases which resulted in a discount
(figure 9(c)). Each is given as a total count by hour of day.
The recommendations and purchases follow the same
pattern. The only small difference is that purchases reach a
sharper peak in the afternoon (after 3pm Pacific Time, 6pm
Eastern time). The purchases that resulted in a discount
look like a negative image of the first two figures. This means
that most of discounted purchases happened in the morning
when the traffic (number of purchases/recommendations) on
the retailer's website was low. This makes a lot of sense since
most of the recommendations happened during the day, and
if the person wanted to get the discount by being the first
one to purchase, she had the highest chances when the traffic
on the website was the lowest.
5. RECOMMENDATION EFFECTIVENESS
BY BOOK CATEGORY
Social networks are a product of the contexts that bring
people together. Some contexts result in social ties that
are more effective at conducting an action. For example,
in small world experiments, where participants attempt to
reach a target individual through their chain of
acquaintances, profession trumped geography, which in turn was
more useful in locating a target than attributes such as
religion or hobbies [9, 17]. In the context of product
recommendations, we can ask whether a recommendation for a work
of fiction, which may be made by any friend or neighbor, is
more or less influential than a recommendation for a
technical book, which may be made by a colleague at work or
school.

[Figure 9 here: counts by hour of day, with panel (a) showing the distribution of recommendations over the day, (b) all purchases, and (c) only purchases that resulted in getting a discount.]
Table 2 shows recommendation trends for all top level
book categories by subject. An analysis of other product
types can be found in the extended version of the paper. For
clarity, we group the results by 4 different category types:
fiction, personal/leisure, professional/technical, and
nonfiction/other. Fiction encompasses categories such as Sci-Fi
and Romance, as well as children"s and young adult books.
Personal/Leisure encompasses everything from gardening,
photography and cooking to health and religion.
First, we compare the relative number of
recommendations to reviews posted on the site (column cav/rp1 of
table 2). Surprisingly, we find that the number of people
making personal recommendations was only a few times greater
than the number of people posting a public review on the
website. We observe that fiction books have relatively few
recommendations compared to the number of reviews, while
professional and technical books have more
recommendations than reviews. This could reflect several factors. One is
that people feel more confident reviewing fiction than
technical books. Another is that they hesitate to recommend a
work of fiction before reading it themselves, since the
recommendation must be made at the point of purchase. Yet
another explanation is that the median price of a work of
fiction is lower than that of a technical book. This means
that the discount received for successfully recommending a
mystery novel or thriller is lower and hence people have less
incentive to send recommendations.
Next, we measure the per category efficacy of
recommendations by observing the ratio of the number of purchases
occurring within a week following a recommendation to the
number of recommenders for each book subject category
(column b of table 2). On average, only 2% of the
recommenders of a book received a discount because their
recommendation was accepted, and another 1% made a
recommendation that resulted in a purchase, but not a discount.
We observe marked differences in the response to
recommendation for different categories of books. Fiction in general
is not very effectively recommended, with only around 2%
of recommenders succeeding. The efficacy was a bit higher
(around 3%) for non-fiction books dealing with personal and
leisure pursuits, but is significantly higher in the professional
and technical category. Medical books have nearly double
the average rate of recommendation acceptance. This could
be in part attributed to the higher median price of medical
books and technical books in general. As we will see in
Section 6, a higher product price increases the chance that a
recommendation will be accepted.
Recommendations are also more likely to be accepted for
certain religious categories: 4.3% for Christian living and
theology and 4.8% for Bibles. In contrast, books not tied
to organized religions, such as ones on the subject of new
age (2.5%) and occult (2.2%) spirituality, have lower
recommendation effectiveness. These results raise the interesting
possibility that individuals have greater influence over one
another in an organized context, for example through a
professional contact or a religious one. There are exceptions of
course. For example, Japanese anime DVDs have a strong
following in the US, and this is reflected in their frequency
and success in recommendations. Another example is that of
gardening. In general, recommendations for books relating
to gardening have only a modest chance of being accepted,
which agrees with the individual prerogative that
accompanies this hobby. At the same time, orchid cultivation can be
a highly organized and social activity, with frequent 'shows'
and online communities devoted entirely to orchids. Perhaps
because of this, the rate of acceptance of orchid book
recommendations is twice as high as those for books on vegetable
or tomato growing.
6. MODELING THE RECOMMENDATION
SUCCESS
We have examined the properties of the recommendation
network in relation to viral marketing, but one question still
remains: what determines the product's viral marketing
success? We present a model which characterizes product
categories for which recommendations are more likely to be
accepted. We use a regression of the following product
attributes to correlate them with recommendation success:
• r: number of recommendations
• ns: number of senders of recommendations
• nr: number of recipients of recommendations
• p: price of the product
• v: number of reviews of the product
• t: average product rating
category np n cc rp1 vav cav/rp1 pm b ∗ 100
Books general 370,230 2,860,714 1.87 5.28 4.32 1.41 14.95 3.12
Fiction
Children's Books 46,451 390,283 2.82 6.44 4.52 1.12 8.76 2.06**
Literature & Fiction 41,682 502,179 3.06 13.09 4.30 0.57 11.87 2.82*
Mystery and Thrillers 10,734 123,392 6.03 20.14 4.08 0.36 9.60 2.40**
Science Fiction & Fantasy 10,008 175,168 6.17 19.90 4.15 0.64 10.39 2.34**
Romance 6,317 60,902 5.65 12.81 4.17 0.52 6.99 1.78**
Teens 5,857 81,260 5.72 20.52 4.36 0.41 9.56 1.94**
Comics & Graphic Novels 3,565 46,564 11.70 4.76 4.36 2.03 10.47 2.30*
Horror 2,773 48,321 9.35 21.26 4.16 0.44 9.60 1.81**
Personal/Leisure
Religion and Spirituality 43,423 441,263 1.89 3.87 4.45 1.73 9.99 3.13
Health Mind and Body 33,751 572,704 1.54 4.34 4.41 2.39 13.96 3.04
History 28,458 283,406 2.74 4.34 4.30 1.27 18.00 2.84
Home and Garden 19,024 180,009 2.91 1.78 4.31 3.48 15.37 2.26**
Entertainment 18,724 258,142 3.65 3.48 4.29 2.26 13.97 2.66*
Arts and Photography 17,153 179,074 3.49 1.56 4.42 3.85 20.95 2.87
Travel 12,670 113,939 3.91 2.74 4.26 1.87 13.27 2.39**
Sports 10,183 120,103 1.74 3.36 4.34 1.99 13.97 2.26**
Parenting and Families 8,324 182,792 0.73 4.71 4.42 2.57 11.87 2.81
Cooking Food and Wine 7,655 146,522 3.02 3.14 4.45 3.49 13.97 2.38*
Outdoors & Nature 6,413 59,764 2.23 1.93 4.42 2.50 15.00 3.05
Professional/Technical
Professional & Technical 41,794 459,889 1.72 1.91 4.30 3.22 32.50 4.54**
Business and Investing 29,002 476,542 1.55 3.61 4.22 2.94 20.99 3.62**
Science 25,697 271,391 2.64 2.41 4.30 2.42 28.00 3.90**
Computers and Internet 18,941 375,712 2.22 4.51 3.98 3.10 34.95 3.61**
Medicine 16,047 175,520 1.08 1.41 4.40 4.19 39.95 5.68**
Engineering 10,312 107,255 1.30 1.43 4.14 3.85 59.95 4.10**
Law 5,176 53,182 2.64 1.89 4.25 2.67 24.95 3.66*
Nonfiction-other
Nonfiction 55,868 560,552 2.03 3.13 4.29 1.89 18.95 3.28**
Reference 26,834 371,959 1.94 2.49 4.19 3.04 17.47 3.21
Biographies and Memoirs 18,233 277,356 2.80 7.65 4.34 0.90 14.00 2.96
Table 2: Statistics by book category. np: number of products in category; n: number of customers; cc: percentage
of customers in the largest connected component; rp1: av. # reviews in 2001-2003; rp2: av. # reviews in the
first 6 months of 2005; vav: average star rating; cav: average number of people recommending product; cav/rp1:
ratio of recommenders to reviewers; pm: median price; b: ratio of the number of purchases resulting from a
recommendation to the number of recommenders. The symbol ** denotes statistical significance at the 0.01
level, * at the 0.05 level.
From the original set of half a million products, we
compute a success rate s for the 48,218 products that had at
least one purchase made through a recommendation and for
which a price was given. In section 5 we defined
recommendation success rate s as the ratio of the total number
purchases made through recommendations and the number
of senders of the recommendations. We decided to use this
kind of normalization, rather than normalizing by the total
number of recommendations sent, in order not to penalize
communities where a few individuals send out many
recommendations (figure 2(b)). Since the variables follow a heavy
tailed distribution, we use the following model:
s = exp(Σi βi log(xi) + ε)

where xi are the product attributes (as described above) and
ε is a random error term.
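A minimal sketch of this log-log least-squares fit using numpy on synthetic data (ours, not the authors' code):

```python
import numpy as np

def fit_success_model(X, s):
    """Least-squares fit of ln(s) = sum_i beta_i ln(x_i) + const + error.
    X: (n_products, n_attributes) array of positive attribute values
    (r, ns, nr, p, v, t); s: success rates. Returns the coefficients,
    with the intercept last. A sketch of the regression, not the
    authors' actual code."""
    logX = np.column_stack([np.log(X), np.ones(len(X))])
    beta, *_ = np.linalg.lstsq(logX, np.log(s), rcond=None)
    return beta

rng = np.random.default_rng(0)                     # synthetic demo data
X = rng.uniform(1.0, 100.0, size=(500, 6))
true_beta = np.array([0.4, -0.8, -1.3, 0.1, -0.01, -0.03])
s = np.exp(np.log(X) @ true_beta - 0.9 + rng.normal(0, 0.1, 500))
print(fit_success_model(X, s).round(2))            # recovers ~true_beta, -0.9
```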
We fit the model using least squares and obtain the
coefficients βi shown on table 3. With the exception of the
average rating, they are all significant. The only two
attributes with a positive coefficient are the number of
recommendations and price. This shows that more expensive
and more recommended products have a higher success rate.
The number of senders and receivers have large negative
coefficients, showing that successfully recommended products
are more likely to be not so widely popular. They have
relatively many recommendations with a small number of
senders and receivers, which suggests a very dense
recommendation network where lots of recommendations were
exchanged between a small community of people.
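The fit itself is ordinary least squares on the log-transformed variables. Below is a minimal numpy sketch of such a log-linear fit; the function name and the synthetic data are ours and purely illustrative, not the paper's dataset.

```python
import numpy as np

def fit_log_linear(X, s):
    """Fit ln(s) = const + sum_i beta_i * ln(x_i) by ordinary least squares.

    X: (n_products, n_attributes) array of positive attribute values
       (e.g., #recommendations, #senders, #receivers, price, ...).
    s: (n_products,) array of positive success rates.
    Returns [const, beta_1, ..., beta_k].
    """
    design = np.column_stack([np.ones(len(s)), np.log(X)])  # intercept column
    beta, *_ = np.linalg.lstsq(design, np.log(s), rcond=None)
    return beta

# Illustrative use on random positive data (not the actual dataset):
rng = np.random.default_rng(0)
X = rng.lognormal(size=(1000, 4))
s = np.exp(-0.9 + 0.4 * np.log(X[:, 0]) - 0.8 * np.log(X[:, 1])
           + rng.normal(scale=0.1, size=1000))
print(fit_log_linear(X, s))  # recovers approximately [-0.9, 0.4, -0.8, 0, 0]
```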
These insights could be of use to marketers: personal
recommendations are most effective in small, densely connected
communities enjoying expensive products.
Variable Coefficient βi
const -0.940 (0.025)**
r 0.426 (0.013)**
ns -0.782 (0.004)**
nr -1.307 (0.015)**
p 0.128 (0.004)**
v -0.011 (0.002)**
t -0.027 (0.014)*
R^2 0.74
Table 3: Regression using the log of the
recommendation success rate, ln(s), as the dependent variable.
For each coefficient we provide the standard error
and the statistical significance level (**:0.01, *:0.1).
7. DISCUSSION AND CONCLUSION
Although the retailer may have hoped to boost its
revenues through viral marketing, the additional purchases that
resulted from recommendations are just a drop in the bucket
of sales that occur through the website. Nevertheless, we
were able to obtain a number of interesting insights into how
viral marketing works that challenge common assumptions
made in epidemic and rumor propagation modeling.
Firstly, it is frequently assumed in epidemic models that
individuals have equal probability of being infected every
time they interact. Contrary to this we observe that the
probability of infection decreases with repeated interaction.
Marketers should take heed that providing excessive
incentives for customers to recommend products could backfire
by weakening the credibility of the very same links they are
trying to take advantage of.
Traditional epidemic and innovation diffusion models also
often assume that individuals either have a constant
probability of ‘converting" every time they interact with an
infected individual or that they convert once the fraction of
their contacts who are infected exceeds a threshold. In both
cases, an increasing number of infected contacts results in
an increased likelihood of infection. Instead, we find that
the probability of purchasing a product increases with the
number of recommendations received, but quickly saturates
to a constant and relatively low probability. This means
individuals are often impervious to the recommendations of
their friends, and resist buying items that they do not want.
In network-based epidemic models, extremely highly
connected individuals play a very important role. For example,
in needle sharing and sexual contact networks these nodes
become the super-spreaders by infecting a large number
of people. But these models assume that a high degree node
has as much of a probability of infecting each of its neighbors
as a low degree node does. In contrast, we find that there
are limits to how influential high degree nodes are in the
recommendation network. As a person sends out more and
more recommendations past a certain number for a product,
the success per recommendation declines. This would seem
to indicate that individuals have influence over a few of their
friends, but not everybody they know.
We also presented a simple stochastic model that allows
for the presence of relatively large cascades for a few
products, but reflects well the general tendency of
recommendation chains to terminate after just a short number of steps.
We saw that the characteristics of product reviews and
effectiveness of recommendations vary by category and price,
with more successful recommendations being made on
technical or religious books, which presumably are placed in the
social context of a school, workplace or place of worship.
Finally, we presented a model which shows that smaller
and more tightly knit groups tend to be more conducive to
viral marketing. So despite the relative ineffectiveness of the
viral marketing program in general, we found a number of
new insights which we hope will have general applicability
to marketing strategies and to future models of viral
information spread.
| directed multi graph;product;consumer;recommender system;viral market;recommendation network;viral marketing;advertisement;pricing category;probability;e-commerce;purchase;stochastic model;connected individual |
train_J-44 | Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering | Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. In particular, we formulate three roles (scouts, promoters, and connectors) that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.). These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community. | 1. INTRODUCTION
Recommender systems have become integral to e-commerce,
providing technology that suggests products to a visitor
based on previous purchases or rating history.
Collaborative filtering, a common form of recommendation, predicts
a user"s rating for an item by combining (other) ratings of
that user with other users" ratings. Significant research has
been conducted in implementing fast and accurate
collaborative filtering algorithms [2, 7], designing interfaces for
presenting recommendations to users [1], and studying the
robustness of these algorithms [8]. However, with the
exception of a few studies on the influence of users [10], little
attention has been paid to unraveling the inner workings
of a recommender in terms of the individual ratings and the
roles they play in making (good) recommendations. Such an
understanding will give an important handle to monitoring
and managing a recommender system, to engineer
mechanisms to sustain the recommender, and thereby ensure its
continued success.
Our motivation here is to disaggregate global recommender
performance metrics into contributions made by each
individual rating, allowing us to characterize the many roles
played by ratings in nearest-neighbor collaborative filtering.
We identify three possible roles: (scouts) to connect the user
into the system to receive recommendations, (promoters) to
connect an item into the system to be recommended, and
(connectors) to connect ratings of these two kinds. Viewing
ratings in this way, we can define the contribution of a
rating in each role, both in terms of allowing recommendations
to occur, and in terms of influence on the quality of
recommendations. In turn, this capability helps support scenarios
such as:
1. Situating users in better neighborhoods: A user's
ratings may inadvertently connect the user to a
neighborhood for which the user's tastes may not be a perfect
match. Identifying ratings responsible for such bad
recommendations and suggesting new items to rate can
help situate the user in a better neighborhood.
2. Targeting items: Recommender systems suffer from
lack of user participation, especially in cold-start
scenarios [13] involving newly arrived items. Identifying
users who can be encouraged to rate specific items
helps ensure coverage of the recommender system.
3. Monitoring the evolution of the recommender system
and its stakeholders: A recommender system is
constantly under change: growing with new users and
items, shrinking with users leaving the system, items
becoming irrelevant, and parts of the system under
attack. Tracking the roles of a rating and its evolution
over time provides many insights into the health of the
system, and how it could be managed and improved.
These include being able to identify rating subspaces
that do not contribute (or contribute negatively) to
system performance, and could be removed; to
enumerate users who are in danger of leaving, or have left
the system; and to assess the susceptibility of the
system to attacks such as shilling [5].
As we show, the characterization of rating roles presented
here provides broad primitives to manage a recommender
system and its community. The rest of the paper is
organized as follows. Background on nearest-neighbor
collaborative filtering and algorithm evaluation is discussed in
Section 2. Section 3 defines and discusses the roles of a rating,
and Section 4 defines measures of the contribution of a
rating in each of these roles. In Section 5, we illustrate the use
of these roles to address the goals outlined above.
2. BACKGROUND
2.1 Algorithms
Nearest-neighbor collaborative filtering algorithms either
use neighborhoods of users or neighborhoods of items to
compute a prediction. An algorithm of the first kind is
called user-based, and one of the second kind is called
item-based [12]. In both families of algorithms, neighborhoods are
formed by first computing the similarity between all pairs
of users (for user-based) or items (for item-based).
Predictions are then computed by aggregating ratings, which in a
user-based algorithm involves aggregating the ratings of the
target item by the user"s neighbors and, in an item-based
algorithm, involves aggregating the user"s ratings of items
that are neighbors of the target item. Algorithms within
these families differ in the definition of similarity,
formation of neighborhoods, and the computation of predictions.
We consider a user-based algorithm based on that defined
for GroupLens [11] with variations from Herlocker et al. [2],
and an item-based algorithm similar to that of Sarwar et
al. [12].
The algorithm used by Resnick et al. [11] defines the
similarity of two users u and v as the Pearson correlation of
their common ratings:
sim(u, v) = \frac{\sum_{i \in I_u \cap I_v} (r_{u,i} - \bar{r}_u)(r_{v,i} - \bar{r}_v)}{\sqrt{\sum_{i \in I_u} (r_{u,i} - \bar{r}_u)^2} \, \sqrt{\sum_{i \in I_v} (r_{v,i} - \bar{r}_v)^2}},
where Iu is the set of items rated by user u, ru,i is user u's
rating for item i, and ¯ru is the average rating of user u
(similarly for v). Similarity computed in this manner is typically
scaled by a factor proportional to the number of common
ratings, to reduce the chance of making a recommendation
made on weak connections:
sim'(u, v) = \frac{\max(|I_u \cap I_v|, \gamma)}{\gamma} \cdot sim(u, v),
where γ ≈ 5 is a constant used as a lower limit in scaling [2].
These new similarities are then used to define a static
neighborhood Nu for each user u consisting of the top K users
most similar to user u. A prediction for user u and item
i is computed by a weighted average of the ratings by the
neighbors
p_{u,i} = \bar{r}_u + \frac{\sum_{v \in V} sim'(u, v)(r_{v,i} - \bar{r}_v)}{\sum_{v \in V} sim'(u, v)} \quad (1)
where V = Nu ∩ Ui is the set of users most similar to u who
have rated i.
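For concreteness, the user-based computation can be sketched in a few lines of Python. This is only an illustrative implementation under assumed data structures (ratings as a dict mapping user -> {item: rating}); the top-K neighborhood formation is omitted and the function names are ours, not the paper's. The overlap scaling follows the max(|Iu ∩ Iv|, γ)/γ factor as printed in the text.

```python
import numpy as np

def sim_user(R, u, v, gamma=5):
    """Pearson correlation of common ratings, scaled by rating overlap."""
    common = set(R[u]) & set(R[v])
    if not common:
        return 0.0
    mu, mv = np.mean(list(R[u].values())), np.mean(list(R[v].values()))
    num = sum((R[u][i] - mu) * (R[v][i] - mv) for i in common)
    du = np.sqrt(sum((R[u][i] - mu) ** 2 for i in R[u]))
    dv = np.sqrt(sum((R[v][i] - mv) ** 2 for i in R[v]))
    if du == 0 or dv == 0:
        return 0.0
    return max(len(common), gamma) / gamma * (num / (du * dv))

def predict_user_based(R, Nu, u, i):
    """Eq. (1): weighted average of neighbors' deviations from their means."""
    mu = np.mean(list(R[u].values()))
    V = [v for v in Nu if i in R[v]]                 # neighbors who rated i
    weights = {v: sim_user(R, u, v) for v in V}
    den = sum(weights.values())
    if den == 0:
        return mu
    num = sum(w * (R[v][i] - np.mean(list(R[v].values())))
              for v, w in weights.items())
    return mu + num / den
```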
The item-based algorithm we use is the one defined by
Sarwar et al. [12]. In this algorithm, similarity is defined as
the adjusted cosine measure
sim(i, j) = \frac{\sum_{u \in U_i \cap U_j} (r_{u,i} - \bar{r}_u)(r_{u,j} - \bar{r}_u)}{\sqrt{\sum_{u \in U_i} (r_{u,i} - \bar{r}_u)^2} \, \sqrt{\sum_{u \in U_j} (r_{u,j} - \bar{r}_u)^2}} \quad (2)
where Ui is the set of users who have rated item i. As for
the user-based algorithm, the similarity weights are adjusted
proportionally to the number of users that have rated the
items in common
sim'(i, j) = \frac{\max(|U_i \cap U_j|, \gamma)}{\gamma} \cdot sim(i, j). \quad (3)
Given the similarities, the neighborhood Ni of an item i is
defined as the top K most similar items for that item. A
prediction for user u and item i is computed as the weighted
average
p_{u,i} = \bar{r}_i + \frac{\sum_{j \in J} sim'(i, j)(r_{u,j} - \bar{r}_j)}{\sum_{j \in J} sim'(i, j)} \quad (4)
where J = Ni ∩ Iu is the set of items rated by u that are
most similar to i.
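The item-based variant is symmetric. A compact sketch under an assumed item-first dict layout follows; the top-K neighbor selection and caching of means are again omitted for brevity, and the names are illustrative.

```python
def sim_item(Ri, user_means, i, j, gamma=5):
    """Adjusted cosine of items i and j (Eqs. 2-3); Ri: item -> {user: rating}."""
    coraters = set(Ri[i]) & set(Ri[j])
    num = sum((Ri[i][u] - user_means[u]) * (Ri[j][u] - user_means[u])
              for u in coraters)
    di = sum((Ri[i][u] - user_means[u]) ** 2 for u in Ri[i]) ** 0.5
    dj = sum((Ri[j][u] - user_means[u]) ** 2 for u in Ri[j]) ** 0.5
    if di == 0 or dj == 0:
        return 0.0
    return max(len(coraters), gamma) / gamma * (num / (di * dj))

def predict_item_based(Ri, user_means, Ni, u, i):
    """Eq. (4): weighted deviation from item means over neighbors of i."""
    mean = lambda k: sum(Ri[k].values()) / len(Ri[k])   # item mean rating
    J = [j for j in Ni if u in Ri[j]]                   # neighbors of i rated by u
    den = sum(sim_item(Ri, user_means, i, j) for j in J)
    if den == 0:
        return mean(i)
    num = sum(sim_item(Ri, user_means, i, j) * (Ri[j][u] - mean(j)) for j in J)
    return mean(i) + num / den
```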
2.2 Evaluation
Recommender algorithms have typically been evaluated
using measures of predictive accuracy and coverage [3].
Studies on recommender algorithms, notably Herlocker et al. [2]
and Sarwar et al. [12], typically compute predictive accuracy
by dividing a set of ratings into training and test sets, and
compute the prediction for an item in the test set using the
ratings in the training set. A standard measure of predictive
accuracy is mean absolute error (MAE), which for a test set
T = {(u, i)} is defined as,
MAE = \frac{\sum_{(u,i) \in T} |p_{u,i} - r_{u,i}|}{|T|}. \quad (5)
Coverage has a number of definitions, but generally refers
to the proportion of items that can be predicted by the
algorithm [3].
A practical issue with predictive accuracy is that users
typically are presented with recommendation lists, and not
individual numeric predictions. Recommendation lists are
lists of items in decreasing order of prediction (sometimes
stated in terms of star-ratings), and so predictive accuracy
may not be reflective of the accuracy of the list. So, instead
we can measure recommendation or rank accuracy, which
indicates the extent to which the list is in the correct
order. Herlocker et al. [3] discuss a number of rank accuracy
measures, which range from Kendall"s Tau to measures that
consider the fact that users tend to only look at a prefix of
the list [5]. Kendall"s Tau measures the number of inversions
when comparing ordered pairs in the true user ordering of
251
Figure 1: Ratings in a simple movie recommender (users Jim, Tom, and Jeff; movies My Cousin Vinny, The Matrix, Star Wars, and The Mask).
items and the recommended order, and is defined as
τ = \frac{C - D}{\sqrt{(C + D + TR)(C + D + TP)}} \quad (6)
where C is the number of pairs that the system predicts in
the correct order, D the number of pairs the system
predicts in the wrong order, TR the number of pairs in the true
ordering that have the same ratings, and TP is the
number of pairs in the predicted ordering that have the same
ratings [3]. A shortcoming of the Tau metric is that it is
oblivious to the position in the ordered list where the
inversion occurs [3]. For instance, an inversion toward the end
of the list is given the same weight as one in the beginning.
One solution is to consider inversions only in the top few
items in the recommended list or to weight inversions based
on their position in the list.
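Equation (6) translates directly into code. The sketch below assumes the true ratings and the predictions are both available as dicts keyed by item; it is our illustration, not the evaluation harness used in the paper.

```python
from itertools import combinations

def kendall_tau(true_ratings, predictions):
    """Kendall's Tau (Eq. 6): (C - D) / sqrt((C+D+TR)(C+D+TP))."""
    C = D = TR = TP = 0
    items = [i for i in true_ratings if i in predictions]
    for i, j in combinations(items, 2):
        dt = true_ratings[i] - true_ratings[j]   # true-order difference
        dp = predictions[i] - predictions[j]     # predicted-order difference
        if dt == 0:
            TR += 1                              # tie in the true ordering
        if dp == 0:
            TP += 1                              # tie in the predicted ordering
        if dt == 0 or dp == 0:
            continue
        if (dt > 0) == (dp > 0):
            C += 1                               # concordant pair
        else:
            D += 1                               # discordant pair (inversion)
    denom = ((C + D + TR) * (C + D + TP)) ** 0.5
    return 0.0 if denom == 0 else (C - D) / denom
```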
3. ROLES OF A RATING
Our basic observation is that each rating plays a
different role in each prediction in which it is used. Consider a
simplified movie recommender system with three users Jim,
Jeff, and Tom and their ratings for a few movies, as shown
in Fig. 1. (For this initial discussion we will not consider the
rating values involved.) The recommender predicts whether
Tom will like The Mask using the other already available
ratings. How this is done depends on the algorithm:
1. An item-based collaborative filtering algorithm
constructs a neighborhood of movies around The Mask by
using the ratings of users who rated The Mask and
other movies similarly (e.g., Jim's ratings of The Matrix
and The Mask; and Jeff's ratings of Star Wars and The
Mask). Tom's ratings of those movies are then used to
make a prediction for The Mask.
2. A user-based collaborative filtering algorithm would
construct a neighborhood around Tom by tracking other
users whose rating behaviors are similar to Tom's (e.g.,
Tom and Jeff have rated Star Wars; Tom and Jim have
rated The Matrix). The prediction of Tom's rating for
The Mask is then based on the ratings of Jeff and Jim.
Although the nearest-neighbor algorithms aggregate the
ratings to form neighborhoods used to compute predictions, we
can disaggregate the similarities to view the computation of
a prediction as simultaneously following parallel paths of
ratings. So, irrespective of the collaborative filtering
algorithm used, we can visualize the prediction of Tom's rating
of The Mask as walking through a sequence of ratings. In
Figure 2: Ratings used to predict The Mask for Tom (paths (p1, p2, p3) and (q1, q2, q3)).
Figure 3: Prediction of The Mask for Tom in which a rating is used more than once (paths (p1, p2, p3), (q1, q2, q3), and (p1, r2, r3)).
this example, two paths were used for this prediction as
depicted in Fig. 2: (p1, p2, p3) and (q1, q2, q3). Note that these
paths are undirected, and are all of length 3. Only the order
in which the ratings are traversed is different between the
item-based algorithm (e.g., (p3, p2, p1), (q3, q2, q1)) and the
user-based algorithm (e.g., (p1, p2, p3), (q1, q2, q3)). A rating
can be part of many paths for a single prediction as shown
in Fig. 3, where three paths are used for a prediction, two
of which follow p1: (p1, p2, p3) and (p1, r2, r3).
Predictions in collaborative filtering algorithms may
involve thousands of such walks in parallel, each playing a part
in influencing the predicted value. Each prediction path
consists of three ratings, playing roles that we call scouts,
promoters, and connectors. To illustrate these roles, consider
the path (p1, p2, p3) in Fig. 2 used to make a prediction of
The Mask for Tom:
1. The rating p1 (Tom → Star Wars) makes a connection
from Tom to other ratings that can be used to predict
Tom"s rating for The Mask. This rating serves as a
scout in the bipartite graph of ratings to find a path
that leads to The Mask.
2. The rating p2 (Jeff → Star Wars) helps the system
recommend The Mask to Tom by connecting the scout to
the promoter.
3. The rating p3 (Jeff → The Mask) allows connections to
The Mask, and, therefore, promotes this movie to Tom.
Formally, given a prediction pu,a of a target item a for user
u, a scout for pu,a is a rating ru,i such that there exists a
user v with ratings rv,a and rv,i for some item i; a promoter
for pu,a is a rating rv,a for some user v, such that there exist
ratings rv,i and ru,i for an item i; and a connector for pu,a
Figure 4: Scouts, promoters, and connectors (users Jim, Tom, Jeff, and Jerry; movies My Cousin Vinny, The Matrix, Star Wars, The Mask, and Jurassic Park).
is a rating rv,i by some user v and item i, such that there
exist ratings ru,i and rv,a. The scouts, connectors, and
promoters for the prediction of Tom's rating of The Mask
are p1 and q1, p2 and q2, and p3 and q3 (respectively). Each
of these roles has a value in the recommender to the user,
the user's neighborhood, and the system in terms of allowing
recommendations to be made.
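The three roles become concrete if the length-3 prediction paths are enumerated directly. The following sketch (our own, over an assumed dict-of-dicts rating store with illustrative rating values) reproduces the two paths of Fig. 2 for Tom and The Mask.

```python
def prediction_paths(R, u, a):
    """Enumerate rating paths [scout, connector, promoter] usable for
    predicting item a for user u. R: dict user -> {item: rating}.
    Yields triples of (user, item) rating identifiers."""
    for i in R[u]:                       # candidate scout: rating r_{u,i}
        if i == a:
            continue
        for v in R:                      # candidate neighbor v
            if v != u and i in R[v] and a in R[v]:
                yield ((u, i),           # scout
                       (v, i),           # connector
                       (v, a))           # promoter

# Example (Fig. 2): paths behind Tom's prediction for "The Mask".
R = {"Tom": {"Star Wars": 4, "The Matrix": 5},
     "Jeff": {"Star Wars": 4, "The Mask": 3},
     "Jim": {"The Matrix": 5, "The Mask": 4}}
for path in prediction_paths(R, "Tom", "The Mask"):
    print(path)   # the (p1, p2, p3) and (q1, q2, q3) paths
```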
3.1 Roles in Detail
Ratings that act as scouts tend to help the recommender
system suggest more movies to the user, though the extent
to which this is true depends on the rating behavior of other
users. For example, in Fig. 4 the rating Tom → Star Wars
helps the system recommend only The Mask to him, while
Tom → The Matrix helps recommend The Mask, Jurassic
Park, and My Cousin Vinny. Tom makes a connection to
Jim who is a prolific user of the system, by rating The
Matrix. However, this does not make The Matrix the best
movie to rate for everyone. For example, Jim is benefited
equally by both The Mask and The Matrix, which allow the
system to recommend Star Wars to him. His rating of The
Mask is the best scout for Jeff, and Jerry's only scout is his
rating of Star Wars. This suggests that good scouts allow
a user to build similarity with prolific users, and thereby
ensure they get more from the system.
While scouts represent beneficial ratings from the
perspective of a user, promoters are their duals, and are of benefit
to items. In Fig. 4, My Cousin Vinny benefits from Jim's
rating, since it allows recommendations to Jeff and Tom.
The Mask is not so dependent on just one rating, since the
ratings by Jim and Jeff help it. On the other hand, Jerry's
rating of Star Wars does not help promote it to any other
user. We conclude that a good promoter connects an item to
a broader neighborhood of other items, and thereby ensures
that it is recommended to more users.
Connectors serve a crucial role in a recommender system
that is not as obvious. The movies My Cousin Vinny and
Jurassic Park have the highest recommendation potential
since they can be recommended to Jeff, Jerry and Tom based
on the linkage structure illustrated in Fig. 4. Beside the
fact that Jim rated these movies, these recommendations are
possible only because of the ratings Jim → The Matrix and
Jim → The Mask, which are the best connectors. A
connector improves the system"s ability to make recommendations
with no explicit gain for the user.
Note that every rating can be of varied benefit in each of
these roles. The rating Jim → My Cousin Vinny is a poor
scout and connector, but is a very good promoter. The
rating Jim → The Mask is a reasonably good scout, a very
good connector, and a good promoter. Finally, the rating
Jerry → Star Wars is a very good scout, but is of no value
as a connector or promoter. As illustrated here, a rating
can have different value in each of the three roles in terms of
whether a recommendation can be made or not. We could
measure this value by simply counting the number of times
a rating is used in each role, which alone would be
helpful in characterizing the behavior of a system. But we can
also measure the contribution of each rating to the quality
of recommendations or health of the system. Since every
prediction is a combined effort of several recommendation
paths, we are interested in discerning the influence of each
rating (and, hence, each path) in the system towards the
system"s overall error. We can understand the dynamics of
the system at a finer granularity by tracking the influence
of a rating according to the role played. The next section
describes the approach to measuring the values of a rating
in each role.
4. CONTRIBUTIONS OF RATINGS
As we"ve seen, a rating may play different roles in different
predictions and, in each prediction, contribute to the quality
of a prediction in different ways. Our approach can use any
numeric measure of a property of system health, and assigns
credit (or blame) to each rating proportional to its influence
in the prediction. By tracking the role of each rating in a
prediction, we can accumulate the credit for a rating in each
of the three roles, and also track the evolution of the roles
of rating over time in the system.
This section defines the methodology for computing the
contribution of ratings by first defining the influence of a
rating, and then instantiating the approach for predictive
accuracy, and then rank accuracy. We also demonstrate how
these contributions can be aggregated to study the
neighborhood of ratings involved in computing a user's
recommendations. Note that although our general formulation for rating
influence is algorithm independent, due to space
considerations, we present the approach for only item-based
collaborative filtering. The definition for user-based algorithms is
similar and will be presented in an expanded version of this
paper.
4.1 Influence of Ratings
Recall that an item-based approach to collaborative
filtering relies on building item neighborhoods using the
similarity of ratings by the same user. As described earlier,
similarity is defined by the adjusted cosine measure (Equations (2)
and (3)). A set of the top K neighbors is maintained for all
items for space and computational efficiency. A prediction
of item i for a user u is computed as the weighted deviation
from the item"s mean rating as shown in Equation (4). The
list of recommendations for a user is then the list of items
sorted in descending order of their predicted values.
We first define impact(a, i, j), the impact a user a has in
determining the similarity between two items i and j. This
is the change in the similarity between i and j when a's
rating is removed, and is defined as
impact(a, i, j) = \frac{|sim'(i, j) - sim'_{\bar{a}}(i, j)|}{\sum_{w \in C_{ij}} |sim'(i, j) - sim'_{\bar{w}}(i, j)|}
where Cij = {u ∈ U | ∃ ru,i, ru,j ∈ R(u)} is the set of coraters
of items i and j (users who rate both i and j), R(u) is the set
of ratings provided by user u, and sim¯a(i, j) is the similarity
of i and j when the ratings of user a are removed
sim_{\bar{a}}(i, j) = \frac{\sum_{u \in U \setminus \{a\}} (r_{u,i} - \bar{r}_u)(r_{u,j} - \bar{r}_u)}{\sqrt{\sum_{u \in U \setminus \{a\}} (r_{u,i} - \bar{r}_u)^2} \, \sqrt{\sum_{u \in U \setminus \{a\}} (r_{u,j} - \bar{r}_u)^2}},
and adjusted for the number of raters
sim'_{\bar{a}}(i, j) = \frac{\max(|U_i \cap U_j| - 1, \gamma)}{\gamma} \cdot sim_{\bar{a}}(i, j).
If all coraters of i and j rate them identically, we define the
impact as
impact(a, i, j) = \frac{1}{|C_{ij}|},
since \sum_{w \in C_{ij}} |sim'(i, j) - sim'_{\bar{w}}(i, j)| = 0.
The influence of each path (u, j, v, i) = [ru,j, rv,j, rv,i] in
the prediction of pu,i is given by
influence(u, j, v, i) = \frac{sim'(i, j)}{\sum_{l \in N_i \cap I_u} sim'(i, l)} \cdot impact(v, i, j).
It follows that the sum of influences over all such paths, for
a given set of endpoints, is 1.
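In code, both quantities reduce to simple ratios once the leave-one-out similarities are available. The sketch below assumes those similarities have been precomputed and passed in; the argument names are ours.

```python
def impact(a, sim_ij, sim_excl, coraters):
    """Impact of user a on sim'(i, j). sim_ij: sim'(i, j); sim_excl[w]:
    sim'(i, j) recomputed with w's ratings removed; coraters: the users
    who rated both i and j."""
    total = sum(abs(sim_ij - sim_excl[w]) for w in coraters)
    if total == 0:                 # all coraters rate i and j identically
        return 1.0 / len(coraters)
    return abs(sim_ij - sim_excl[a]) / total

def influence(sim_prime, Ni, Iu, i, j, impact_vij):
    """Influence of path [r_{u,j}, r_{v,j}, r_{v,i}] on p_{u,i}: item j's
    share of the similarity mass, times the impact of v on sim'(i, j)."""
    denom = sum(sim_prime[(i, l)] for l in set(Ni) & set(Iu))
    return sim_prime[(i, j)] / denom * impact_vij
```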
4.2 Role Values for Predictive Accuracy
The value of a rating in each role is computed from the
influence depending on the evaluation measure employed.
Here we illustrate the approach using predictive accuracy as
the evaluation metric.
In general, the goodness of a prediction decides whether
the ratings involved must be credited or discredited for their
role. For predictive accuracy, the error in prediction e =
|pu,i − ru,i| is mapped to a comfort level using a mapping
function M(e). Anecdotal evidence suggests that users are
unable to discern errors less than 1.0 (for a rating scale of 1
to 5) [4], and so an error less than 1.0 is considered
acceptable, but anything larger is not. We hence define M(e) as
(1 − e) binned to an appropriate value in [−1, −0.5, 0.5, 1].
For each prediction pu,i, M(e) is attributed to all the paths
that assisted the computation of pu,i, proportional to their
influences. This tribute, M(e)∗influence(u, j, v, i), is in turn
inherited by each of the ratings in the path [ru,j, rv,j, rv,i],
with the credit/blame accumulating to the respective roles
of ru,j as a scout, rv,j as a connector, and rv,i as a
promoter. In other words, the scout value SF(ru,j), the
connector value CF(rv,j) and the promoter value PF(rv,i) are
all incremented by the tribute amount. Over a large
number of predictions, scouts that have repeatedly resulted in
big error rates have a big negative scout value, and vice
versa (similarly with the other roles). Every rating is thus
summarized by its triple [SF, CF, PF].
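Accumulating the role values is then a matter of distributing the comfort level over paths. In the sketch below, the exact bin boundaries of M(e) are our assumption (the text only fixes the target set [−1, −0.5, 0.5, 1]), and `roles` maps each rating to its running [SF, CF, PF] triple.

```python
def comfort(e):
    """M(e): (1 - e) binned into {-1, -0.5, 0.5, 1}; errors under 1.0 earn
    positive credit. The bin boundaries here are our assumption."""
    m = 1.0 - e
    if m > 0:
        return 1.0 if m >= 0.5 else 0.5
    return -1.0 if m <= -0.5 else -0.5

def credit_prediction(paths, error, roles):
    """Distribute M(e) over one prediction's paths, proportional to
    influence. paths: [(scout, connector, promoter, influence), ...];
    roles: dict rating -> [SF, CF, PF] accumulators."""
    for scout, conn, prom, infl in paths:
        tribute = comfort(error) * infl
        roles[scout][0] += tribute    # scout value SF
        roles[conn][1] += tribute     # connector value CF
        roles[prom][2] += tribute     # promoter value PF
```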
4.3 Role Values for Rank Accuracy
We now define the computation of the contribution of
ratings to observed rank accuracy. For this computation, we
must know the user"s preference order for a set of items for
which predictions can be computed. We assume that we
have a test set of the users" ratings of the items presented
in the recommendation list. For every pair of items rated
by a user in the test data, we check whether the predicted
order is concordant with his preference. We say a pair (i, j)
is concordant (with error ε) whenever one of the following
holds:
• if (ru,i < ru,j) then (pu,i − pu,j < ε);
• if (ru,i > ru,j) then (pu,i − pu,j > ε); or
• if (ru,i = ru,j) then (|pu,i − pu,j| ≤ ε).
Similarly, a pair (i, j) is discordant (with error ε) if it is not
concordant. Our experiments described below use an error
tolerance of ε = 0.1.
All paths involved in the prediction of the two items in
a concordant pair are credited, and the paths involved in
a discordant pair are discredited. The credit assigned to a
pair of items (i, j) in the recommendation list for user u is
computed as
c(i, j) = \begin{cases} \frac{t}{T} \cdot \frac{1}{C+D} & \text{if } (i, j) \text{ are concordant} \\ -\frac{t}{T} \cdot \frac{1}{C+D} & \text{if } (i, j) \text{ are discordant} \end{cases} \quad (7)
where t is the number of items in the user"s test set whose
ratings could be predicted, T is the number of items rated
by user u in the test set, C is the number of concordances
and D is the number of discordances. The credit c is then
divided among all paths responsible for predicting pu,i and
pu,j proportional to their influences. We again add the role
values obtained from all the experiments to form a triple
[SF, CF, PF] for each rating.
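The concordance test and Eq. (7) combine into one small routine; the argument names are ours, and the per-user counts t, T, C, D are assumed to be computed beforehand.

```python
def pair_credit(r_i, r_j, p_i, p_j, t, T, C, D, eps=0.1):
    """Credit (Eq. 7) for an item pair with true ratings r_i, r_j and
    predictions p_i, p_j, under error tolerance eps."""
    if r_i < r_j:
        concordant = (p_i - p_j) < eps
    elif r_i > r_j:
        concordant = (p_i - p_j) > eps
    else:
        concordant = abs(p_i - p_j) <= eps
    return (1 if concordant else -1) * (t / T) / (C + D)
```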
4.4 Aggregating rating roles
After calculating the role values for individual ratings, we
can also use these values to study neighborhoods and the
system. Here we consider how we can use the role values
to characterize the health of a neighborhood. Consider the
list of top recommendations presented to a user at a
specific point in time. The collaborative filtering algorithm
traversed many paths in his neighborhood through his scouts
and other connectors and promoters to make these
recommendations. We call these ratings the recommender
neighborhood of the user. The user implicitly chooses this
neighborhood of ratings through the items he rates. Apart from
the collaborative filtering algorithm, the health of this
neighborhood completely influences a user"s satisfaction with the
system. We can characterize a user"s recommender
neighborhood by aggregating the individual role values of the
ratings involved, weighted by the influence of individual ratings
in determining his recommended list. Different sections of
the user"s neighborhood wield varied influence on his
recommendation list. For instance, ratings reachable through
highly rated items have a bigger say in the recommended
items.
Our aim is to study the system and classify users with
respect to their positioning in a healthy or unhealthy
neighborhood. A user can have a good set of scouts, but may
be exposed to a neighborhood with bad connectors and
promoters. He can have a good neighborhood, but his bad
scouts may ensure the neighborhood"s potential is rendered
useless. We expect that users with good scouts and good
neighborhoods will be most satisfied with the system in the
future.
A user"s neighborhood is characterized by a triple that
represents the weighted sum of the role values of individual
ratings involved in making recommendations. Consider a
user u and his ordered list of recommendations L. An item i
in the list is weighted inversely, as K(i), depending on its
position in the list. In our studies we use K(i) = \sqrt{position(i)}.
Several paths of ratings [ru,j, rv,j, rv,i] are involved in
predicting pu,i which ultimately decides its position in L, each
with influence(u, j, v, i).
The recommender neighborhood of a user u is
characterized by the triple, [SFN(u), CFN(u), PFN(u)] where
SFN(u) = \sum_{i \in L} \left( \frac{\sum_{[r_{u,j}, r_{v,j}, r_{v,i}]} SF(r_{u,j}) \cdot influence(u, j, v, i)}{K(i)} \right)
CFN(u) and PFN(u) are defined similarly. This triple
estimates the quality of u"s recommendations based on the past
track record of the ratings involved in their respective roles.
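The aggregation can be sketched as a single pass over the recommendation list. Here `paths_for` and the role-value maps are assumed to come from the bookkeeping above; CFN and PFN fall out of the same loop.

```python
import math

def neighborhood_triple(L, paths_for, SF, CF, PF):
    """Aggregate [SFN(u), CFN(u), PFN(u)] for a user's ranked list L.

    paths_for(i) yields (scout, connector, promoter, influence) for the
    paths behind item i's prediction; SF/CF/PF map rating -> role value.
    """
    sfn = cfn = pfn = 0.0
    for pos, i in enumerate(L, start=1):
        K = math.sqrt(pos)                       # positional weight K(i)
        for scout, conn, prom, infl in paths_for(i):
            sfn += SF[scout] * infl / K
            cfn += CF[conn] * infl / K
            pfn += PF[prom] * infl / K
    return sfn, cfn, pfn
```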
5. EXPERIMENTATION
As we have seen, we can assign role values to each rating
when evaluating a collaborative filtering system. In this
section, we demonstrate the use of this approach to our overall
goal of defining an approach to monitor and manage the
health of a recommender system through experiments done
on the MovieLens million rating dataset. In particular, we
discuss results relating to identifying good scouts,
promoters, and connectors; the evolution of rating roles; and the
characterization of user neighborhoods.
5.1 Methodology
Our experiments use the MovieLens million rating dataset,
which consists of ratings by 6040 users of 3952 movies. The
ratings are in the range 1 to 5, and are labeled with the time
the rating was given. As discussed before, we consider only
the item-based algorithm here (with item neighborhoods of
size 30) and, due to space considerations, only present role
value results for rank accuracy.
Since we are interested in the evolution of the rating role
values over time, the model of the recommender system is
built by processing ratings in their arrival order. The
timestamping provided by MovieLens is hence crucial for the
analyses presented here. We make assessments of rating roles at
intervals of 10,000 ratings and processed the first 200,000
ratings in the dataset (giving rise to 20 snapshots). We
incrementally update the role values as the time ordered
ratings are merged into the model. To keep the experiment
computationally manageable, we define a test dataset for
each user. As the time ordered ratings are merged into the
model, we label a small randomly selected percentage (20%)
as test data. At discrete epochs, i.e., after processing every
10,000 ratings, we compute the predictions for the ratings
in the test data, and then compute the role values for the
ratings used in the predictions. One potential criticism of
this methodology is that the ratings in the test set are never
evaluated for their roles. We overcome this concern by
repeating the experiment, using different random seeds. The
probability that every rating is considered for evaluation is
then considerably high: 1 − 0.2^n, where n is the number
of times the experiment is repeated with different random
seeds. The results here are based on n = 4 repetitions.
The item-based collaborative filtering algorithm"s
performance was ordinary with respect to rank accuracy. Fig. 5
shows a plot of the precision and recall as ratings were
merged in time order into the model. The recall was always
high, but the average precision was just about 53%.
Figure 5: Precision and recall for the item-based collaborative filtering algorithm (x-axis: ratings merged into the model; y-axis: value).
5.2 Inducing good scouts
The ratings of a user that serve as scouts are those that
allow the user to receive recommendations. We claim that
users with ratings that have respectable scout values will
be happier with the system than those with ratings with
low scout values. Note that the item-based algorithm
discussed here produces recommendation lists with nearly half
of the pairs in the list discordant from the user's preference.
Whether all of these discordant pairs are observable by the
user is unclear, however, certainly this suggests that there
is a need to be able to direct users to items whose ratings
would improve the lists.
For most users, the distribution of scout values over their
ratings is Gaussian with mean zero. Fig. 6 shows the
frequency distribution of scout values for a sample user at a
given snapshot. We observe that a large number of ratings
never serve as scouts for their users. A relatable scenario
is when Amazon"s recommender makes suggestions of books
or items based on other items that were purchased as gifts.
With simple relevance feedback from the user, such ratings
can be isolated as bad scouts and discounted from future
predictions. Removing bad scouts was found to be
worthwhile for individual users but the overall performance
improvement was only marginal.
An obvious question is whether good scouts can be formed
by merely rating popular movies as suggested by Rashid et
al. [9]. They show that a mix of popularity and rating
entropy identifies the best items to suggest to new users
when evaluated using MAE. Following their intuition, we
would expect to see a higher correlation between
popularity-entropy and good scouts. We measured the Pearson
correlation coefficient between aggregated scout values for a movie
with the popularity of a movie (number of times it is rated);
and with its popularity*variance measure at different
snapshots of the system. Note that the scout values were initially
anti-correlated with popularity (Fig. 7), but became
moderately correlated as the system evolved. Both popularity and
popularity*variance performed similarly. A possible
explanation is that there has been insufficient time for the popular
movies to accumulate ratings.
Figure 6: Distribution of scout values for a sample user (x-axis: scout value; y-axis: frequency).
Figure 7: Correlation between aggregated scout value and item popularity (series: Popularity and Pop*Var; computed at different intervals).
Figure 8: Correlation between aggregated promoter value and user prolificity (computed at different intervals).
Table 1: Movies forming the best scouts.
Best Scouts Conf. Pop.
Being John Malkovich (1999) 1.00 445
Star Wars: Episode IV - A New Hope (1977) 0.92 623
Princess Bride, The (1987) 0.85 477
Sixth Sense, The (1999) 0.85 617
Matrix, The (1999) 0.77 522
Ghostbusters (1984) 0.77 441
Casablanca (1942) 0.77 384
Insider, The (1999) 0.77 235
American Beauty (1999) 0.69 624
Terminator 2: Judgment Day (1991) 0.69 503
Fight Club (1999) 0.69 235
Shawshank Redemption, The (1994) 0.69 445
Run Lola Run (Lola rennt) (1998) 0.69 220
Terminator, The (1984) 0.62 450
Usual Suspects, The (1995) 0.62 326
Aliens (1986) 0.62 385
North by Northwest (1959) 0.62 245
Fugitive, The (1993) 0.62 402
End of Days (1999) 0.62 132
Raiders of the Lost Ark (1981) 0.54 540
Schindler"s List (1993) 0.54 453
Back to the Future (1985) 0.54 543
Toy Story (1995) 0.54 419
Alien (1979) 0.54 415
Abyss, The (1989) 0.54 345
2001: A Space Odyssey (1968) 0.54 358
Dogma (1999) 0.54 228
Little Mermaid, The (1989) 0.54 203
Table 2: Movies forming the worst scouts.
Worst scouts Conf. Pop.
Harold and Maude (1971) 0.46 141
Grifters, The (1990) 0.46 180
Sting, The (1973) 0.38 244
Godfather: Part III, The (1990) 0.38 154
Lawrence of Arabia (1962) 0.38 167
High Noon (1952) 0.38 84
Women on the Verge of a... (1988) 0.38 113
Grapes of Wrath, The (1940) 0.38 115
Duck Soup (1933) 0.38 131
Arsenic and Old Lace (1944) 0.38 138
Midnight Cowboy (1969) 0.38 137
To Kill a Mockingbird (1962) 0.31 195
Four Weddings and a Funeral (1994) 0.31 271
Good, The Bad and The Ugly, The (1966) 0.31 156
It"s a Wonderful Life (1946) 0.31 146
Player, The (1992) 0.31 220
Jackie Brown (1997) 0.31 118
Boat, The (Das Boot) (1981) 0.31 210
Manhattan (1979) 0.31 158
Truth About Cats & Dogs, The (1996) 0.31 143
Ghost (1990) 0.31 227
Lone Star (1996) 0.31 125
Big Chill, The (1983) 0.31 184
By studying the evolution of scout values, we can identify
movies that consistently feature in good scouts over time.
We claim these movies will make viable scouts for other
users. We found the aggregated scout values for all movies
in intervals of 10,000 ratings each. A movie is said to induce
a good scout if the movie was in the top 100 of the sorted
list, and to induce a bad scout if it was in the bottom 100 of
the same list. Movies appearing consistently high over time
are expected to remain up there in the future. The effective
confidence in a movie m is
C_m = \frac{T_m - B_m}{N} \quad (8)
where Tm is the number of times it appeared in the top
100, Bm the number of times it appeared in the bottom
100, and N is the number of intervals considered. Using
this measure, the top few movies expected to induce the
best scouts are shown in Table 1. Movies that would be bad
scout choices are shown in Table 2 with their associated
confidences. The popularities of the movies are also displayed.
Although more popular movies appear in the list of good
scouts, these tables show that a blind choice of scout based
on popularity alone can be potentially dangerous.
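Equation (8) is straightforward to compute from per-interval rankings. In this sketch (ours), each snapshot is assumed to be the list of movies sorted by aggregated scout value for that interval.

```python
def effective_confidence(snapshots, movie, k=100):
    """C_m = (T_m - B_m) / N (Eq. 8). snapshots: per-interval lists of
    movies sorted in decreasing order of aggregated scout value."""
    T = sum(movie in s[:k] for s in snapshots)    # times in the top 100
    B = sum(movie in s[-k:] for s in snapshots)   # times in the bottom 100
    return (T - B) / len(snapshots)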
Interestingly, the best scout-‘Being John Malkovich"-is about a
puppeteer who discovers a portal into a movie star, a movie
that has been described variously on amazon.com as ‘makes
you feel giddy," ‘seriously weird," ‘comedy with depth," ‘silly,"
‘strange," and ‘inventive." Indicating whether someone likes
this movie or not goes a long way toward situating the user
in a suitable neighborhood, with similar preferences.
On the other hand, several factors may have made a movie
a bad scout, like the sharp variance in user preferences in
the neighborhood of a movie. Two users may have the
same opinion about Lawrence of Arabia, but they may
differ sharply about how they felt about the other movies they
saw. Bad scouts ensue when there is deviation in behavior
around a common synchronization point.
5.3 Inducing good promoters
Ratings that serve to promote items in a collaborative
filtering system are critical to allowing a new item to be
recommended to users. So, inducing good promoters is important
for cold-start recommendation. We note that the frequency
distribution of promoter values for a sample movie"s ratings
is also Gaussian (similar to Fig. 6). This indicates that the
promotion of a movie is benefited most by the ratings of a
few users, and are unaffected by the ratings of most users.
We find a strong correlation between a user"s number of
ratings and his aggregated promoter value. Fig. 8 depicts the
evolution of the Pearson correlation co-efficient between the
prolificity of a user (number of ratings) versus his
aggregated promoter value. We expect that conspicuous shills,
by recommending wrong movies to users, will be
discredited with negative aggregate promoter values and should be
identifiable easily.
Given this observation, the obvious rule to follow when
introducing a new movie is to have it rated directly by
prolific users who possess high aggregated promoter values. A
new movie is thus cast into the neighborhood of many other
movies improving its visibility. Note, though, that a user
may have long stopped using the system. Tracking
promoter values consistently allows only the most active recent
users to be considered.
5.4 Inducing good connectors
Given the way scouts, connectors, and promoters are
characterized, it follows that the movies that are part of the best
scouts are also part of the best connectors. Similarly, the
users that constitute the best promoters are also part of the
best connectors. Good connectors are induced by
ensuring a user with a high promoter value rates a movie with a
high scout value. In our experiments, we find that a rating's
longest standing role is often as a connector. A rating with a
poor connector value is often seen due to its user being a bad
promoter, or its movie being a bad scout. Such ratings can
be removed from the prediction process to bring marginal
improvements to recommendations. In some selected
experiments, we observed that removing a set of badly behaving
connectors helped improve the system"s overall performance
by 1.5%. The effect was even higher on a few select users
who observed an improvement of above 10% in precision
without much loss in recall.
5.5 Monitoring the evolution of rating roles
One of the more significant contributions of our work is
the ability to model the evolution of recommender systems,
by studying the changing roles of ratings over time. The role
and value of a rating can change depending on many factors
like user behavior, redundancy, shilling effects or
properties of the collaborative filtering algorithm used. Studying
the dynamics of rating roles in terms of transitions between
good, bad, and negligible values can provide insights into the
functioning of the recommender system. We believe that a
continuous visualization of these transitions will improve the
ability to manage a recommender system.
We classify different rating states as good, bad, or
negligible. Consider a user who has rated 100 movies in a
particular interval, of which 20 are part of the test set. If a
scout has a value greater than 0.005, it indicates that it is
uniquely involved in at least 2 concordant predictions, which
we will say is good. Thus, a threshold of 0.005 is chosen to
bin a rating as good, bad or negligible in terms of its scout,
connector and promoter value. For instance, a rating r, at
time t with role value triple [0.1, 0.001, −0.01] is classified as
[scout +, connector 0, promoter −], where + indicates good,
0 indicates negligible, and − indicates bad.
The positive credit held by a rating is a measure of its
contribution to the betterment of the system, and the discredit
is a measure of its contribution to the detriment of the
system. Even though the positive roles (and the negative roles)
make up a very small percentage of all ratings, their
contribution supersedes their size. For example, even though only
1.7% of all ratings were classified as good scouts, they hold
79% of all positive credit in the system! Similarly, the bad
scouts were just 1.4% of all ratings but hold 82% of all
discredit. Note that good and bad scouts, together, comprise
only 1.4% + 1.7% = 3.1% of the ratings, implying that the
majority of the ratings are negligible role players as scouts
(more on this later). Likewise, good connectors were 1.2%
of the system, and hold 30% of all positive credit. The bad
connectors (0.8% of the system) hold 36% of all discredit.
Good promoters (3% of the system) hold 46% of all credit,
while bad promoters (2%) hold 50% of all discredit. This
reiterates that a few ratings influence most of the system's
performance. Hence it is important to track transitions
between them regardless of their small numbers.
Across different snapshots, a rating can remain in the
same state or change. A good scout can become a bad scout,
a good promoter can become a good connector, good and
bad scouts can become vestigial, and so on. It is not
practical to expect a recommender system to have no ratings in
bad roles. However, it suffices to see ratings in bad roles
either convert to good or vestigial roles. Similarly, observing
a large number of good roles become bad ones is a sign of
imminent failure of the system.
We employ the principle of non-overlapping episodes [6]
to count such transitions. A sequence such as:
[+, 0, 0] → [+, 0, 0] → [0, +, 0] → [0, 0, 0]
is interpreted as the transitions
[+, 0, 0] ; [0, +, 0] : 1
[+, 0, 0] ; [0, 0, 0] : 1
[0, +, 0] ; [0, 0, 0] : 1
instead of
[+, 0, 0] ; [0, +, 0] : 2
[+, 0, 0] ; [0, 0, 0] : 2
[0, +, 0] ; [0, 0, 0] : 1.
See [6] for further details about this counting procedure.
Thus, a rating can be in one of 27 possible states, and there
are about 27^2 possible transitions. We make a further
simplification and utilize only 9 states, indicating whether the
rating is a scout, promoter, or connector, and whether it
has a positive, negative, or negligible role. Ratings that
serve multiple purposes are counted using multiple episode
instantiations but the states themselves are not duplicated
beyond the 9 restricted states. In this model, a transition
such as [+, 0, +] ; [0, +, 0] : 1 is counted as
[scout+] ; [scout0] : 1
[scout+] ; [connector+] : 1
[scout+] ; [promoter0] : 1
[connector0] ; [scout0] : 1
[connector0] ; [scout+] : 1
[connector0] ; [promoter0] : 1
[promoter+] ; [scout0] : 1
[promoter+] ; [connector+] : 1
[promoter+] ; [promoter0] : 1
Of these, transitions like [pX] ; [q0] where p ≠ q, X ∈
{+, 0, −} are considered uninteresting, and only the rest are
counted.
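One simple way to realize this restricted, non-overlapping counting is sketched below. States are abbreviated to strings, and the greedy matching of an occurrence to the next later occurrence is our reading of the counting principle of [6], not the paper's exact implementation.

```python
from collections import Counter

def count_transitions(seq):
    """Count state transitions as non-overlapping episodes: for each pair
    (a, b), greedily match an occurrence of a to the next later b, so
    repeated a's that share one later b are counted only once."""
    counts = Counter()
    states = set(seq)
    for a in states:
        for b in states:
            if a == b:
                continue
            n, pos = 0, 0
            while True:
                try:
                    ia = seq.index(a, pos)        # next occurrence of a
                    ib = seq.index(b, ia + 1)     # next b strictly after it
                except ValueError:
                    break
                n, pos = n + 1, ib + 1            # episodes must not overlap
                counts[(a, b)] = n
    return counts

# The worked example above: each of the three transitions is counted once.
print(count_transitions(["+00", "+00", "0+0", "000"]))
```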
Fig. 9 depicts the major transitions counted while
processing the first 200,000 ratings from the MovieLens dataset.
Only transitions with frequency greater than or equal to 3%
are shown. The percentages for each state indicates the
number of ratings that were found to be in those states.
We consider transitions from any state to a good state as
healthy, from any state to a bad state as unhealthy, and
from any state to a vestigial state as decaying.
From Fig. 9, we can observe:
• The bulk of the ratings have negligible values,
irrespective of their role. The majority of the transitions
involve both good and bad ratings becoming
negligible.
[Figure 9 is a state diagram over nine rating-role states: Scout +/−/0 (2%, 1.5%, 96.5%), Connector +/−/0 (1.2%, 0.8%, 98%), and Promoter +/−/0 (3%, 2%, 95%); edges carry transition frequencies and are classed as healthy, unhealthy, or decaying.]
Figure 9: Transitions among rating roles.
• The number of good ratings is comparable to the bad
ratings, and ratings are seen to switch states often,
except in the case of scouts (see below).
• The negative and positive scout states are not
reachable through any transition, indicating that these
ratings must begin as such, and cannot be coerced into
these roles.
• Good promoters and good connectors have a much
longer survival period than scouts. Transitions that
recur to these states have frequencies of 10% and 15%
when compared to just 4% for scouts. Good
connectors are the slowest to decay whereas (good) scouts
decay the fastest.
• Healthy percentages are seen on transitions between
promoters and connectors. As indicated earlier, there
are hardly any transitions from promoters/connectors
to scouts. This indicates that, over the long run, a
user"s rating is more useful to others (movies or other
users) than to the user himself.
• The percentages of healthy transitions outweigh the
unhealthy ones; this hints that the system is healthy,
albeit only marginally.
Note that these results are conditioned by the static nature
of the dataset, which is a set of ratings over a fixed window
of time. However a diagram such as Fig. 9 is clearly useful
for monitoring the health of a recommender system. For
instance, acceptable limits can be imposed on different types
of transitions and, if a transition fails to meet the
threshold, the recommender system or a part of it can be brought
under closer scrutiny. Furthermore, the role state transition
diagram would also be the ideal place to study the effects of
shilling, a topic we will consider in future research.
5.6 Characterizing neighborhoods
Earlier we saw that we can characterize the neighborhood
of ratings involved in creating a recommendation list L for
a user. In our experiment, we consider lists of length 30,
and sample the lists of about 5% of users through the
evolution of the model (at intervals of 10,000 ratings each) and
compute their neighborhood characteristics. To simplify our
presentation, we consider the percentage of the sample that
fall into one of the following categories:
1. Inactive user: (SFN(u) = 0)
2. Good scouts, Good neighborhood:
(SFN(u) > 0) ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
3. Good scouts, Bad neighborhood:
(SFN(u) > 0) ∧ (CFN(u) < 0 ∨ PFN(u) < 0)
4. Bad scouts, Good neighborhood:
(SFN(u) < 0) ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
5. Bad scouts, Bad neighborhood:
(SFN(u) < 0) ∧ (CFN(u) < 0 ∨ PFN(u) < 0)
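These five buckets translate directly into a predicate on the triple. Boundary cases where CFN(u) or PFN(u) is exactly zero are not specified by the conditions above, so their handling in this sketch is our own choice.

```python
def classify_user(sfn, cfn, pfn):
    """Bucket a user by the signs of [SFN(u), CFN(u), PFN(u)]."""
    if sfn == 0:
        return "inactive user"
    scouts = "good scouts" if sfn > 0 else "bad scouts"
    # a neighborhood is 'good' only if both the connector and promoter
    # aggregates are positive (zero treated as not good, our assumption)
    hood = "good neighborhood" if (cfn > 0 and pfn > 0) else "bad neighborhood"
    return scouts + ", " + hood
```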
From our sample set of 561 users, we found that 476 users
were inactive. Of the remaining 85 users, we found 26 users
had good scouts and a good neighborhood, 6 had bad scouts
and a good neighborhood, 29 had good scouts and a bad
neighborhood, and 24 had bad scouts and a bad
neighborhood. Thus, we conjecture that 59 users (29+24+6) are in
danger of leaving the system.
As a remedy, users with bad scouts and a good neighborhood
can be asked to reconsider their ratings of some movies,
hoping to improve the system's recommendations. The
system can be expected to deliver more if they engineer some
good scouts. Users with good scouts and a bad
neighborhood are harder to address; this situation might entail
selectively removing some connector-promoter pairs that are
causing the damage. Handling users with bad scouts and
bad neighborhoods is a more difficult challenge.
Such a classification allows the use of different strategies
to better a user"s experience with the system depending on
his context. In future work, we intend to conduct field
studies and study the improvement in performance of different
strategies for different contexts.
6. CONCLUSIONS
To further recommender system acceptance and
deployment, we require new tools and methodologies to manage
an installed recommender and develop insights into the roles
played by ratings. A fine-grained characterization in terms
of rating roles such as scouts, promoters, and connectors,
as done here, helps such an endeavor. Although we have
presented results on only the item-based algorithm with list
rank accuracy as the metric, the same approach outlined
here applies to user-based algorithms and other metrics.
In future research, we plan to systematically study the
many algorithmic parameters, tolerances, and cutoff
thresholds employed here and reason about their effects on the
downstream conclusions. We also aim to extend our
formulation to other collaborative filtering algorithms, study
the effect of shilling in altering rating roles, conduct field
studies, and evaluate improvements in user experience by
tweaking ratings based on their role values. Finally, we plan
to develop the idea of mining the evolution of rating role
patterns into a reporting and tracking system for all aspects
of recommender system health.
7. REFERENCES
[1] Cosley, D., Lam, S., Albert, I., Konstan, J., and
Riedl, J. Is Seeing Believing?: How Recommender
System Interfaces Affect User"s Opinions. In Proc.
CHI (2001), pp. 585-592.
[2] Herlocker, J. L., Konstan, J. A., Borchers, A.,
and Riedl, J. An Algorithmic Framework for
Performing Collaborative Filtering. In Proc. SIGIR
(1999), pp. 230-237.
[3] Herlocker, J. L., Konstan, J. A., Terveen,
L. G., and Riedl, J. T. Evaluating Collaborative
Filtering Recommender Systems. ACM Transactions
on Information Systems Vol. 22, 1 (2004), pp. 5-53.
[4] Konstan, J. A. Personal communication. 2003.
[5] Lam, S. K., and Riedl, J. Shilling Recommender
Systems for Fun and Profit. In Proceedings of the 13th
International World Wide Web Conference (2004),
ACM Press, pp. 393-402.
[6] Laxman, S., Sastry, P. S., and Unnikrishnan,
K. P. Discovering Frequent Episodes and Learning
Hidden Markov Models: A Formal Connection. IEEE
Transactions on Knowledge and Data Engineering
Vol. 17, 11 (2005), 1505-1517.
[7] McLaughlin, M. R., and Herlocker, J. L. A
Collaborative Filtering Algorithm and Evaluation
Metric that Accurately Model the User Experience. In
Proceedings of the 27th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval (2004), pp. 329 - 336.
[8] O"Mahony, M., Hurley, N. J., Kushmerick, N.,
and Silvestre, G. Collaborative Recommendation:
A Robustness Analysis. ACM Transactions on
Internet Technology Vol. 4, 4 (Nov 2004), pp. 344-377.
[9] Rashid, A. M., Albert, I., Cosley, D., Lam, S.,
McNee, S., Konstan, J. A., and Riedl, J. Getting
to Know You: Learning New User Preferences in
Recommender Systems. In Proceedings of the 2002
Conference on Intelligent User Interfaces (IUI 2002)
(2002), pp. 127-134.
[10] Rashid, A. M., Karypis, G., and Riedl, J.
Influence in Ratings-Based Recommender Systems:
An Algorithm-Independent Approach. In Proc. of the
SIAM International Conference on Data Mining
(2005).
[11] Resnick, P., Iacovou, N., Sushak, M.,
Bergstrom, P., and Riedl, J. GroupLens: An
Open Architecture for Collaborative Filtering of
Netnews. In Proceedings of the Conference on
Computer Supported Collaborative Work (CSCW"94)
(1994), ACM Press, pp. 175-186.
[12] Sarwar, B., Karypis, G., Konstan, J., and Riedl,
J. Item-Based Collaborative Filtering
Recommendation Algorithms. In Proceedings of the
Tenth International World Wide Web Conference
(WWW"10) (2001), pp. 285-295.
[13] Schein, A., Popescu, A., Ungar, L., and
Pennock, D. Methods and Metrics for Cold-Start
Recommendation. In Proc. SIGIR (2002),
pp. 253-260.
| opinion;list rank accuracy;recommender system;neighborhood;nearest neighbor;rating;recommender;promoter;aggregation process;scout;purchase;collaborative filtering algorithm;collaborative filter;user-base and item-base algorithm;connector |
train_J-45 | Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario | Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent. | 1. MOTIVATION
We illustrate our problem with an anecdote from a supply chain
research exercise: the 2003 and 2004 Trading Agent Competition
(TAC) Supply Chain Management (SCM) game. TAC/SCM [1]
defines a scenario where agents compete to maximize their profits
as manufacturers in a supply chain. The agents procure components
from the various suppliers and assemble finished goods for sale to
customers, repeatedly over a simulated year.
As it happened, the specified negotiation behavior of suppliers
provided a great incentive for agents to procure large quantities of
components on day 0: the very beginning of the simulation. During
the early rounds of the 2003 SCM competition, several agent
developers discovered this, and the apparent success led to most agents
performing the majority of their purchasing on day 0. Although
jockeying for day-0 procurement turned out to be an interesting
strategic issue in itself [19], the phenomenon detracted from other
interesting problems, such as adapting production levels to varying
demand (since component costs were already sunk), and dynamic
management of production, sales, and inventory. Several
participants noted that the predominance of day-0 procurement
overshadowed other key research issues, such as factory scheduling [2] and
optimizing bids for customer orders [13]. After the 2003
tournament, there was a general consensus in the TAC community that
the rules should be changed to deter large day-0 procurement.
The task facing game organizers can be viewed as a problem in
mechanism design. The designers have certain game features
under their control, and a set of objectives regarding game outcomes.
Unlike most academic treatments of mechanism design, the
objective is a behavioral feature (moderate day-0 procurement) rather
than an allocation feature like economic efficiency, and the allowed
mechanisms are restricted to those judged to require only an
incremental modification of the current game. Replacing the
supplychain negotiation procedures with a one-shot direct mechanism, for
example, was not an option. We believe that such operational
restrictions and idiosyncratic objectives are actually quite typical of
practical mechanism design settings, where they are perhaps more
commonly characterized as incentive engineering problems.
In response to the problem, the TAC/SCM designers adopted
several rule changes intended to penalize large day-0 orders. These
included modifications to supplier pricing policies and introduction
of storage costs assessed on inventories of components and finished
goods. Despite the changes, day-0 procurement was very high in
the early rounds of the 2004 competition. In a drastic measure, the
GameMaster imposed a fivefold increase of storage costs midway
through the tournament. Even this did not stem the tide, and day-0
procurement in the final rounds actually increased (by some
measures) from 2003 [9].
The apparent difficulty in identifying rule modifications that
effect moderation in day-0 procurement is quite striking. Although
the designs were widely discussed, predictions for the effects of
various proposals were supported primarily by intuitive arguments
or at best by back-of-the-envelope calculations. Much of the
difficulty, of course, is anticipating the agents' (and their
developers') responses without essentially running a gaming exercise for
this purpose. The episode caused us to consider whether new
approaches or tools could enable more systematic analysis of design
options. Standard game-theoretic and mechanism design methods
are clearly relevant, although the lack of an analytic description of
the game seems to be an impediment. Under the assumption that
the simulator itself is the only reliable source of outcome
computation, we refer to our task as empirical mechanism design.
In the sequel, we develop some general methods for empirical
mechanism design and apply them to the TAC/SCM redesign
problem. Our analysis focuses on the setting of storage costs (taking
other game modifications as fixed), since this is the most direct
deterrent to early procurement adopted. Our results confirm the basic
intuition that incentives for day-0 purchasing decrease as storage
costs rise. We also confirm that the high day-0 procurement
observed in the 2004 tournament is a rational response to the setting
of storage costs used. Finally, we conclude from our data that it is
very unlikely that any reasonable setting of storage costs would
result in acceptable levels of day-0 procurement, so a different design
approach would have been required to eliminate this problem.
Overall, we contribute a formal framework and a set of methods
for tackling indirect mechanism design problems in settings where
only a black-box description of players' utilities is available. Our
methods incorporate estimation of sets of Nash equilibria and
sample Nash equilibria, used in conjunction to support general claims
about the structure of the mechanism designer's utility, as well as a
restricted probabilistic analysis to assess the likelihood of
conclusions. We believe that most realistic problems are too complex to
be amenable to exact analysis. Consequently, we advocate the
approach of gathering evidence to provide indirect support of specific
hypotheses.
2. PRELIMINARIES
A normal form game² is denoted by [I, {Ri}, {ui(r)}], where I
refers to the set of players and m = |I| is the number of players.
Ri is the set of strategies available to player i ∈ I, with R =
R1 ×. . .×Rm representing the set of joint strategies of all players.
We designate the set of pure strategies available to player i by Ai,
and denote the joint set of pure strategies of all players by A =
A1 ×. . .×Am. It is often convenient to refer to a strategy of player
i separately from that of the remaining players. To accommodate
this, we use a−i to denote the joint strategy of all players other than
player i.
Let Si be the set of all probability distributions (mixtures) over
Ai and, similarly, S be the set of all distributions over A. An s ∈ S
is called a mixed strategy profile. When the game is finite (i.e., A
and I are both finite), the probability that a ∈ A is played under
s is written s(a) = s(ai, a−i). When the distribution s is not
correlated, we can simply say si(ai) when referring to the probability
player i plays ai under s.
Next, we define the payoff (utility) function of each player i by
ui : A1 ×· · ·×Am → R, where ui(ai, a−i) indicates the payoff to
player i for playing pure strategy ai when the remaining players play
a−i. We can extend this definition to mixed strategies by assuming
that ui are von Neumann-Morgenstern (vNM) utilities as follows:
ui(s) = Es[ui], where Es is the expectation taken with respect to
the probability distribution of play induced by the players' mixed
strategy s.
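Written out for a finite game, this expectation is just the s-weighted sum of pure-profile payoffs; a one-line restatement for reference:

```latex
% Explicit form of the vNM expected utility under a mixed profile s:
u_i(s) \;=\; \mathbb{E}_s[u_i] \;=\; \sum_{a \in A} s(a)\, u_i(a).
```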
Footnote 2: By employing the normal form, we model agents as playing a
single action, with decisions taken simultaneously. This is appropriate
for our current study, which treats strategies (agent programs) as
atomic actions. We could capture finer-grained decisions about
action over time in the extensive form. Although any extensive game
can be recast in normal form, doing so may sacrifice compactness
and blur relevant distinctions (e.g., subgame perfection).
Occasionally, we write ui(x, y) to mean that x ∈ Ai or Si and
y ∈ A−i or S−i depending on context. We also express the set of
utility functions of all players as u(·) = {u1(·), . . . , um(·)}.
We define a function ε : R → R, interpreted as the maximum
benefit any player can obtain by deviating from its strategy in the
specified profile:
ε(r) = max_{i∈I} max_{ai∈Ai} [ui(ai, r−i) − ui(r)],   (1)
where r belongs to some strategy set, R, of either pure or mixed
strategies.
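For intuition, the following minimal Python sketch (ours, not from the paper) evaluates ε(a) for a pure profile of a finite game given an explicit payoff table; the names `payoff`, `actions`, and the tuple encoding of profiles are illustrative assumptions.

```python
# Sketch: the regret epsilon(a) of a pure strategy profile in a finite game.
# payoff[i][a] is player i's payoff for pure profile a (a tuple of actions);
# actions[i] lists player i's pure strategies. All names are illustrative.
def epsilon(a, payoff, actions):
    best_gain = 0.0
    for i in range(len(actions)):
        for ai in actions[i]:                      # candidate deviations for player i
            dev = a[:i] + (ai,) + a[i + 1:]        # profile a with i's action replaced
            gain = payoff[i][dev] - payoff[i][a]   # benefit of deviating to ai
            best_gain = max(best_gain, gain)
    return best_gain        # the profile is a Nash equilibrium exactly when this is 0
```

A profile a is then an ε-Nash equilibrium for any ε ≥ epsilon(a).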
Faced with a game, an agent would ideally play its best strategy
given those played by the other agents. A configuration where all
agents play strategies that are best responses to the others
constitutes a Nash equilibrium.
DEFINITION 1. A strategy profile r = (r1, . . . , rm) constitutes
a Nash equilibrium of game [I, {Ri}, {ui(r)}] if for every i ∈ I
and every ri′ ∈ Ri, ui(ri, r−i) ≥ ui(ri′, r−i).
When r ∈ A, the above defines a pure strategy Nash equilibrium;
otherwise the definition describes a mixed strategy Nash
equilibrium. We often appeal to the concept of an approximate, or ε-Nash,
equilibrium, where ε is the maximum benefit to any agent for
deviating from the prescribed strategy. Thus, ε(r) as defined above (1)
is such that profile r is an ε-Nash equilibrium iff ε(r) ≤ ε.
In this study we devote particular attention to games that exhibit
symmetry with respect to payoffs, rendering agents strategically
identical.
DEFINITION 2. A game [I, {Ri}, {ui(r)}] is symmetric if for
all i, j ∈ I, (a) Ri = Rj and (b) ui(ri, r−i) = uj (rj, r−j)
whenever ri = rj and r−i = r−j.
3. THE MODEL
We model the strategic interactions between the designer of the
mechanism and its participants as a two-stage game. The designer
moves first by selecting a value, θ, from a set of allowable
mechanism settings, Θ. All the participant agents observe the mechanism
parameter θ and move simultaneously thereafter. For example, the
designer could be deciding between first-price and second-price
sealed-bid auction mechanisms, with the presumption that after the
choice has been made, the bidders will participate with full
awareness of the auction rules.
Since the participants play with full knowledge of the
mechanism parameter, we define a game between them in the second stage
as Γθ = [I, {Ri}, {ui(r, θ)}]. We refer to Γθ as a game induced
by θ. Let N(θ) be the set of strategy profiles considered solutions
of the game Γθ.3
Suppose that the goal of the designer is to optimize the value
of some welfare function, W (r, θ), dependent on the mechanism
parameter and resulting play, r. We define a pessimistic measure,
W(R̂, θ) = inf{W(r, θ) : r ∈ R̂}, representing the worst-case
welfare of the game induced by θ, assuming that agents play some
joint strategy in R̂. Typically we care about W(N(θ), θ), the
worst-case outcome of playing some solution.4
On some problems we can gain considerable advantage by
using an aggregation function to map the welfare outcome of a game
specified in terms of agent strategies to an equivalent welfare
outcome specified in terms of a lower-dimensional summary.
Footnote 3: We generally adopt Nash equilibrium as the solution concept, and
thus take N(θ) to be the set of equilibria. However, much of the
methodology developed here could be employed with alternative
criteria for deriving agent behavior from a game definition.
Footnote 4: Again, alternatives are available. For example, if one has a
probability distribution over the solution set N(θ), it would be natural
to take the expectation of W(r, θ) instead.
DEFINITION 3. A function φ : R1 × · · · × Rm → R^q is an
aggregation function if m ≥ q and W(r, θ) = V(φ(r), θ) for
some function V.
We overload the function symbol to apply to sets of strategy
profiles: φ(R̂) = {φ(r) : r ∈ R̂}. For convenience of exposition, we
write φ*(θ) to mean φ(N(θ)).
Using an aggregation function yields a more compact
representation of strategy profiles. For example, suppose (as in our
application below) that an agent's strategy is defined by a numeric
parameter. If all we care about is the total value played, we may
take φ(a) = Σ_{i=1}^{m} ai. If we have chosen our aggregator carefully,
we may also capture structure not obvious otherwise. For example,
φ*(θ) could be decreasing in θ, whereas N(θ) might have a more
complex structure.
Given a description of the solution correspondence N(θ)
(equivalently, φ*(θ)), the designer faces a standard optimization
problem. Alternatively, given a simulator that could produce an
unbiased sample from the distribution of W (N(θ), θ) for any θ, the
designer would be faced with another much appreciated problem
in the literature: simulation optimization [12].
However, even for a game Γθ with known payoffs it may be
computationally intractable to solve for Nash equilibria, particularly if
the game has large or infinite strategy sets. Additionally, we wish
to study games where the payoffs are not explicitly given, but must
be determined from simulation or other experience with the game.5
Accordingly, we assume that we are given a (possibly noisy) data
set of payoff realizations: Do = {(θ^1, a^1, U^1), . . . , (θ^k, a^k, U^k)},
where for every data point θ^i is the observed mechanism parameter
setting, a^i is the observed pure strategy profile of the participants,
and U^i is the corresponding realization of agent payoffs. We may
also have additional data generated by a (possibly noisy) simulator:
Ds = {(θ^{k+1}, a^{k+1}, U^{k+1}), . . . , (θ^{k+l}, a^{k+l}, U^{k+l})}. Let D =
{Do, Ds} be the combined data set. (Either Do or Ds may be null
for a particular problem.)
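Concretely, one observation per simulated game suffices; the following sketch (ours, with hypothetical field names and payoff values) shows one way such records might be held in code:

```python
# Illustrative container for the data sets Do and Ds described above.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Observation:
    theta: float                 # mechanism parameter setting for this game instance
    profile: Tuple[float, ...]   # observed pure strategy profile a (one entry per player)
    payoffs: Tuple[float, ...]   # realized payoffs U, one per player

# Hypothetical single record; real payoff magnitudes would come from the simulator.
D_o = [Observation(100.0, (1.0, 0.6, 0.3, 1.5, 0.0, 0.9),
                   (1.2e6, 0.9e6, 0.4e6, -2.0e5, 0.7e6, 1.0e6))]
D_s: list = []                   # simulator-generated data, possibly empty
D = D_o + D_s                    # the combined data set
```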
In the remainder of this paper, we apply our modeling approach,
together with several empirical game-theoretic methods, in order to
answer questions regarding the design of the TAC/SCM scenario.
4. EMPIRICAL DESIGN ANALYSIS
Since our data comes in the form of payoff experience and not as
the value of an objective function for given settings of the control
variable, we can no longer rely on the methods for optimizing
functions using simulations. Indeed, a fundamental aspect of our design
problem involves estimating the Nash equilibrium correspondence.
Furthermore, we cannot rely directly on the convergence results
that abound in the simulation optimization literature, and must
establish probabilistic analysis methods tailored for our problem
setting.
4.1 TAC/SCM Design Problem
We describe our empirical design analysis methods by presenting
a detailed application to the TAC/SCM scenario introduced above.
Recall that during the 2004 tournament, the designers of the
supply-chain game chose to dramatically increase storage costs as a
measure aimed at curbing day-0 procurement, to little avail. Here we
systematically explore the relationship between storage costs and
the aggregate quantity of components procured on day 0 in equilibrium.
Footnote 5: This is often the case for real games of interest, where natural
language or algorithmic descriptions may substitute for a formal
specification of strategy and payoff functions.
In doing so, we consider several questions raised during and
after the tournament. First, does increasing storage costs actually
reduce day-0 procurement? Second, was the excessive day-0
procurement that was observed during the 2004 tournament rational?
And third, could increasing storage costs sufficiently have reduced
day-0 procurement to an acceptable level, and if so, what should
the setting of storage costs have been? It is this third question that
defines the mechanism design aspect of our analysis.6
To apply our methods, we must specify the agent strategy sets,
the designer's welfare function, the mechanism parameter space,
and the source of data. We restrict the agent strategies to be a
multiplier on the quantity of the day-0 requests by one of the
finalists, Deep Maize, in the 2004 TAC/SCM tournament. We further
restrict it to the set [0,1.5], since any strategy below 0 is illegal
and strategies above 1.5 are extremely aggressive (thus unlikely to
provide refuting deviations beyond those available from included
strategies, and certainly not part of any desirable equilibrium). All
other behavior is based on the behavior of Deep Maize and is
identical for all agents. This choice can provide only an estimate of the
actual tournament behavior of a typical agent. However, we
believe that the general form of the results should be robust to changes
in the full agent behavior.
We model the designer's welfare function as a threshold on the
sum of day-0 purchases. Let φ(a) = Σ_{i=1}^{6} ai be the
aggregation function representing the sum of day-0 procurement of the six
agents participating in a particular supply-chain game (for mixed
strategy profiles s, we take the expectation of φ with respect to the
mixture). The designer's welfare function W(N(θ), θ) is then given by
I{sup{φ*(θ)} ≤ α}, where α is the maximum acceptable level of
day-0 procurement and I is the indicator function. The designer
selects a value θ of storage costs, expressed as an annual
percentage of the baseline value of components in the inventory (charged
daily), from the set Θ = R⁺. Since the designer's decision
depends only on φ*(θ), we present all of our results in terms of the
value of the aggregation function.
4.2 Estimating Nash Equilibria
The objective of TAC/SCM agents is to maximize profits
realized over a game instance. Thus, if we fix a strategy for each agent
at the beginning of the simulation and record the corresponding
profits at the end, we will have obtained a data point in the form
(a, U(a)). If we also have fixed the parameter θ of the simulator,
the resulting data point becomes part of our data set D. This data
set, then, contains data only in the form of pure strategies of
players and their corresponding payoffs, and, consequently, in order to
formulate the designer"s problem as optimization, we must first
determine or approximate the set of Nash equilibria of each game Γθ.
Thus, we need methods for approximating Nash equilibria for
infinite games. Below, we describe the two methods we used in our
study. The first has been explored empirically before, whereas the
second is introduced here as the method specifically designed to
approximate a set of Nash equilibria.
4.2.1 Payoff Function Approximation
The first method for estimating Nash equilibria based on data
uses supervised learning to approximate payoff functions of mechanism
participants from a data set of game experience [17].
Footnote 6: We do not address whether and how other measures (e.g.,
constraining procurement directly) could have achieved design
objectives. Our approach takes as given some set of design options, in
this case defined by the storage cost parameter. In principle our
methods could be applied to a different or larger design space,
though with corresponding complexity growth.
Once
approximate payoff functions are available for all players, the Nash
equilibria may be either found analytically or approximated using
numerical techniques, depending on the learning model. In what
follows, we estimate only a sample Nash equilibrium using this
technique, although this restriction can be removed at the expense
of additional computation time.
One advantage of this method is that it can be applied to any
data set and does not require the use of a simulator. Thus, we can
apply it when Ds = ∅. If a simulator is available, we can generate
additional data to build confidence in our initial estimates.7
We tried the following methods for approximating payoff
functions: quadratic regression (QR), locally weighted average (LWA),
and locally weighted linear regression (LWLR). We also used
control variates to reduce the variance of payoff estimates, as in our
previous empirical game-theoretic analysis of TAC/SCM-03 [19].
The quadratic regression model makes it possible to compute
equilibria of the learned game analytically. For the other methods
we applied replicator dynamics [7] to a discrete approximation of
the learned game. The expected total day-0 procurement in
equilibrium was taken as the estimate of an outcome.
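As a sketch of that step (our illustration, not the authors' implementation), discrete-time replicator dynamics on a learned symmetric two-player approximation can be run as follows; `M` stands in for expected payoffs read off the learned payoff function:

```python
# Sketch: discrete-time replicator dynamics on a learned symmetric game.
# M[k][l] approximates the payoff of pure strategy k against strategy l
# (for more players one can use expected payoff against the current mixture).
import numpy as np

def replicator(M, steps=10_000, tol=1e-10):
    M = np.asarray(M, dtype=float)
    M = M - M.min() + 1e-9                         # shift payoffs to be positive
    x = np.full(M.shape[0], 1.0 / M.shape[0])      # start from the uniform mixture
    for _ in range(steps):
        fitness = M @ x                            # expected payoff of each pure strategy
        x_new = x * fitness / (x @ fitness)        # replicator update
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x                 # an (approximate) symmetric equilibrium mixture
```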
4.2.2 Search in Strategy Profile Space
When we have access to a simulator, we can also use directed
search through profile space to estimate the set of Nash equilibria,
which we describe here after presenting some additional notation.
DEFINITION 4. A strategic neighbor of a pure strategy profile
a is a profile that is identical to a in all but one strategy. We define
Snb(a, D) as the set of all strategic neighbors of a available in
the data set D. Similarly, we define Snb(a, D̃) to be all strategic
neighbors of a not in D. Finally, for any a′ ∈ Snb(a, D) we define
the deviating agent as i(a, a′).
DEFINITION 5. The ε-bound, ε̂, of a pure strategy profile a is
defined as ε̂(a) = max_{a′∈Snb(a,D)} max{u_{i(a,a′)}(a′) − u_{i(a,a′)}(a), 0}. We
say that a is a candidate δ-equilibrium for δ ≥ ε̂.
When Snb(a, D̃) = ∅ (i.e., all strategic neighbors are represented
in the data), a is confirmed as an ε̂-Nash equilibrium.
Our search method operates by exploring deviations from
candidate equilibria. We refer to it as BestFirstSearch, as it selects
with probability one a strategy profile a′ ∈ Snb(a, D̃), where a is the
profile with the smallest ε̂ in D.
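Under our reading of Definitions 4 and 5, the ε̂ computation and the BestFirstSearch selection rule can be sketched as follows (data structures and helper names are hypothetical):

```python
# Sketch: epsilon-hat bounds from data, and the BestFirstSearch selection rule.
# mean_payoff maps (profile, player) -> sample-mean payoff; profiles are tuples.
def eps_hat(a, mean_payoff, neighbors_in_data):
    bound = 0.0
    for b in neighbors_in_data(a):                 # strategic neighbors present in D
        i = next(k for k in range(len(a)) if a[k] != b[k])   # the deviating agent
        bound = max(bound, mean_payoff[(b, i)] - mean_payoff[(a, i)])
    return max(bound, 0.0)

def next_profile_to_sample(candidates, mean_payoff, neighbors_in_data,
                           neighbors_not_in_data):
    # Pick the candidate with the smallest eps-hat, then explore one of its
    # unexplored strategic neighbors.
    a = min(candidates, key=lambda p: eps_hat(p, mean_payoff, neighbors_in_data))
    unexplored = neighbors_not_in_data(a)
    return unexplored[0] if unexplored else None
```

Profiles whose ε̂ is at most δ form the candidate set used in the estimator defined next.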
Finally we define an estimator for a set of Nash equilibria.
DEFINITION 6. For a set K, define Co(K) to be the convex
hull of K. Let Bδ be the set of candidates at level δ. We define
φ̂*(θ) = Co({φ(a) : a ∈ Bδ}) for a fixed δ to be an estimator of
φ*(θ).
In words, the estimate of a set of equilibrium outcomes is the
convex hull of all aggregated strategy profiles with ε-bound below
some fixed δ. This definition allows us to exploit structure
arising from the aggregation function. If two profiles are close in terms
of aggregation values, they may be likely to have similar ε-bounds.
In particular, if one is an equilibrium, the other may be as well. We
present some theoretical support for this method of estimating the
set of Nash equilibria below.
Since the game we are interested in is infinite, it is necessary to
terminate BestFirstSearch before exploring the entire space of
strategy profiles. We currently determine termination time in a
somewhat ad hoc manner, based on observations about the current set of
candidate equilibria.8
Footnote 7: For example, we can use active learning techniques [5] to improve
the quality of payoff function approximation. In this work, we
instead concentrate on search in strategy profile space.
4.3 Data Generation
Our data was collected by simulating TAC/SCM games on a
local version of the 2004 TAC/SCM server, which has a configuration
setting for the storage cost. Agent strategies in simulated games
were selected from the set {0, 0.3, 0.6, . . . , 1.5} in order to have
positive probability of generating strategic neighbors.9
A baseline data set Do was generated by sampling 10 randomly generated
strategy profiles for each θ ∈ {0, 50, 100, 150, 200}. Between 5
and 10 games were run for each profile after discarding games that
had various flaws.10
We used search to generate a simulated data
set Ds, performing between 12 and 32 iterations of BestFirstSearch
for each of the above settings of θ. Since simulation cost is
extremely high (a game takes nearly 1 hour to run), we were able to
run a total of 2670 games over the span of more than six months.
For comparison, to get the entire description of an empirical game
defined by the restricted finite joint strategy space for each value
of θ ∈ {0, 50, 100, 150, 200} would have required at least 23100
games (sampling each profile 10 times).
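The 23100 figure can be reproduced by direct counting, assuming profiles of the six strategically identical agents are counted up to symmetry (our arithmetic):

```latex
\binom{6+6-1}{6} \;=\; \binom{11}{6} \;=\; 462 \ \text{distinct pure profiles},
\qquad 462 \times 10 \ \text{samples} \times 5 \ \text{settings of}\ \theta \;=\; 23100 \ \text{games}.
```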
4.4 Results
4.4.1 Analysis of the Baseline Data Set
We applied the three learning methods described above to the
baseline data set Do. Additionally, we generated an estimate of the
Nash equilibrium correspondence, φ̂*(θ), by applying Definition 6
with δ = 2.5E6. The results are shown in Figure 1. As we can see,
the correspondence φ̂*(θ) has little predictive power based on Do,
and reveals no interesting structure about the game. In contrast, all
three learning methods suggest that total day-0 procurement is a
decreasing function of storage costs.
[Figure 1: Aggregate day-0 procurement estimates based on Do.
Axes: storage cost (0-200) vs. total day-0 procurement (0-10);
series LWA, LWLR, QR, BaselineMin, BaselineMax. The
correspondence φ̂*(θ) is the interval between BaselineMin and
BaselineMax.]
Footnote 8: Generally, search is terminated once the set of candidate equilibria
is small enough to draw useful conclusions about the likely range
of equilibrium strategies in the game.
Footnote 9: Of course, we do not restrict our Nash equilibrium estimates to
stay in this discrete subset of [0,1.5].
Footnote 10: For example, if we detected that any agent failed during the
game (failures included crashes, network connectivity problems,
and other obvious anomalies), the game would be thrown out.
4.4.2 Analysis of Search Data
To corroborate the initial evidence from the learning methods,
we estimated φ̂*(θ) (again, using δ = 2.5E6) on the data set D =
{Do, Ds}, where Ds is data generated through the application of
BestFirstSearch. The results of this estimate are plotted against the
results of the learning methods trained on Do¹¹ in Figure 2. First,
we note that the addition of the search data narrows the range of
potential equilibria substantially. Furthermore, the actual point
predictions of the learning methods and those based on -bounds
after search are reasonably close. Combining the evidence gathered
from these two very different approaches to estimating the outcome
correspondence yields a much more compelling picture of the
relationship between storage costs and day-0 procurement than either
method used in isolation.
[Figure 2: Aggregate day-0 procurement estimates based on
search in strategy profile space compared to function
approximation techniques trained on Do. Axes: storage cost (0-200)
vs. total day-0 procurement (0-10); series LWA, LWLR, QR,
SearchMin, SearchMax. The correspondence φ̂*(θ) for D = {Do, Ds}
is the interval between SearchMin and SearchMax.]
This evidence supports the initial intuition that day-0
procurement should be decreasing with storage costs. It also confirms that
high levels of day-0 procurement are a rational response to the 2004
tournament setting of average storage cost, which corresponds to
θ = 100. The minimum prediction for aggregate procurement at
this level of storage costs given by any experimental methods is
approximately 3. This is quite high, as it corresponds to an
expected commitment of 1/3 of the total supplier capacity for the
entire game. The maximum prediction is considerably higher at 4.5.
In the actual 2004 competition, aggregate day-0 procurement was
equivalent to 5.71 on the scale used here [9]. Our predictions
underestimate this outcome to some degree, but show that any rational
outcome was likely to have high day-0 procurement.
4.4.3 Extrapolating the Solution Correspondence
We have reasonably strong evidence that the outcome
correspondence is decreasing. However, the ultimate goal is to be able to
either set the storage cost parameter to a value that would curb day-0
procurement in equilibrium or conclude that this is not possible.
To answer this question directly, suppose that we set a
conservative threshold α = 2 on aggregate day-0 procurement.12
Linear extrapolation of the maximum of the outcome correspondence
estimated from D yields θ = 320.
Footnote 11: It is unclear how meaningful the results of learning would be if
Ds were added to the training data set. Indeed, the additional data
may actually increase the learning variance.
Footnote 12: Recall that the designer's objective is to incentivize aggregate day-0
procurement that is below the threshold α. Our threshold here still
represents a commitment of over 20% of the suppliers' capacity for
the entire game on average, so in practice we would probably want
the threshold to be even lower.
The data for θ = 320 were collected in the same way as for other
storage cost settings, with 10 randomly generated profiles followed
by 33 iterations of BestFirstSearch. Figure 3 shows the detailed
ε-bounds for all profiles in terms of their corresponding values of φ.
[Figure 3: Values of ε̂ for profiles explored using search when
θ = 320. Strategy profiles explored are presented in terms of
the corresponding values of φ(a); the vertical axis shows the
ε-bound (0 to 5.0E7), the horizontal axis total day-0 procurement
(2.1 to 7.2). The gray region corresponds to φ̂*(320) with δ = 2.5E6.]
The estimated set of aggregate day-0 outcomes is very close to
that for θ = 200, indicating that there is little additional benefit
to raising storage costs above 200. Observe that even the lower
bound of our estimated set of Nash equilibria is well above the
target day-0 procurement of 2. Furthermore, payoffs to agents are
almost always negative at θ = 320. Consequently, increasing the
costs further would be undesirable even if day-0 procurement could
eventually be curbed. Since we are reasonably confident that φ*(θ)
is decreasing in θ, we also do not expect that setting θ somewhere
between 200 and 320 will achieve the desired result.
We conclude that it is unlikely that day-0 procurement could ever
be reduced to a desirable level using any reasonable setting of the
storage cost parameter. That our predictions tend to underestimate
tournament outcomes reinforces this conclusion. To achieve the
desired reduction in day-0 procurement requires redesigning other
aspects of the mechanism.
4.5 Probabilistic Analysis
Our empirical analysis has produced evidence in support of the
conclusion that no reasonable setting of storage cost was likely to
sufficiently curb excessive day-0 procurement in TAC/SCM "04.
All of this evidence has been in the form of simple interpolation and
extrapolation of estimates of the Nash equilibrium correspondence.
These estimates are based on simulating game instances, and are
subject to sampling noise contributed by the various stochastic
elements of the game. In this section, we develop and apply methods
for evaluating the sensitivity of our ε-bound calculations to such
stochastic effects.
Suppose that all agents have finite (and small) pure strategy sets,
A. Thus, it is feasible to sample the entire payoff matrix of the
game. Additionally, suppose that noise is additive with zero-mean
and finite variance, that is, Ui(a) = ui(a) + ξ̃i(a), where Ui(a) is
the observed payoff to i when a was played, ui(a) is the actual
corresponding payoff, and ξ̃i(a) is a mean-zero normal random
variable. We designate the known variance of ξ̃i(a) by σi²(a). Thus,
we assume that ξ̃i(a) is normal with distribution N(0, σi²(a)).
We take ūi(a) to be the sample mean over all Ui(a) in D, and
follow Chang and Huang [3] to assume that we have an improper
prior over the actual payoffs ui(a) and that sampling was independent
for all i and a. We also rely on their result that the ui(a)|ūi(a) =
ūi(a) − Zi(a)·σi(a)/√ni(a) are independent with posterior
distributions N(ūi(a), σi²(a)/ni(a)), where ni(a) is the number of
samples taken of payoffs to i for pure profile a, and Zi(a) ∼ N(0, 1).
We now derive a generic probabilistic bound that a profile a ∈
A is an ε-Nash equilibrium. If ui(·)|ūi(·) are independent for all
i ∈ I and a ∈ A, we have the following result (from this point on
we omit conditioning on ūi(·) for brevity):
PROPOSITION 1.
Pr( max_{i∈I} max_{b∈Ai} ui(b, a−i) − ui(a) ≤ ε )
= ∏_{i∈I} ∫_R ∏_{b∈Ai\ai} Pr( ui(b, a−i) ≤ u + ε ) f_{ui(a)}(u) du,   (2)
where f_{ui(a)}(u) is the pdf of N(ūi(a), σi(a)).
The proofs of this and all subsequent results are in the Appendix.
The posterior distribution of the optimum mean of n samples,
derived by Chang and Huang [3], is
Pr( ui(a) ≤ c ) = 1 − Φ( √ni(a) (ūi(a) − c) / σi(a) ),   (3)
where a ∈ A and Φ(·) is the N(0, 1) distribution function.
Combining the results (2) and (3), we obtain a probabilistic
confidence bound that ε(a) ≤ γ for a given γ.
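A minimal numerical sketch (ours) of this combined bound: it integrates the product in (2) on a grid, using the posterior scale σi(a)/√ni(a) implied by (3); all container names are hypothetical.

```python
# Sketch: posterior probability that pure profile a is a gamma-Nash equilibrium,
# given sample means `mean[i][b]`, known stds `std[i][b]`, and counts `n[i][b]`
# for each player i and own-action b (with b == played[i] the action in a).
import numpy as np
from scipy.stats import norm

def prob_eps_at_most(gamma, mean, std, n, played):
    p = 1.0
    for i in mean:
        ai = played[i]
        m_a = mean[i][ai]
        s_a = std[i][ai] / np.sqrt(n[i][ai])       # posterior std of u_i(a)
        u = np.linspace(m_a - 8 * s_a, m_a + 8 * s_a, 2001)  # integration grid
        inner = np.ones_like(u)
        for b, m_b in mean[i].items():
            if b == ai:
                continue
            s_b = std[i][b] / np.sqrt(n[i][b])     # posterior std of u_i(b, a_-i)
            inner *= norm.cdf((u + gamma - m_b) / s_b)   # Pr(u_i(b,a_-i) <= u + gamma)
        p *= np.trapz(inner * norm.pdf(u, m_a, s_a), u)  # integrate against posterior pdf
    return p
```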
Now, we consider cases of incomplete data and use the results
we have just obtained to construct an upper bound (restricted to
profiles represented in data) on the distribution of sup{φ*(θ)} and
inf{φ*(θ)} (assuming that both are attainable):
Pr{sup{φ*(θ)} ≤ x} ≤_D Pr{∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)}
≤ Σ_{a∈D: φ(a)≤x} Pr{a ∈ N(θ)} = Σ_{a∈D: φ(a)≤x} Pr{ε(a) = 0},
where x is a real number and ≤_D indicates that the upper bound
accounts only for strategies that appear in the data set D. Since the
events {∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} and {inf{φ*(θ)} ≤ x}
are equivalent, this also defines an upper bound on the
probability of {inf{φ*(θ)} ≤ x}. The values thus derived comprise
Tables 1 and 2.
  φ*(θ)     θ = 0      θ = 50    θ = 100
  < 2.7     0.000098   0         0.146
  < 3       0.158      0.0511    0.146
  < 3.9     0.536      0.163     1
  < 4.5     1          1         1
Table 1: Upper bounds on the distribution of inf{φ*(θ)}
restricted to D for θ ∈ {0, 50, 100} when N(θ) is a set of Nash
equilibria.
  φ*(θ)     θ = 150   θ = 200   θ = 320
  < 2.7     0         0         0.00132
  < 3       0.0363    0.141     1
  < 3.9     1         1         1
  < 4.5     1         1         1
Table 2: Upper bounds on the distribution of inf{φ*(θ)}
restricted to D for θ ∈ {150, 200, 320} when N(θ) is a set of
Nash equilibria.
Tables 1 and 2 suggest that the existence of any equilibrium with
φ(a) < 2.7 is unlikely for any θ that we have data for, although
this judgment, as we mentioned, is only with respect to the profiles
we have actually sampled. We can then accept this as another piece
of evidence that the designer could not find a suitable setting of θ
to achieve his objectives; indeed, the designer seems unlikely to
achieve his objective even if he could persuade participants to play
a desirable equilibrium!
Table 1 also provides additional evidence that the agents in the
2004 TAC/SCM tournament were indeed rational in procuring large
numbers of components at the beginning of the game. If we look at
the third column of this table, which corresponds to θ = 100, we
can gather that no profile a in our data with φ(a) < 3 is very likely
to be played in equilibrium.
The bounds above provide some general evidence, but ultimately
we are interested in a concrete probabilistic assessment of our
conclusion with respect to the data we have sampled. Particularly, we
would like to say something about what happens for the settings of
θ for which we have no data. To derive an approximate
probabilistic bound on the probability that no θ ∈ Θ could have achieved the
designer's objective, let ∪_{j=1}^{J} Θj be a partition of Θ, and assume
that the function sup{φ*(θ)} satisfies the Lipschitz condition with
Lipschitz constant Aj on each subset Θj.13 Since we have
determined that raising the storage cost above 320 is undesirable due to
secondary considerations, we restrict attention to Θ = [0, 320]. We
now define each subset j to be the interval between two points for
which we have produced data. Thus,
Θ = [0, 50] ∪ (50, 100] ∪ (100, 150] ∪ (150, 200] ∪ (200, 320],
with j running between 1 and 5, corresponding to the subintervals
above. We will further denote each Θj by (aj, bj].14 Then, the
following Proposition gives us an approximate upper bound15 on
the probability that sup{φ*(θ)} ≤ α.
PROPOSITION 2.
Pr{ ∨_{θ∈Θ} sup{φ*(θ)} ≤ α } ≤_D Σ_{j=1}^{5} Σ_{y,z∈D: y+z≤cj} ( Σ_{a: φ(a)=z} Pr{ε(a) = 0} ) × ( Σ_{a: φ(a)=y} Pr{ε(a) = 0} ),
where cj = 2α + Aj(bj − aj) and ≤_D indicates that the upper
bound only accounts for strategies that appear in the data set D.
Footnote 13: A function that satisfies the Lipschitz condition is called Lipschitz
continuous.
Footnote 14: The treatment for the interval [0,50] is identical.
Footnote 15: It is approximate in the sense that we only take into account
strategies that are present in the data.
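For a quick sense of scale of cj (with a purely hypothetical slope bound, since per-interval Aj values are not reported in the text):

```latex
% With \alpha = 2, \Theta_j = (150, 200], and an assumed A_j = 0.01:
c_j \;=\; 2\alpha + A_j\,(b_j - a_j) \;=\; 2\cdot 2 + 0.01\cdot 50 \;=\; 4.5 .
```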
Due to the fact that our bounds are approximate, we cannot use
them as a conclusive probabilistic assessment. Instead, we take this
as another piece of evidence to complement our findings.
Even if we can assume that a function that we approximate from
data is Lipschitz continuous, we rarely actually know the Lipschitz
constant for any subset of Θ. Thus, we are faced with a task of
estimating it from data. Here, we tried three methods of doing
this. The first one simply takes the highest slope that the function
attains within the available data and uses this constant value for
every subinterval. This produces the most conservative bound, and
in many situations it is unlikely to be informative.
An alternative method is to take an upper bound on slope
obtained within each subinterval using the available data. This
produces a much less conservative upper bound on probabilities.
However, since the actual upper bound is generally greater for each
subinterval, the resulting probabilistic bound may be deceiving.
A final method that we tried is a compromise between the two
above. Instead of taking the conservative upper bound based on
data over the entire function domain Θ, we take the average of
upper bounds obtained at each Θj. The bound at an interval is then
taken to be the maximum of the upper bound for this interval and
the average upper bound for all intervals.
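The three heuristics just described can be sketched as follows (ours; `theta` and `phi_sup` are hypothetical arrays of sampled settings and the corresponding sup{φ*} estimates):

```python
# Sketch: three heuristics for bounding the Lipschitz constant A_j per subinterval.
import numpy as np

def slopes(theta, phi_sup):
    theta, phi_sup = np.asarray(theta, float), np.asarray(phi_sup, float)
    return np.abs(np.diff(phi_sup) / np.diff(theta))   # one slope per subinterval

def A_global_max(theta, phi_sup):
    s = slopes(theta, phi_sup)
    return np.full_like(s, s.max())                    # most conservative: one constant

def A_per_interval(theta, phi_sup):
    return slopes(theta, phi_sup)                      # least conservative

def A_max_with_average(theta, phi_sup):
    s = slopes(theta, phi_sup)
    return np.maximum(s, s.mean())          # compromise: max of local and average
```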
The results of evaluating the expression for Pr{ ∨_{θ∈Θ} sup{φ*(θ)} ≤ α }
when α = 2 are presented in Table 3.
  maxj Aj    Aj        max{Aj, ave(Aj)}
  1          0.00772   0.00791
Table 3: Approximate upper bound on the probability that some
setting of θ ∈ [0, 320] will satisfy the designer objective with
target α = 2. Different methods of approximating the upper
bound on slope in each subinterval j are used.
In terms of our claims in
this work, the expression gives an upper bound on the probability
that some setting of θ (i.e., storage cost) in the interval [0,320] will
result in total day-0 procurement that is no greater in any
equilibrium than the target specified by α and taken here to be 2. As we
had suspected, the most conservative approach to estimating the
upper bound on slope, presented in the first column of the table,
provides us little information here. However, the other two
estimation approaches, found in columns two and three of Table 3,
suggest that we are indeed quite confident that no reasonable setting of
θ ∈ [0, 320] would have done the job. Given the tremendous
difficulty of the problem, this result is very strong.16
Still, we must be
very cautious in drawing too heroic a conclusion based on this
evidence. Certainly, we have not checked all the profiles but only
a small proportion of them (infinitesimal, if we consider the
entire continuous domain of θ and strategy sets). Nor can we expect
ever to obtain enough evidence to make completely objective
conclusions. Instead, the approach we advocate here is to collect as
much evidence as is feasible given resource constraints, and make
the most compelling judgment based on this evidence, if at all
possible.
5. CONVERGENCE RESULTS
At this point, we explore abstractly whether a design parameter
choice based on payoff data can be asymptotically reliable.
Footnote 16: Since we did not have all the possible deviations for any profile
available in the data, the true upper bounds may be even lower.
As a matter of convenience, we will use the notation un,i(a) to
refer to a payoff function of player i based on an average over n
i.i.d. samples from the distribution of payoffs. We also assume that
the un,i(a) are independent for all a ∈ A and i ∈ I. We will use
the notation Γn to refer to the game [I, R, {un,i(·)}], whereas Γ
will denote the underlying game, [I, R, {ui(·)}]. Similarly, we
define εn(r) to be ε(r) with respect to the game Γn.
In this section, we show that εn(s) → ε(s) a.s. uniformly on
the mixed strategy space for any finite game, and, furthermore, that
all mixed strategy Nash equilibria in empirical games eventually
become arbitrarily close to some Nash equilibrium strategies in the
underlying game. We use these results to show that under certain
conditions, the optimal choice of the design parameter based on
empirical data converges almost surely to the actual optimum.
THEOREM 3. Suppose that |I| < ∞ and |A| < ∞. Then εn(s) →
ε(s) a.s. uniformly on S.
Recall that N is the set of all Nash equilibria of Γ. If we define
Nn,γ = {s ∈ S : εn(s) ≤ γ}, we have the following corollary to
Theorem 3:
COROLLARY 4. For every γ > 0, there is M such that ∀n ≥
M, N ⊂ Nn,γ a.s.
PROOF. Since ε(s) = 0 for every s ∈ N, we can find M large
enough such that Pr{sup_{n≥M} sup_{s∈N} εn(s) < γ} = 1.
By the Corollary, for any game with a finite set of pure strategies
and for any ε > 0, all Nash equilibria lie in the set of empirical
ε-Nash equilibria if enough samples have been taken. As we now
show, this provides some justification for our use of a set of profiles
with a non-zero ε-bound as an estimate of the set of Nash equilibria.
First, suppose we conclude that for a particular setting of θ,
sup{φ̂*(θ)} ≤ α. Then, since for any fixed ε > 0, N(θ) ⊂
Nn,ε(θ) when n is large enough,
sup{φ*(θ)} = sup_{s∈N(θ)} φ(s) ≤ sup_{s∈Nn,ε(θ)} φ(s) = sup{φ̂*(θ)} ≤ α
for any such n. Thus, since we defined the welfare function of
the designer to be I{sup{φ*(θ)} ≤ α} in our domain of interest,
the empirical choice of θ satisfies the designer's objective, thereby
maximizing his welfare function.
Alternatively, suppose we conclude that inf{φ̂*(θ)} > α for
every θ in the domain. Then,
α < inf{φ̂*(θ)} = inf_{s∈Nn,ε(θ)} φ(s) ≤ inf_{s∈N(θ)} φ(s) ≤ sup_{s∈N(θ)} φ(s) = sup{φ*(θ)},
for every θ, and we can conclude that no setting of θ will satisfy
the designer's objective.
Now, we will show that when the number of samples is large
enough, every Nash equilibrium of Γn is close to some Nash
equilibrium of the underlying game. This result will lead us to consider
convergence of optimizers based on empirical data to actual
optimal mechanism parameter settings.
We first note that the function ε(s) is continuous in a finite game.
LEMMA 5. Let S be a mixed strategy set defined on a finite
game. Then ε : S → R is continuous.
For the exposition that follows, we need a bit of additional
notation. First, let (Z, d) be a metric space, with X, Y ⊂ Z, and define
the directed Hausdorff distance from X to Y to be
h(X, Y) = sup_{x∈X} inf_{y∈Y} d(x, y).
Observe that U ⊂ X ⇒ h(U, Y) ≤ h(X, Y). Further, define
BS(x, δ) to be an open ball in S ⊂ Z with center x ∈ S and
radius δ. Now, let Nn denote all Nash equilibria of the game Γn,
and let
Nδ = ∪_{x∈N} BS(x, δ),
that is, the union of open balls of radius δ with centers at Nash
equilibria of Γ. Note that h(Nδ, N) = δ.
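For finite point sets this distance is directly computable; a small sketch (ours), noting that scipy.spatial.distance.directed_hausdorff provides an equivalent routine:

```python
# Sketch: directed Hausdorff distance h(X, Y) = sup_{x in X} inf_{y in Y} d(x, y)
# for finite point sets X, Y given as arrays of shape (n, d), with Euclidean d.
import numpy as np

def directed_hausdorff(X, Y):
    X, Y = np.atleast_2d(np.asarray(X, float)), np.atleast_2d(np.asarray(Y, float))
    # distance from every x to its nearest y, then the worst case over x
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return dists.min(axis=1).max()
```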
We can then prove the following general result.
THEOREM 6. Suppose |I| < ∞ and |A| < ∞. Then almost
surely h(Nn, N) converges to 0.
We will now show that in the special case when Θ and A are
finite and each Γθ has a unique Nash equilibrium, the estimates
θ̂ of the optimal designer parameter converge to an actual optimizer
almost surely.
Let θ̂ = arg max_{θ∈Θ} W(Nn(θ), θ), where n is the number of
times each pure profile was sampled in Γθ for every θ, and let
θ* = arg max_{θ∈Θ} W(N(θ), θ).
THEOREM 7. Suppose |N(θ)| = 1 for all θ ∈ Θ and suppose
that Θ and A are finite. Let W(s, θ) be continuous at the unique
s*(θ) ∈ N(θ) for each θ ∈ Θ. Then θ̂ is a consistent estimator of
θ* if W(N(θ), θ) is defined as a supremum, infimum, or
expectation over the set of Nash equilibria. In fact, θ̂ → θ* a.s. in each of
these cases.
The shortcoming of the above result is that, within our
framework, the designer has no way of knowing or ensuring that Γθ do,
indeed, have unique equilibria. However, it does lend some
theoretical justification for pursuing design in this manner, and, perhaps,
will serve as a guide for more general results in the future.
6. RELATED WORK
The mechanism design literature in Economics has typically
explored existence of a mechanism that implements a social choice
function in equilibrium [10]. Additionally, there is an extensive
literature on optimal auction design [10], of which the work by Roger
Myerson [11] is, perhaps, the most relevant. In much of this work,
analytical results are presented with respect to specific utility
functions and accounting for constraints such as incentive compatibility
and individual rationality.
Several related approaches to search for the best mechanism
exist in the Computer Science literature. Conitzer and Sandholm [6]
developed a search algorithm when all the relevant game
parameters are common knowledge. When payoff functions of players
are unknown, a search using simulations has been explored as an
alternative. One approach in that direction, taken in [4] and [15],
is to co-evolve the mechanism parameter and agent strategies,
using some notion of social utility and agent payoffs as fitness
criteria. An alternative to co-evolution explored in [16] was to
optimize a well-defined welfare function of the designer using genetic
programming. In this work the authors used a common learning
strategy for all agents and defined an outcome of a game induced
by a mechanism parameter as the outcome of joint agent learning.
Most recently, Phelps et al. [14] compared two mechanisms based
on expected social utility with expectation taken over an empirical
distribution of equilibria in games defined by heuristic strategies,
as in [18].
7. CONCLUSION
In this work we spent considerable effort developing general
tactics for empirical mechanism design. We defined a formal
game-theoretic model of interaction between the designer and the
participants of the mechanism as a two-stage game. We also described in
some generality the methods for estimating a sample Nash
equilibrium function when the data is extremely scarce, or a Nash
equilibrium correspondence when more data is available. Our techniques
are designed specifically to deal with problems in which both the
mechanism parameter space and the agent strategy sets are infinite
and only a relatively small data set can be acquired.
A difficult design issue in the TAC/SCM game, which the TAC
community has been eager to address, provides us with a setting
to test our methods.
problem at hand, we are fully aware that our results are inherently
inexact. Thus, we concentrate on collecting evidence about the
structure of the Nash equilibrium correspondence. In the end, we
can try to provide enough evidence to either prescribe a parameter
setting, or suggest that no setting is possible that will satisfy the
designer. In the case of TAC/SCM, our evidence suggests quite
strongly that storage cost could not have been effectively adjusted
in the 2004 tournament to curb excessive day-0 procurement
without detrimental effects on overall profitability. The success of our
analysis in this extremely complex environment with high
simulation costs makes us optimistic that our methods can provide
guidance in making mechanism design decisions in other challenging
domains. The theoretical results confirm some intuitions behind
the empirical mechanism design methods we have introduced, and
increase our confidence that our framework can be effective in
estimating the best mechanism parameter choice in relatively general
settings.
Acknowledgments
We thank Terence Kelly, Matthew Rudary, and Satinder Singh for
helpful comments on earlier drafts of this work. This work was
supported in part by NSF grant IIS-0205435 and the DARPA REAL
strategic reasoning program.
8. REFERENCES
[1] R. Arunachalam and N. M. Sadeh. The supply chain trading
agent competition. Electronic Commerce Research and
Applications, 4:63-81, 2005.
[2] M. Benisch, A. Greenwald, V. Naroditskiy, and M. Tschantz.
A stochastic programming approach to scheduling in TAC
SCM. In Fifth ACM Conference on Electronic Commerce,
pages 152-159, New York, 2004.
[3] Y.-P. Chang and W.-T. Huang. Generalized confidence
intervals for the largest value of some functions of
parameters under normality. Statistica Sinica, 10:1369-1383,
2000.
[4] D. Cliff. Evolution of market mechanism through a
continuous space of auction-types. In Congress on
Evolutionary Computation, 2002.
[5] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active
learning with statistical models. Journal of Artificial
Intelligence Research, 4:129-145, 1996.
[6] V. Conitzer and T. Sandholm. An algorithm for automatically
designing deterministic mechanisms without payments. In
Third International Joint Conference on Autonomous Agents
and Multi-Agent Systems, pages 128-135, 2004.
[7] D. Friedman. Evolutionary games in economics.
Econometrica, 59(3):637-666, May 1991.
[8] R. Keener. Statistical Theory: A Medley of Core Topics.
University of Michigan Department of Statistics, 2004.
[9] C. Kiekintveld, Y. Vorobeychik, and M. P. Wellman. An
analysis of the 2004 supply chain management trading agent
competition. In IJCAI-05 Workshop on Trading Agent
Design and Analysis, Edinburgh, 2005.
[10] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic
Theory. Oxford University Press, 1995.
[11] R. B. Myerson. Optimal auction design. Mathematics of
Operations Research, 6(1):58-73, February 1981.
[12] S. Olafsson and J. Kim. Simulation optimization. In
E. Yucesan, C.-H. Chen, J. Snowdon, and J. Charnes, editors,
2002 Winter Simulation Conference, 2002.
[13] D. Pardoe and P. Stone. TacTex-03: A supply chain
management agent. SIGecom Exchanges, 4(3):19-28, 2004.
[14] S. Phelps, S. Parsons, and P. McBurney. Automated agents
versus virtual humans: an evolutionary game theoretic
comparison of two double-auction market designs. In
Workshop on Agent Mediated Electronic Commerce VI,
2004.
[15] S. Phelps, S. Parsons, P. McBurney, and E. Sklar.
Co-evolution of auction mechanisms and trading strategies:
towards a novel approach to microeconomic design. In
ECOMAS 2002 Workshop, 2002.
[16] S. Phelps, S. Parsons, E. Sklar, and P. McBurney. Using
genetic programming to optimise pricing rules for a
double-auction market. In Workshop on Agents for Electronic
Commerce, 2003.
[17] Y. Vorobeychik, M. P. Wellman, and S. Singh. Learning
payoff functions in infinite games. In Nineteenth
International Joint Conference on Artificial Intelligence,
pages 977-982, 2005.
[18] W. E. Walsh, R. Das, G. Tesauro, and J. O. Kephart.
Analyzing complex strategic interactions in multi-agent
systems. In AAAI-02 Workshop on Game Theoretic and
Decision Theoretic Agents, 2002.
[19] M. P. Wellman, J. J. Estelle, S. Singh, Y. Vorobeychik,
C. Kiekintveld, and V. Soni. Strategic interactions in a supply
chain game. Computational Intelligence, 21(1):1-26,
February 2005.
APPENDIX
A. PROOFS
A.1 Proof of Proposition 1
Pr( max_{i∈I} max_{b∈Ai\ai} ui(b, a−i) − ui(a) ≤ ε )
= ∏_{i∈I} E_{ui(a)} [ Pr( max_{b∈Ai\ai} ui(b, a−i) − ui(a) ≤ ε | ui(a) ) ]
= ∏_{i∈I} ∫_R ∏_{b∈Ai\ai} Pr( ui(b, a−i) ≤ u + ε ) f_{ui(a)}(u) du.
A.2 Proof of Proposition 2
First, let us suppose that some function f(x) defined on [ai, bi]
satisfies the Lipschitz condition on (ai, bi] with Lipschitz constant Ai.
Then the following claim holds:
Claim: inf_{x∈(ai,bi]} f(x) ≥ 0.5( f(ai) + f(bi) − Ai(bi − ai) ).
To prove this claim, note that the intersection of the lines through f(ai)
and f(bi) with slopes −Ai and Ai, respectively, determines a
lower bound on the minimum of f(x) on [ai, bi] (which is a lower
bound on the infimum of f(x) on (ai, bi]). The line through f(ai) is
determined by f(ai) = −Ai·ai + cL and the line through f(bi) is determined
by f(bi) = Ai·bi + cR. Thus, the intercepts are cL = f(ai) + Ai·ai
and cR = f(bi) − Ai·bi, respectively. Let x* be the point at which
these lines intersect. Then
(cL − f(x*)) / Ai = x* = (f(x*) − cR) / Ai,
so that f(x*) = 0.5(cL + cR). By substituting the expressions for cR and
cL, we get the desired result.
Now, subadditivity gives us
Pr{ ∨_{θ∈Θ} sup{φ*(θ)} ≤ α } ≤ Σ_{j=1}^{5} Pr{ ∨_{θ∈Θj} sup{φ*(θ)} ≤ α },
and, by the claim,
Pr{ ∨_{θ∈Θj} sup{φ*(θ)} ≤ α } = 1 − Pr{ inf_{θ∈Θj} sup{φ*(θ)} > α }
≤ Pr{ sup{φ*(aj)} + sup{φ*(bj)} ≤ 2α + Aj(bj − aj) }.
Since we have a finite number of points in the data set for each
θ, we can obtain the following expression:
Pr{ sup{φ*(aj)} + sup{φ*(bj)} ≤ cj } =_D Σ_{y,z∈D: y+z≤cj} Pr{ sup{φ*(bj)} = y } Pr{ sup{φ*(aj)} = z }.
We can now restrict attention to deriving an upper bound on
Pr{sup{φ*(θ)} = y} for a fixed θ. To do this, observe that
Pr{ sup{φ*(θ)} = y } ≤_D Pr{ ∨_{a∈D: φ(a)=y} ε(a) = 0 } ≤ Σ_{a∈D: φ(a)=y} Pr{ε(a) = 0}
by subadditivity and the fact that a profile a is a Nash equilibrium
if and only if ε(a) = 0.
Putting everything together yields the desired result.
A.3 Proof of Theorem 3
First, we will need the following fact:
Claim: Given functions f1(x), f2(x) and a set X, |max_{x∈X} f1(x) −
max_{x∈X} f2(x)| ≤ max_{x∈X} |f1(x) − f2(x)|.
To prove this claim, observe that |max_x f1(x) − max_x f2(x)| equals
max_x f1(x) − max_x f2(x) if max_x f1(x) ≥ max_x f2(x), and
max_x f2(x) − max_x f1(x) if max_x f2(x) ≥ max_x f1(x). In the first case,
max_{x∈X} f1(x) − max_{x∈X} f2(x) ≤ max_{x∈X} (f1(x) − f2(x)) ≤ max_{x∈X} |f1(x) − f2(x)|.
Similarly, in the second case,
max_{x∈X} f2(x) − max_{x∈X} f1(x) ≤ max_{x∈X} (f2(x) − f1(x)) ≤ max_{x∈X} |f2(x) − f1(x)| = max_{x∈X} |f1(x) − f2(x)|.
Thus, the claim holds.
By the Strong Law of Large Numbers, un,i(a) → ui(a) a.s. for
all i ∈ I, a ∈ A. That is,
Pr{ lim_{n→∞} un,i(a) = ui(a) } = 1,
or, equivalently [8], for any α > 0 and δ > 0, there is M(i, a) > 0
such that
Pr{ sup_{n≥M(i,a)} |un,i(a) − ui(a)| < δ/(2|A|) } ≥ 1 − α.
By taking M = max_{i∈I} max_{a∈A} M(i, a), we have
Pr{ max_{i∈I} max_{a∈A} sup_{n≥M} |un,i(a) − ui(a)| < δ/(2|A|) } ≥ 1 − α.
Thus, by the claim, for any n ≥ M,
sup_{n≥M} |εn(s) − ε(s)|
≤ max_{i∈I} max_{ai∈Ai} sup_{n≥M} |un,i(ai, s−i) − ui(ai, s−i)| + sup_{n≥M} max_{i∈I} |un,i(s) − ui(s)|
≤ max_{i∈I} max_{ai∈Ai} Σ_{b∈A−i} sup_{n≥M} |un,i(ai, b) − ui(ai, b)| s−i(b) + max_{i∈I} Σ_{b∈A} sup_{n≥M} |un,i(b) − ui(b)| s(b)
≤ max_{i∈I} max_{ai∈Ai} Σ_{b∈A−i} sup_{n≥M} |un,i(ai, b) − ui(ai, b)| + max_{i∈I} Σ_{b∈A} sup_{n≥M} |un,i(b) − ui(b)|
< max_{i∈I} max_{ai∈Ai} Σ_{b∈A−i} δ/(2|A|) + max_{i∈I} Σ_{b∈A} δ/(2|A|) ≤ δ
with probability at least 1 − α. Note that since s−i(a) and s(a)
are bounded between 0 and 1, we were able to drop them from
the expressions above to obtain a bound that is valid
independent of the particular choice of s. Furthermore, since the above
result can be obtained for arbitrary α > 0 and δ > 0, we have
Pr{ lim_{n→∞} εn(s) = ε(s) } = 1 uniformly on S.
A.4 Proof of Lemma 5
We prove the result using uniform continuity of ui(s) and
preservation of continuity under maximum.
Claim: A function f : R^k → R defined by f(t) = Σ_{i=1}^{k} zi·ti,
where the zi are constants in R, is uniformly continuous in t.
The claim follows because |f(t) − f(t′)| = |Σ_{i=1}^{k} zi(ti − ti′)| ≤
Σ_{i=1}^{k} |zi||ti − ti′|. An immediate result of this for our purposes is
that ui(s) is uniformly continuous in s and ui(ai, s−i) is uniformly
continuous in s−i.
Claim: Let f(a, b) be uniformly continuous in b ∈ B for every
a ∈ A, with |A| < ∞. Then V(b) = max_{a∈A} f(a, b) is uniformly
continuous in b.
To show this, take γ > 0 and let b, b′ ∈ B be such that ||b − b′|| <
δ(a) ⇒ |f(a, b) − f(a, b′)| < γ. Now take δ = min_{a∈A} δ(a).
Then, whenever ||b − b′|| < δ,
|V(b) − V(b′)| = |max_{a∈A} f(a, b) − max_{a∈A} f(a, b′)| ≤ max_{a∈A} |f(a, b) − f(a, b′)| < γ.
Now, recall that ε(s) = max_i [max_{ai∈Ai} ui(ai, s−i) − ui(s)]. By
the claims above, max_{ai∈Ai} ui(ai, s−i) is uniformly continuous in
s−i and ui(s) is uniformly continuous in s. Since the difference of
two uniformly continuous functions is uniformly continuous, and
since this continuity is preserved under maximum by our second
claim, we have the desired result.
A.5 Proof of Theorem 6
Choose δ > 0. First, we need to ascertain that the following
claim holds:
Claim: ε̄ = min_{s∈S\Nδ} ε(s) exists and ε̄ > 0.
Since Nδ is an open subset of compact S, it follows that S \
Nδ is compact. As we also proved in Lemma 5 that ε(s) is
continuous, existence follows from the Weierstrass theorem. That
ε̄ > 0 is clear since ε(s) = 0 if and only if s is a Nash equilibrium
of Γ.
Now, by Theorem 3, for any α > 0 there is M such that
Pr{ sup_{n≥M} sup_{s∈S} |εn(s) − ε(s)| < ε̄ } ≥ 1 − α.
Consequently, for any δ > 0,
Pr{ sup_{n≥M} h(Nn, Nδ) < δ } ≥ Pr{ ∀n ≥ M, Nn ⊂ Nδ }
≥ Pr{ sup_{n≥M} sup_{s∈Nn} ε(s) < ε̄ } ≥ Pr{ sup_{n≥M} sup_{s∈S} |εn(s) − ε(s)| < ε̄ } ≥ 1 − α.
Since this holds for arbitrary α > 0 and δ > 0, the desired result
follows.
A.6 Proof of Theorem 7
Fix θ and choose δ > 0. Since W (s, θ) is continuous at s∗
(θ),
given > 0 there is δ > 0 such that for every s that is within δ of
s∗
(θ), |W (s , θ) − W (s∗
(θ), θ)| < . By Theorem 6, we can find
M(θ) large enough such that all s ∈ Nn are within δ of s∗
(θ) for
all n ≥ M(θ) with probability 1. Consequently, for any > 0 we
can find M(θ) large enough such that with probability 1 we have
supn≥M(θ) sups ∈Nn
|W (s , θ) − W (s∗
(θ), θ)| < .
Let us assume without loss of generality that there is a unique
optimal choice of θ. Now, since the set Θ is finite, there is also the
second-best choice of θ (if there is only one θ ∈ Θ this discussion
is moot anyway):
θ** = argmax_{θ∈Θ\{θ*}} W(s*(θ), θ).

Suppose w.l.o.g. that θ** is also unique and let

Δ = W(s*(θ*), θ*) − W(s*(θ**), θ**).
Then if we let ε < Δ/2 and let M = max_{θ∈Θ} M(θ), where each M(θ) is large enough such that sup_{n≥M(θ)} sup_{s′∈N_n} |W(s′, θ) − W(s*(θ), θ)| < ε a.s., the optimal choice of θ based on any empirical equilibrium will be θ* with probability 1. Thus, in particular, given any probability distribution over empirical equilibria, the best choice of θ will be θ* with probability 1 (and similarly if we take the supremum or infimum of W(N_n(θ), θ) over the set of empirical equilibria in constructing the objective function).
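Operationally, the selection rule analyzed here is easy to state as code. The following is a hypothetical sketch: `empirical_equilibria` and `W` stand in for routines the theorem assumes are available, namely the empirical equilibrium sets N_n(θ) and the designer's objective.

```python
def choose_design(thetas, empirical_equilibria, W, pessimistic=True):
    """Pick the design parameter theta optimizing the objective W over
    the empirical equilibrium set N_n(theta).

    empirical_equilibria(theta) -> iterable of profiles s in N_n(theta)
    W(s, theta)                 -> designer's objective at profile s

    pessimistic=True takes the infimum of W over N_n(theta), mirroring
    the sup/inf variants mentioned at the end of the proof.
    """
    aggregate = min if pessimistic else max
    best_score, best_theta = None, None
    for theta in thetas:
        score = aggregate(W(s, theta) for s in empirical_equilibria(theta))
        if best_score is None or score > best_score:
            best_score, best_theta = score, theta
    return best_theta
```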
| analysis;nash equilibrium;participant;game theory;observed behavior;outcome feature of interest;parameter setting;player;gametheoretic model;supply-chain trading;two-stage game;empirical mechanism design;interest outcome feature;empirical mechanism |
train_J-47 | On the Computational Power of Iterative Auctions∗ | We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their demand under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions. | 1. INTRODUCTION
Combinatorial auctions have recently received a lot of
attention. In a combinatorial auction, a set M of m
nonidentical items are sold in a single auction to n competing
bidders. The bidders have preferences regarding the bundles
of items that they may receive. The preferences of bidder i
are specified by a valuation function vi : 2^M → R+, where
vi(S) denotes the value that bidder i attaches to winning the
bundle of items S. We assume free disposal, i.e., that the
vi's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σ_i vi(Si), where the allocation S1...Sn must be a partition of the items.
Applications include many complex resource allocation problems
and, in fact, combinatorial auctions may be viewed as the
common abstraction of many complex resource allocation
problems. Combinatorial auctions face both economic and
computational difficulties and are a central problem in the
recently active border of economic theory and computer
science. A forthcoming book [11] addresses many of the issues
involved in the design and implementation of combinatorial
auctions.
The design of a combinatorial auction involves many
considerations. In this paper we focus on just one central
issue: the communication between bidders and the allocation
mechanism - preference elicitation. Transferring all
information about bidders' preferences requires an infeasible
(exponential in m) amount of communication. Thus,
direct revelation auctions in which bidders simply declare
their preferences to the mechanism are only practical for
very small auction sizes or for very limited families of bidder
preferences. We have therefore seen a multitude of suggested
iterative auctions in which the auction protocol repeatedly
interacts with the different bidders, aiming to adaptively
elicit enough information about the bidders' preferences as
to be able to find a good (optimal or close to optimal)
allocation.
Most of the suggested iterative auctions proceed by
maintaining temporary prices for the bundles of items and
repeatedly querying the bidders as to their preferences between the
bundles under the current set of prices, and then updating
the set of bundle prices according to the replies received
(e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative
auction accesses the bidders' preferences by repeatedly making
the following type of demand query to bidders: Query to bidder i: a vector of bundle prices p = {p(S)}_{S⊆M}; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S).
These types of queries are very natural in an economic
setting as they capture the revealed preferences of the
bidders. Some auctions, called item-price or linear-price
auctions, specify a price pi for each item, and the price of any
given bundle S is always linear, p(S) = Σ_{i∈S} pi. Other
auctions, called bundle-price auctions, allow specifying
arbitrary (non-linear) prices p(S) for bundles. Another
important differentiation between models of iterative auctions is
based on whether they use anonymous or non-anonymous
prices: In some auctions the prices that are presented to the
bidders are always the same (anonymous prices). In other
auctions (non-anonymous), different bidders may face
different (discriminatory) vectors of prices. In ascending-price
auctions, forcing prices to be anonymous may be a
significant restriction.
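To fix intuition, here is a brute-force sketch (ours) of how a bidder answers an item-price demand query; it enumerates all 2^m bundles, so it illustrates only the semantics of the query, not an efficient bidder.

```python
from itertools import chain, combinations

def bundles(items):
    """All subsets of the item set, in a fixed enumeration order."""
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def demand_query(v, items, prices):
    """Answer an item-price demand query: a bundle S maximizing
    v(S) - sum of item prices, with a fixed tie-breaking rule
    (the first maximizer in enumeration order)."""
    best, best_util = None, None
    for S in bundles(items):
        util = v(frozenset(S)) - sum(prices[j] for j in S)
        if best_util is None or util > best_util:
            best, best_util = frozenset(S), util
    return best

# Example: an additive bidder valuing every item at 2.
items = ["a", "b", "c"]
v = lambda S: 2 * len(S)
print(demand_query(v, items, {"a": 1.0, "b": 3.0, "c": 2.0}))
# -> frozenset({'a'}): b is overpriced, and {a, c} only ties {a}
#    (c adds zero surplus), so the earlier-enumerated bundle wins.
```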
In this paper, we embark on a systematic analysis of the
computational power of iterative auctions that are based
on demand queries. We do not aim to present auctions for
practical use but rather to understand the limitations and
possibilities of these kinds of auctions. In the first part of
this paper, our main question is what can be done using a
polynomial number of these types of queries? That is,
polynomial in the main parameters of the problem: n, m and the
number of bits t needed for representing a single value vi(S).
Note that from an algorithmic point of view we are talking
about sub-linear time algorithms: the input size here is
really n(2^m − 1) numbers - the descriptions of the valuation
functions of all bidders. There are two aspects to
computational efficiency in these settings: the first is the
communication with the bidders, i.e., the number of queries made, and
the second is the usual computational tractability. Our
lower bounds will depend only on the number of queries and hold independently of any computational assumptions such as P ≠ NP. Our upper bounds will always be
computationally efficient both in terms of the number of queries
and in terms of regular computation. As mentioned, this
paper concentrates on the single aspect of preference elicitation
and on its computational consequences and does not address
issues of incentives. This strengthens our lower bounds, but
means that the upper bounds require evaluation from this
perspective also before being used in any real combinatorial
auction.1
The second part of this paper studies the power of
ascending-price auctions. Ascending auctions are iterative
auctions where the published prices cannot decrease in time. In
this work, we try to systematically analyze what the differences between various models of ascending auctions actually mean.
We try to answer the following questions: (i) Which models
of ascending auctions can find the optimal allocation, and for
which classes of valuations? (ii) In cases where the optimal
allocation cannot be determined by ascending auctions, how
well can such auctions approximate the social welfare? (iii)
How do the different models for ascending auctions compare?
Are some models computationally stronger than others?
Ascending auctions have been extensively studied in the
literature (see the recent survey by Parkes [35]). Most of this
work presented "upper bounds", i.e., proposed mechanisms
with ascending prices and analyzed their properties. A
result which is closer in spirit to ours is by Gul and Stacchetti
[17], who showed that no item-price ascending auction can
always determine the VCG prices, even for substitutes
valuations.2
Our framework is more general than the traditional
line of research that concentrates on the final allocation and
1
We do observe however that some weak incentive
property comes for free in demand-query auctions since myopic
players will answer all demand queries truthfully. We also
note that in some cases (but not always!) the incentives
issues can be handled orthogonally to the preference
elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG)
prices (e.g., [4, 34]).
2
We further discuss this result in Section 5.3.
[Figure 1 diagram: nested classes of iterative auctions - demand auctions, item-price auctions, anonymous-price auctions, and ascending auctions - with the ten auctions listed below placed in the corresponding regions.]
Figure 1: The diagram classifies the following auctions
according to their properties:
(1) The adaptation [12] for Kelso & Crawford"s [22]
auction.
(2) The Proxy Auction [3] by Ausubel & Milgrom.
(3) iBundle(3) by Parkes & Ungar [34].
(4) iBundle(2) by Parkes & Ungar [37].
(5) Our descending adaptation for the 2-approximation
for submodular valuations by [25] (see Subsection 5.4).
(6) Ausubel"s [4] auction for substitutes valuations.
(7) The adaptation by Nisan & Segal [32] of the O(√m) approximation by [26].
(8) The duplicate-item auction by [5].
(9) Auction for Read-Once formulae by [43].
(10) The AkBA Auction by Wurman & Wellman [42].
payments and in particular, on reaching "Walrasian
equilibria" or "Competitive equilibria". A Walrasian equilibrium3
is known to exist in the case of Substitutes valuations, and
is known to be impossible for any wider class of valuations
[16]. This does not rule out other allocations by ascending
auctions: in this paper we view the auctions as a
computational process where the outcome - both the allocation
and the payments - can be determined according to all the
data elicited throughout the auction; this general
framework strengthens our negative results.4
We find the study of ascending auctions appealing for
various reasons. First, ascending auctions are widely used in
many real-life settings from the FCC spectrum auctions [15]
to almost any e-commerce website (e.g., [2, 1]). Actually,
this is perhaps the most straightforward way to sell items: ask
the bidders what they would like to buy under certain prices,
and increase the prices of over-demanded goods. Ascending
auctions are also considered more intuitive for many bidders,
and are believed to increase the trust of the bidders in the
auctioneer, as they see the result gradually emerging from
the bidders' responses. Ascending auctions also have other
desired economic properties, e.g., they incur smaller
information revelation (consider, for example, English auctions
vs. second-price sealed bid auctions).
1.1 Extant Work
Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summarizes the basic classes of auctions implied by combinations of the above properties and classifies some of the auctions proposed in the literature according to this classification.
3 A Walrasian equilibrium is a vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set.
4 In a few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auction.
| Valuation family | Upper bound | Reference | Lower bound | Reference |
| General | min(n, O(√m)) | [26], Section 4.2 | min(n, m^{1/2−ε}) | [32] |
| Substitutes | 1 | [32] | | |
| Submodular | 2 | [25] | 1 + 1/(2m); 1 − 1/e (*) | [32]; [23] |
| Subadditive | O(log m) | [13] | 2 | [13] |
| k-duplicates | O(m^{1/(k+1)}) | [14] | O(m^{1/(k+1)}) | [14] |
| Procurement | ln m | [32] | (log m)/2 | [29, 32] |

Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries.
For our purposes, two families of these auctions serve as
the main motivating starting points: the first is the
ascending item-price auctions of [22, 17] that with computational
efficiency find an optimal allocation among (gross)
substitutes valuations, and the second is the ascending
bundle-price auctions of [37, 3] that find an optimal allocation
among general valuations - but not necessarily with
computational efficiency. The main lower bound in this area,
due to [32], states that indeed, due to inherent
communication requirements, it is not possible for any iterative auction
to find the optimal allocation among general valuations with
sub-exponentially many queries. A similar exponential lower
bound was shown in [32] also for even approximating the
optimal allocation to within a factor of m^{1/2−ε}. Several lower
bounds and upper bounds for approximation are known for
some natural classes of valuations - these are summarized
in Figure 2.
In [32], the universal generality of demand queries is also
shown: any non-deterministic communication protocol for
finding an allocation that optimizes the social welfare can
be converted into one that only uses demand queries (with
bundle prices). In [41] this was generalized also to
nondeterministic protocols for finding allocations that satisfy
other natural types of economic requirements (e.g.,
approximate social efficiency, envy-freeness). However, in [33] it was
demonstrated that this completeness of demand queries
holds only in the nondeterministic setting, while in the usual
deterministic setting, demand queries (even with bundle
prices) may be exponentially weaker than general
communication.
Bundle-price auctions are a generalization of (the more
natural and intuitive) item-price auctions. It is known that
indeed item-price auctions may be exponentially weaker: a
nice example is the case of valuations that are a XOR of k
bundles5
, where k is small (say, polynomial). Lahaie and
Parkes [24] show an economically-efficient bundle-price
auction that uses a polynomial number of queries whenever k is
polynomial. In contrast, [7] show that there exist valuations
that are XORs of k = √m bundles such that any item-price
auction that finds an optimal allocation between them
requires exponentially many queries. These results are part
of a recent line of research ([7, 43, 24, 40]) that studies the
preference elicitation problem in combinatorial auctions
and its relation to the full elicitation problem (i.e., learning the exact valuations of the bidders). These papers adapt methods from machine-learning theory to the combinatorial-auction setting.
5 These are valuations where bidders have values for k specific packages, and the value of each bundle is the maximal value of one of these packages that it contains.
The preference elicitation problem and the
full elicitation problem relate to a well studied problem in
microeconomics known as the integrability problem (see, e.g.,
[27]). This problem studies if and when one can derive the
utility function of a consumer from her demand function.
Paper organization: Due to the relatively large
number of results we present, we start with a survey of our new
results in Section 2. After describing our formal model in
Section 3, we present our results concerning the power of
demand queries in Section 4. Then, we describe the power of
item-price ascending auctions (Section 5) and bundle-price
ascending auctions (Section 6). Readers who are mainly
interested in the self-contained discussion of ascending
auctions can skip Section 4.
Missing proofs from Section 4 can be found in part I of
the full paper ([8]). Missing proofs from Sections 5 and 6
can be found in part II of the full paper ([9]).
2. A SURVEY OF OUR RESULTS
Our systematic analysis is composed of the combination
of a rather large number of results characterizing the power
and limitations of various classes of auctions. In this section,
we will present an exposition describing our new results. We
first discuss the power of demand-query iterative auctions,
and then we turn our attention to ascending auctions.
Figure 3 summarizes some of our main results.
2.1 Demand Queries
Comparison of query types
We first ask what other natural types of queries could
we imagine iterative auctions using? Here is a list of such
queries that are either natural, have been used in the
literature, or that we found useful.
1. Value query: The auctioneer presents a bundle S, the
bidder reports his value v(S) for this bundle.
2. Marginal-value query: The auctioneer presents a
bundle A and an item j, the bidder reports how much he
is willing to pay for j, given that he already owns A,
i.e., v(j|A) = v(A ∪ {j}) − v(A).
3. Demand query (with item prices): The auctioneer
presents a vector of item prices p1...pm; the bidder reports
his demand under these prices, i.e., some set S that
maximizes v(S) − Σ_{i∈S} pi.6
6 A tie breaking rule should be specified. All of our results apply for any fixed tie-breaking rule.
| Communication constraint | Can find an optimal allocation? | Upper bound for welfare approx. | Lower bound for welfare approx. |
| Item-Price Demand Queries | Yes | 1 | 1 |
| Poly. Communication | No [32] | min(n, O(m^{1/2})) [26] | min(n, m^{1/2−ε}) [32] |
| Poly. Item-Price Demand Queries | No [32] | min(n, O(m^{1/2})) | min(n, m^{1/2−ε}) [32] |
| Poly. Value Queries | No [32] | O(m/√log m) [19] | O(m/log m) |
| Anonymous Item-Price AA | No | - | min(O(n), O(m^{1/2−ε})) |
| Non-anonymous Item-Price AA | No | - | - |
| Anonymous Bundle-Price AA | No | - | min(O(n), O(m^{1/2−ε})) |
| Non-anonymous Bundle-Price AA | Yes [37] | 1 | 1 |
| Poly. Number of Item-Price AA | No | min(n, O(m^{1/2})) | - |

Figure 3: This paper studies the economic efficiency of auctions that follow certain communication constraints. For each class of auctions, the table shows whether the optimal allocation can be achieved, or else, how well it can be approximated (both upper bounds and lower bounds). New results are highlighted.
Abbreviations: Poly. (polynomial number/size), AA (ascending auctions). '-' means that nothing is currently known except trivial solutions.
4. Indirect-utility query: The auctioneer presents a set of item prices p1...pm, and the bidder responds with his indirect utility under these prices, that is, the highest utility he can achieve from a bundle under these prices: max_{S⊆M} (v(S) − Σ_{i∈S} pi).7

5. Relative-demand query: The auctioneer presents a set of non-zero prices p1...pm, and the bidder reports the bundle that maximizes his value per unit of money, i.e., some set that maximizes v(S) / Σ_{i∈S} pi.8
Theorem: Each of these queries can be efficiently (i.e., in
time polynomial in n, m, and the number of bits of precision
t needed to represent a single value vi(S)) simulated by a
sequence of demand queries with item prices.
In particular this shows that demand queries can elicit all
information about a valuation by simulating all 2^m − 1 value queries. We also observe that value queries and marginal-value queries can simulate each other in polynomial time
and that demand queries and indirect-utility queries can also
simulate each other in polynomial time. We prove that
exponentially many value queries may be needed in order to
simulate a single demand query. It is interesting to note
that for the restricted class of substitutes valuations,
demand queries may be simulated by a polynomial number of
value queries [6].
Welfare approximation
The next question that we ask is how well a computationally-efficient auction that uses only demand queries can approximate the optimal allocation. Two separate
obstacles are known: In [32], a lower bound of min(n, m^{1/2−ε}), for any fixed ε > 0, was shown for the approximation factor
7
This is exactly the utility achieved by the bundle which
would be returned in a demand query with the same prices.
This notion relates to the Indirect-utility function studied
in the Microeconomic literature (see, e.g., [27]).
8
Note that when all the prices are 1, the bidder actually
reports the bundle with the highest per-item price. We found
this type of query useful, for example, in the design of the
approximation algorithm described in Figure 5 in Section
4.2.
obtained using any polynomial amount of communication.
A computational bound with the same value applies even for the case of single-minded bidders, under the assumption that NP ≠ ZPP [39]. As noted in [32], the computationally-efficient greedy algorithm of [26] can be adapted to become a polynomial-time iterative auction that achieves a nearly matching approximation factor of min(n, O(√m)). This
iterative auction may be implemented with bundle-price
demand queries but, as far as we see, not as one with item
prices. Since in a single bundle-price demand query an
exponential number of prices can be presented, this algorithm
can have an exponential communication cost. In Section
4.2, we describe a different item-price auction that achieves
the same approximation factor with a polynomial number
of queries (and thus with a polynomial communication).
Theorem: There exists a computationally-efficient
iterative auction with item-price demand queries that finds an
allocation that approximates the optimal welfare between
arbitrary valuations to within a factor of min(n, O(√m)).
One may then attempt obtaining such an approximation
factor using iterative auctions that use only the weaker value
queries. However, we show that this is impossible:
Theorem: Any iterative auction that uses a polynomial
(in n and m) number of value queries cannot achieve an approximation factor that is better than O(m/log m).9
Note however that auctions with only value queries are not
completely trivial in power: the bundling auctions of
Holzman et al. [19] can easily be implemented by a polynomial
number of value queries and can achieve an approximation
factor of O(m/√log m) by using O(log m) equi-sized bundles.
We do not know how to close the (tiny) gap between this
upper bound and our lower bound.
Representing bundle-prices
We then deal with a critical issue with bundle-price
auctions that was side-stepped by our model, as well as by all
previous works that used bundle-price auctions: how are
9
This was also proven independently by Shahar Dobzinski
and Michael Schapira.
the bundle prices represented? For item-price auctions this
is not an issue since a query needs only to specify a small
number, m, of prices. In bundle-price auctions that
situation is more difficult since there are exponentially many
bundles that require pricing. Our basic model (like all
previous work that used bundle prices, e.g., [37, 34, 3]), ignores
this issue, and only requires that the prices be determined,
somehow, by the protocol. A finer model would fix a
specific language for denoting bundle prices, force the protocol
to represent the bundle-prices in this language, and require
that the representations of the bundle-prices also be
polynomial.
What could such a language for denoting prices for all
bundles look like? First note that specifying a price for
each bundle is equivalent to specifying a valuation. Second,
as noted in [31], most of the proposed bidding languages
are really just languages for representing valuations, i.e., a
syntactic representation of valuations - thus we could use
any of them. This point of view opens up the general issue
of which language should be used in bundle-price auctions
and what are the implications of this choice.
Here we initiate this line of investigation. We consider
bundle-price auctions where the prices must be given as a
XOR-bid, i.e., the protocol must explicitly indicate the price
of every bundle whose value is different than that of all of
its proper subsets. Note that all bundle-price auctions that
do not explicitly specify a bidding language must implicitly
use this language or a weaker one, since without a specific
language one would need to list prices for all bundles,
perhaps except for trivial ones (those with value 0, or more
generally, those with a value that is determined by one of
their proper subsets.) We show that once the
representation length of bundle prices is taken into account (using the
XOR-language), bundle-price auctions are no longer strictly
stronger than item-price auctions. Define the cost of an
iterative auction as the total length of the queries and answers
used throughout the auction (in the worst case).
Theorem: For some class of valuations, bundle price
auctions that use the XOR-language require an exponential cost
for finding the optimal allocation. In contrast, item-price
auctions can find the optimal allocation for this class within
polynomial cost.10
This casts doubt on the applicability of bundle-price auctions like [3, 37], and it may justify the use of hybrid pricing methods such as Ausubel, Cramton and Milgrom's Clock-Proxy auction ([10]).
Demand queries and linear programs
The winner determination problem in combinatorial
auctions may be formulated as an integer program. In many
cases solving the linear-program relaxation of this integer
program is useful: for some restricted classes of valuations
it finds the optimum of the integer program (e.g., substitute
valuations [22, 17]) or helps approximating the optimum
(e.g., by randomized rounding [13, 14]). However, the linear
program has an exponential number of variables. Nisan and
Segal [32] observed the surprising fact that despite the exponential number of variables, this linear program may be solved within polynomial communication.
10 Our proof relies on the sophisticated known lower bounds for constant depth circuits. We were not able to find an elementary proof.
The basic idea is to solve the dual program using the Ellipsoid method (see,
e.g., [20]). The dual program has a polynomial number of
variables, but an exponential number of constraints. The
Ellipsoid algorithm runs in polynomial time even on such
programs, provided that a separation oracle is given for
the set of constraints. Surprisingly, such a separation oracle
can be implemented using a single demand query (with item
prices) to each of the bidders.
The treatment of [32] was somewhat ad-hoc to the
problem at hand (the case of substitute valuations). Here we
give a somewhat more general form of this important
observation. Let us call the following class of linear programs
generalized-winner-determination-relaxation (GWDR)
LPs:
Maximize Σ_{i∈N, S⊆M} wi · xi,S · vi(S)
s.t.
  Σ_{i∈N, S: j∈S} xi,S ≤ qj    ∀j ∈ M
  Σ_{S⊆M} xi,S ≤ di            ∀i ∈ N
  xi,S ≥ 0                     ∀i ∈ N, S ⊆ M
The case where wi = 1, di = 1, qj = 1 (for every i, j)
is the usual linear relaxation of the winner determination
problem. More generally, wi may be viewed as the weight
given to bidder i's welfare, qj may be viewed as the quantity of units of good j, and di may be viewed as the multiplicity of bidders of type i.
Theorem: Any GWDR linear program may be solved in
polynomial time (in n, m, and the number of bits of precision
t) using only demand queries with item prices.11
2.2 Ascending Auctions
Ascending item-price auctions:
It is well known that the item-price ascending auctions
of Kelso and Crawford [22] and its variants [12, 16] find
the optimal allocation as long as all players" valuations have
the substitutes property. The obvious question is whether
the optimal allocation can be found for a larger class of
valuations.
Our main result here is a strong negative result:
Theorem: There is a 2-item 2-player problem where no
ascending item-price auction can find the optimal allocation.
This is in contrast to both the power of bundle-price
ascending auctions and to the power of general item-price
demand queries (see above), both of which can always find the
optimal allocation and in fact even provide full preference
elicitation. The same proof proves a similar impossibility
result for other types of auctions (e.g., descending auctions,
non-anonymous auctions). More extensions of this result:
• Eliciting some classes of valuations requires an
exponential number of ascending item-price trajectories.
11
The produced optimal solution will have polynomial
support and thus can be listed fully.
• At least k − 1 ascending item-price trajectories are
needed to elicit XOR formulae with k terms. This
result is in some sense tight, since we show that any
k-term XOR formula can be fully elicited by k−1
nondeterministic (i.e., when some exogenous teacher
instructs the auctioneer on how to increase the prices)
ascending auctions.12
We also show that item-price ascending auctions and
iterative auctions that are limited to a polynomial number
of queries (of any kind, not necessarily ascending) are
incomparable in their power: ascending auctions, with small
enough increments, can elicit the preferences in cases where
any polynomial number of queries cannot.
Motivated by several recent papers that studied the
relation between eliciting and fully-eliciting the preferences in
combinatorial auctions (e.g., [7, 24]), we explore the
difference between these problems in the context of ascending
auctions. We show that although a single ascending auction can
determine the optimal allocation among any number of
bidders with substitutes valuations, it cannot fully-elicit such a
valuation even for a single bidder. While it was shown in [25]
that the set of substitutes valuations has measure zero in the
space of general valuations, its dimension is not known, and
in particular it is still open whether a polynomial amount
of information suffices to describe a substitutes valuation.
While our result may be a small step in that direction (a
polynomial full elicitation may still be possible with other
communication protocols), we note that our impossibility
result also holds for valuations in the class OXS defined by
[25], valuations that we are able to show have a compact
representation.
We also give several results separating the power of
different models for ascending combinatorial auctions that use
item-prices: we prove, not surprisingly, that adaptive
ascending auctions are more powerful than oblivious
ascending auctions and that non-deterministic ascending auctions
are more powerful than deterministic ascending auctions.
We also compare different kinds of non-anonymous auctions
(e.g., simultaneous or sequential), and observe that
anonymous bundle-price auctions and non-anonymous item-price
auctions are incomparable in their power. Finally,
motivated by Dutch auctions, we consider descending auctions,
and how they compare to ascending ones; we show classes of
valuations that can be elicited by ascending item-price
auctions but not by descending item-price auctions, and vice
versa.
Ascending bundle-price auctions:
All known ascending bundle-price auctions that are able
to find the optimal allocation between general valuations
(with free disposal) use non-anonymous prices.
Anonymous ascending-price auctions (e.g., [42, 21, 37]) are only
known to be able to find the optimal allocation among
super-additive valuations or a few other simple classes ([36]). We
show that this is no mistake:
Theorem: No ascending auction with anonymous prices
can find the optimal allocation between general valuations.
12
Non-deterministic computation is widely used in CS and
also in economics (e.g, a Walrasian equilibrium or [38]). In
some settings, deterministic and non-deterministic models
have equal power (e.g., computation with finite automata).
This bound holds regardless of the running time, and it also
holds for descending auctions and non-deterministic
auctions.
We strengthen this result significantly by showing that
anonymous ascending auctions cannot produce a better than
O(√m) approximation - the approximation ratio that can
be achieved with a polynomial number of queries ([26, 32])
and, as mentioned, with a polynomial number of item-price
demand queries. The same lower bound clearly holds for
anonymous item-price ascending auctions since such
auctions can be simulated by anonymous bundle-price
ascending auctions. We currently do not have any lower bound on
the approximation achievable by non-anonymous item-price
ascending auctions.
Finally, we study the performance of the existing
computationally-efficient ascending auctions. These protocols ([37,
3]) require exponential time in the worst case, and this is
unavoidable as shown by [32]. However, we also observe that
these auctions, as well as the whole class of similar
ascending bundle-price auctions, require an exponential time even
for simple additive valuations. This is avoidable and indeed
the ascending item-price auctions of [22] can find the
optimal allocation for these simple valuations with polynomial
communication.
3. THE MODEL
3.1 Discrete Auctions for Continuous Values
Our model aims to capture iterative auctions that operate
on real-valued valuations. There is a slight technical
difficulty here in bridging the gap between the discrete nature
of an iterative auction, and the continuous nature of the
valuations. This is exactly the same problem as in modeling
a simple English auction. There are three standard formal
ways to model it:
1. Model the auction as a continuous process and study
its trajectory in time. For example, the so-called Japanese
auction is basically a continuous model of an English auction.13
2. Model the auction as discrete and the valuations as
continuously valued. In this case we introduce a parameter ε and usually require the auction to produce results that are ε-close to optimal.
3. Model the valuations as discrete. In this case we will
assume that all valuations are integer multiples of some
small fixed quantity δ, e.g., 1 penny. All
communication in this case is then naturally finite.
In this paper we use the latter formulation and assume
that all values are multiples of some δ. Thus, in some parts
of the paper we assume without loss of generality that δ = 1,
hence all valuations are integral. Almost all (if not all) of
our results can be translated to the other two models with
little effort.
3.2 Valuations
A single auctioneer is selling m indivisible
non-homogeneous items in a single auction, and let M be the set of these
13
Another similar model is the moving knives model in the
cake-cutting literature.
34
items and N be the set of bidders. Each one of the n
bidders in the auction has a valuation function vi : 2^M → {0, δ, 2δ, ..., L}, where for every bundle of items S ⊆ M,
vi(S) denotes the value of bidder i for the bundle S and is
a multiple of δ in the range 0...L. We will sometimes
denote the number of bits needed to represent such values in
the range 0...L by t = log L. We assume free disposal, i.e.,
S ⊆ T implies vi(S) ≤ vi(T) and that vi(∅) = 0 for all
bidders.
We will mention the following classes of valuations:
• A valuation is called sub-modular if for all sets of items
A and B we have that v(A ∪ B) + v(A ∩ B) ≤ v(A) +
v(B).
• A valuation is called super-additive if for all disjoint
sets of items A and B we have that v(A∪B) ≥ v(A)+
v(B).
• A valuation is called a k-bundle XOR if it can be
represented as a XOR combination of at most k atomic
bids [30], i.e., if there are at most k bundles Si and
prices pi such that for all S, v(S) = max_{i: S⊇Si} pi. Such valuations will be denoted by v = (S1 : p1) ⊕ (S2 : p2) ⊕ ... ⊕ (Sk : pk).14
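Evaluating a k-bundle XOR valuation is a one-line maximum; the sketch below (ours) checks itself against the example given in footnote 14.

```python
def xor_valuation(terms):
    """terms: list of (bundle, price) pairs (S_i, p_i). Returns v with
    v(S) = max{p_i : S contains S_i}, and 0 if S contains no atomic
    bundle (so v(empty set) = 0)."""
    def v(S):
        S = set(S)
        return max((p for (T, p) in terms if set(T) <= S), default=0)
    return v

# The example of footnote 14: v = (abcd : 5) XOR (ab : 3) XOR (c : 4).
v = xor_valuation([("abcd", 5), ("ab", 3), ("c", 4)])
assert v("abcd") == 5 and v("abd") == 3 and v("abc") == 4
```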
3.3 Iterative Auctions
The auctioneer sets up a protocol (equivalently an
algorithm), where at each stage of the protocol some
information q - termed the query - is sent to some bidder i,
and then bidder i replies with some reply that depends on
the query as well as on his own valuation. In this paper,
we assume that we have complete control over the bidders' behavior, and thus the protocol also defines a reply function ri(q, vi) that specifies bidder i's reply to query q. The
protocol may be adaptive: the query value as well as the queried
bidder may depend on the replies received for past queries.
At the end of the protocol, an allocation S1...Sn must be
declared, where Si ∩ Sj = ∅ for i ≠ j.
We say that the auction finds an optimal allocation if
it finds the allocation that maximizes the social welfare Σ_i vi(Si). We say that it finds a c-approximation if Σ_i vi(Si) ≥ Σ_i vi(Ti)/c, where T1...Tn is an optimal allocation. The
running time of the auction on a given instance of the
bidders' valuations is the total number of queries made on this
instance. The running time of a protocol is the worst case
cost over all instances. Note that we impose no
computational limitations on the protocol or on the players.15
This
of course only strengthens our hardness results. Yet, our
positive results will not use this power and will be efficient
also in the usual computational sense.
Our goal will be to design computationally-efficient
protocols. We will deem a protocol computationally-efficient if
its cost is polynomial in the relevant parameters: the
number of bidders n, the number of items m, and t = log L,
where L is the largest possible value of a bundle. However,
when we discuss ascending-price auctions and their variants,
a computationally-efficient protocol will be required to be
14
For example, v = (abcd : 5) ⊕ (ab : 3) ⊕ (c : 4) denotes the
XOR valuation with the terms abcd, ab, c and prices 5, 3, 4
respectively. For this valuation, v(abcd) = 5, v(abd) = 3,
v(abc) = 4.
15
The running time really measures communication costs and
not computational running time.
pseudo-polynomial, i.e., it should ask a number of queries
which is polynomial in m, n and L/δ. This is because ascending auctions usually cannot achieve such running times (consider even the English auction on a single item).16
Note
that all of our results give concrete bounds, where the
dependence on the parameters is given explicitly; we use the
standard big-Oh notation just as a shorthand.
We say that an auction elicits some class V of valuations,
if it determines the optimal allocation for any profile of
valuations drawn from V ; We say that an auction fully elicits
some class of valuations V , if it can fully learn any single
valuation v ∈ V (i.e., learn v(S) for every S).
3.4 Demand Queries and Ascending Auctions
Most of the paper will be concerned with a common
special case of iterative auctions that we term demand
auctions. In such auctions, the queries that are sent to bidders
are demand queries: the query specifies a price p(S) ∈ ℝ+
for each bundle S. The reply of bidder i is simply the set
most desired - demanded - under these prices. Formally,
a set S that maximizes vi(S) − p(S). It may happen that
more than one set S maximizes this value, in which case
ties are broken according to some fixed tie-breaking rule,
e.g., the lexicographically first such set is returned. All of
our results hold for any fixed tie-breaking rule.
Ascending auctions are iterative auctions with
non-decreasing prices:
Definition 1. In an ascending auction, the prices in the
queries to the same bidder can only increase in time.
Formally, let p be a query made for bidder i, and q be a query
made for bidder i at a later stage in the protocol. Then for
all sets S, q(S) ≥ p(S). A similar variant, which we also
study and that is also common in real life, is descending
auctions, in which prices can only decrease in time.
Note that the term ascending auction refers to an
auction with a single ascending trajectory of prices. It may
be useful to define multi-trajectory ascending auctions, in
which the prices may be reset to zero a number of times (see,
e.g., [4]).
We consider two main restrictions on the types of allowed
demand queries:
Definition 2. Item Prices: The prices in each query
are given by prices pj for each item j. The price of a set S
is additive: p(S) = Σ_{j∈S} pj.
Definition 3. Anonymous prices: The prices seen by
the bidders at any stage in the auction are the same, i.e.
whenever a query is made to some bidder, the same query is
also made to all other bidders (with the prices unchanged).
In auctions with non-anonymous (discriminatory) prices,
each bidder i has personalized prices denoted by p^i(S).17
In this paper, all auctions are anonymous unless otherwise
specified.
Note that even though in our model valuations are integral
(or multiples of some δ), we allow the demand query to
16
Most of the auctions we present may be adapted to run in
time polynomial in log L, using a binary-search-like
procedure, losing their ascending nature.
17
Note that a non-anonymous auction can clearly be
simulated by n parallel anonymous auctions.
use arbitrary real numbers in ℝ+. That is, we assume that the increment ε we use in the ascending auctions may be significantly smaller than δ. All our hardness results hold for any ε, even for continuous price increments. A practical
issue here is how the query will be specified: in the general
case, an exponential number of prices needs to be sent in a
single query. Formally, this is not a problem as the model
does not limit the length of queries in any way - the protocol
must simply define what the prices are in terms of the replies
received for previous queries. We look into this issue further
in Section 4.3.
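As a deliberately naive illustration of Definitions 1-3 (our sketch, not one of the auctions classified in Figure 1), an anonymous ascending item-price loop raises, in increments of ε, the prices of items demanded by more than one bidder:

```python
def ascending_item_price_auction(bidders, items, eps):
    """bidders: demand oracles mapping an (anonymous) item-price vector
    to a demanded bundle. Prices only increase, by eps at a time, while
    some item is over-demanded."""
    prices = {j: 0.0 for j in items}
    while True:
        demands = [demand(dict(prices)) for demand in bidders]
        overdemanded = {j for j in items
                        if sum(j in S for S in demands) > 1}
        if not overdemanded:
            return prices, demands
        for j in overdemanded:   # monotone update: prices never decrease
            prices[j] += eps
```

For substitutes valuations, auctions of this flavor are known to reach an optimal allocation [22]; for general valuations a single ascending trajectory need not, which is exactly what Section 5 makes precise.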
4. THE POWER OF DEMAND QUERIES
In this section, we study the power of iterative auctions
that use demand queries (not necessarily ascending). We
start by comparing demand queries to other types of queries. Then, we discuss how well one can approximate the optimal welfare using a polynomial number of demand queries. We also initiate the study of the representation of bundle-price demand queries, and finally, we show how demand queries help solve the linear-programming relaxation of
combinatorial auctions in polynomial time.
4.1 The Power of Different Types of Queries
In this section we compare the power of the various types
of queries defined in Section 2. We will present
computationally-efficient simulations of these query types using
item-price demand queries. In Section 5.1 we show that
these simulations can also be done using item-price
ascending auctions.
Lemma 4.1. A value query can be simulated by m marginal-value queries. A marginal-value query can be simulated by
two value queries.
Lemma 4.2. A value query can be simulated by mt
demand queries (where t = log L is the number of bits needed
to represent a single bundle value).18
As a direct corollary we get that demand auctions can
always fully elicit the bidders' valuations by simulating all possible 2^m − 1 queries and thus elicit enough information
for determining the optimal allocation. Note, however, that
this elicitation may be computationally inefficient.
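One concrete way such a simulation can go (our illustration, combining the marginal-value route of Lemma 4.1 with binary search; it assumes integer values in {0, ..., L}, free disposal, and a tie-breaking rule under which an indifferent bidder takes the item):

```python
def marginal_value(demand, items, A, j, L):
    """Find v(j|A) using item-price demand queries only: items in A are
    free, items outside A+{j} are priced prohibitively, and we binary
    search on p_j. The bidder takes j iff v(j|A) >= p_j."""
    lo, hi = 0, L  # invariant: v(j|A) in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        prices = {k: (0 if k in A else L + 1) for k in items}
        prices[j] = mid
        if j in demand(prices):
            lo = mid       # bidder still wants j, so v(j|A) >= mid
        else:
            hi = mid - 1   # bidder drops j, so v(j|A) < mid
    return lo

def value_query(demand, items, S, L):
    """Simulate a value query: v(S) = sum of marginal values along a chain,
    using at most |S| * log(L) demand queries."""
    total, A = 0, set()
    for j in sorted(S):
        total += marginal_value(demand, items, A, j, L)
        A.add(j)
    return total
```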
The next lemma shows that demand queries can be
exponentially more powerful than value queries.
Lemma 4.3. An exponential number of value queries may
be required for simulating a single demand query.
Indirect utility queries are, however, equivalent in power
to demand queries:
Lemma 4.4. An indirect-utility query can be simulated by
mt + 1 demand queries. A demand query can be simulated
by m + 1 indirect-utility queries.
Demand queries can also simulate relative-demand queries:19
18
Note that t bundle-price demand queries can easily
simulate a value query by setting the prices of all the bundles
except S (the bundle with the unknown value) to be L, and
performing a binary search on the price of S.
19 Note: although in our model values are integral (or multiples of δ), we allow the query prices to be arbitrary real numbers, thus we may have bundles with arbitrarily close relative demands. In this sense the simulation above is only up to any given ε (and the number of queries is O(log L + log(1/ε))). When the relative-demand query prices are given as rational numbers, exact simulation is implied when log(1/ε) is linear in the input length.

|    | V  | MV   | D   | IU   | RD   |
| V  | 1  | 2    | exp | exp  | exp  |
| MV | m  | 1    | exp | exp  | exp  |
| D  | mt | poly | 1   | mt+1 | poly |
| IU | 1  | 2    | m+1 | 1    | poly |
| RD | -  | -    | -   | -    | 1    |

Figure 4: Each entry in the table specifies how many queries of this row are needed to simulate a query from the relevant column.
Abbreviations: V (value query), MV (marginal-value query), D (demand query), IU (indirect-utility query), RD (relative-demand query).
Lemma 4.5. Relative-demand queries can be simulated by
a polynomial number of demand queries.
According to our definition of relative-demand queries,
they clearly cannot simulate even value queries. Figure 4
summarizes the relations between these query types.
4.2 Approximating the Social Welfare with
Value and Demand Queries
We know from [32] that iterative combinatorial auctions
that only use a polynomial number of queries cannot find an optimal allocation among general valuations and in fact cannot even approximate it to within a factor better than min{n, m^{1/2−ε}}. In this section we ask how well this approximation can be done using demand queries with item prices, or using the weaker value queries. We show that, using
demand queries, the lower bound can be matched, while value
queries can only do much worse.
Figure 5 describes a polynomial-time algorithm that achieves
a min(n, O(√m)) approximation ratio. This algorithm
greedily picks the bundles that maximize the bidders' per-item
value (using relative-demand queries, see Section 4.1). As
a final step, it allocates all the items to a single bidder if
it improves the social welfare (this can be checked using
value queries). Since both value queries and relative-demand
queries can be simulated by a polynomial number of
demand queries with item prices (Lemmas 4.2 and 4.5), this
algorithm can be implemented by a polynomial number of
demand queries with item prices.20
Theorem 4.6. The auction described in Figure 5 can be
implemented by a polynomial number of demand queries and
achieves a min{n, 4√m}-approximation for the social
welfare.
We now ask how well the optimal welfare can be approximated by a polynomial number of value queries. First we
note that value queries are not completely powerless: In
[19] it is shown that if the m items are split into k fixed
bundles of size m/k each, and these fixed bundles are
auctioned as though each was indivisible, then the social welfare
20
In the full paper [8], we observe that this algorithm can be
implemented by two descending item-price auctions (where
we allow removing items along the auction).
generated by such an auction is at least an m/√k-approximation
of that possible in the original auction. Notice that such
an auction can be implemented by 2^k − 1 value queries to
each bidder - querying the value of each bundle of the fixed
bundles. Thus, if we choose k = log m bundles we get an m/√(log m)-approximation while still using a polynomial number
of queries.
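The bundling idea is easy to phrase as code (our reading of the construction; brute-forcing the n^k assignments of super-items stands in for auctioning them as indivisible):

```python
from itertools import product

def bundling_auction(valuations, items, k):
    """Split the m items into k (roughly) equal fixed bundles and find
    the best assignment of these super-items to bidders. Each bidder's
    values over unions of bundles need at most 2^k - 1 value queries."""
    items = sorted(items)
    chunks = [frozenset(items[i::k]) for i in range(k)]
    n = len(valuations)
    best_welfare, best_alloc = float("-inf"), None
    for owners in product(range(n), repeat=k):
        alloc = [frozenset() for _ in range(n)]
        for chunk, i in zip(chunks, owners):
            alloc[i] = alloc[i] | chunk
        welfare = sum(valuations[i](alloc[i]) for i in range(n))
        if welfare > best_welfare:
            best_welfare, best_alloc = welfare, alloc
    return best_alloc
```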
The following lemma shows that not much more is possible
using value queries:
Lemma 4.7. Any iterative auction that uses only value
queries and distinguishes between k-tuples of 0/1 valuations
where the optimal allocation has value 1, and those where the
optimal allocation has value k requires at least 2^{m/k} queries.
Proof. Consider the following family of valuations: for
every S, such that |S| > m/2, v(S) = 1, and there exists a
single set T, such that for |S| ≤ m/2, v(S) = 1 iff T ⊆ S
and v(S) = 0 otherwise. Now look at the behavior of the
protocol when all valuations vi have T = {1...m}. Clearly
in this case the value of the best allocation is 1 since no set of size m/2 or lower has non-zero value for any player. Fix the
sequence of queries and answers received on this k-tuple of
valuations.
Now consider the k-tuple of valuations chosen at random
as follows: a partition of the m items into k sets T1...Tk, each of size m/k, is chosen uniformly at random among
all such partitions. Now consider the k-tuple of valuations
from our family that correspond to this partition - clearly
Ti can be allocated to i, for each i, getting a total value of k.
Now look at the protocol when running on these valuations
and compare its behavior to the original case. Note that the
answer to a query S to player i differs between the case of Ti and the original case of T = {1...m} only if |S| ≤ m/2 and Ti ⊆ S. Since Ti is distributed uniformly among all sets of size exactly m/k, we have that for any fixed query S to player i, where |S| ≤ m/2,

Pr[Ti ⊆ S] ≤ (|S|/m)^{|Ti|} ≤ 2^{−m/k}.
Using the union-bound, if the original sequence of queries
was of length less than 2^{m/k}, then with positive probability
none of the queries in the sequence would receive a different
answer than for the original input tuple. This is forbidden
since the protocol must distinguish between this case and
the original case - which cannot happen if all queries receive
the same answer. Hence there must have been at least 2^{m/k} queries for the original tuple of valuations.
We conclude that a polynomial time protocol that uses
only value queries cannot obtain a better than O(m/log m) approximation of the welfare:
Theorem 4.8. An iterative auction that uses a
polynomial number of value queries cannot achieve better than an O(m/log m)-approximation for the social welfare.
Proof. Immediate from Lemma 4.7: achieving any approximation ratio k that is asymptotically better (i.e., smaller) than m/log m requires a super-polynomial number of value queries.
An Approximation Algorithm:
Initialization: Let T ← M be the current items for sale.
Let K ← N be the currently participating bidders.
Let S*_1 ← ∅, ..., S*_n ← ∅ be the provisional allocation.
Repeat until T = ∅ or K = ∅:
Ask each bidder i in K for the bundle Si that maximizes her per-item value, i.e., Si ∈ argmax_{S⊆T} vi(S)/|S|.
Let i be the bidder with the maximal per-item value, i.e., i ∈ argmax_{i∈K} vi(Si)/|Si|.
Set: S*_i ← Si, K ← K \ {i}, T ← T \ Si.
Finally: Ask the bidders for their values vi(M) for the grand bundle. If allocating all the items to some bidder i improves the social welfare achieved so far (i.e., ∃i ∈ N such that vi(M) > Σ_{i∈N} vi(S*_i)), then allocate all items to this bidder i.
Figure 5: This algorithm achieves a min{n, 4√m}-approximation for the social welfare, which is asymptotically the best worst-case approximation possible with polynomial communication. This algorithm can be implemented with a polynomial number of demand queries.
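A direct transcription of Figure 5 into executable form (our sketch; the relative-demand and value queries are modeled as brute-force calls into the valuation functions, so this version is exponential as written):

```python
from itertools import chain, combinations

def nonempty_subsets(T):
    T = list(T)
    return chain.from_iterable(combinations(T, r) for r in range(1, len(T) + 1))

def greedy_approx(valuations, items):
    """The algorithm of Figure 5: repeatedly hand the bundle with the
    highest per-item value to the bidder offering it, then check whether
    giving the grand bundle to a single bidder does better."""
    T, K = set(items), set(range(len(valuations)))
    alloc = {i: frozenset() for i in K}
    while T and K:
        best = None  # (per-item value, bidder, bundle)
        for i in K:
            S = max((frozenset(B) for B in nonempty_subsets(T)),
                    key=lambda B: valuations[i](B) / len(B))
            ratio = valuations[i](S) / len(S)
            if best is None or ratio > best[0]:
                best = (ratio, i, S)
        _, i, S = best
        alloc[i] = S      # S*_i <- S_i
        K.discard(i)      # bidder i stops participating
        T -= S            # the items of S_i are gone
    welfare = sum(valuations[i](alloc[i]) for i in alloc)
    grand = frozenset(items)
    i_star = max(alloc, key=lambda i: valuations[i](grand))
    if valuations[i_star](grand) > welfare:   # final step of Figure 5
        alloc = {i: (grand if i == i_star else frozenset()) for i in alloc}
    return alloc
```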
4.3 The Representation of Bundle Prices
In this section we explicitly fix the language in which
bundle prices are presented to the bidders in bundle-price
auctions. This language requires the algorithm to explicitly list
the price of every bundle with a non-trivial price.
Trivial in this context is a price that is equal to that of one of
its proper subsets (which was listed explicitly). This
representation is equivalent to the XOR-language for expressing
valuations. Formally, each query q is given by an
expression: q = (S1 : p1) ⊕ (S2 : p2) ⊕ ... ⊕ (Sl : pl). In this
representation, the price demanded for every set S is simply
p(S) = max{k=1...l|Sk⊆S}pk.
Definition 4. The length of the query q = (S1 : p1) ⊕
(S2 : p2) ⊕ ... ⊕ (Sl : pl) is l. The cost of an algorithm is the
sum of the lengths of the queries asked during the operation
of the algorithm on the worst case input.
Note that under this definition, bundle-price auctions are
not necessarily stronger than item-price auctions. An item-price query that prices each item at 1 is translated to an
exponentially long bundle-price query that needs to specify
the price |S| for each bundle S. But perhaps bundle-price
auctions can still find optimal allocations whenever item-price auctions can, without directly simulating such queries?
We show that this is not the case: indeed, when the
representation length is taken into account, bundle-price auctions are sometimes seriously inferior to item-price auctions.
Consider the following family of valuations: Each item is
valued at 3, except that for some single set S, its value is a
bit more: 3|S| + b, where b ∈ {0, 1, 2}. Note that an item-price auction can easily find the optimal allocation between any two such valuations: set the price of each item to 3 + ε;
if the demand sets of the two players are both empty, then
b = 0 for both valuations, and an arbitrary allocation is fine.
If one of them is empty and the other non-empty, allocate
the non-empty demand set to its bidder, and the rest to the
other. If both demand sets are non-empty then, unless they
form an exact partition, we need to see which b is larger,
which we can do by increasing the price of a single item in
each demand set.
37
We will show that any bundle-price auction that uses only
the XOR-language to describe bundle prices requires an
exponential cost (which includes the sum of all description
lengths of prices) to find an optimal allocation between any
two such valuations.
Lemma 4.9. Every bundle-price auction that uses
XOR-expressions to denote bundle prices requires 2^{Ω(√m)} cost in
order to find the optimal allocation among two valuations
from the above family.
The complication in the proof stems from the fact that
using XOR-expressions, the length of the price description
depends on the number of bundles whose price is strictly
larger than each of their subsets - this may be significantly
smaller than the number of bundles that have a non-zero
price. (The proof becomes easy if we require the protocol
to explicitly name every bundle with non-zero price.) We
do not know of any elementary proof for this lemma
(although we believe that one can be found). Instead we
reduce the problem to a well known lower bound in boolean
circuit complexity [18] stating that boolean circuits of depth
3 that compute the majority function on m variables require
2^{Ω(√m)} size.
4.4 Demand Queries and Linear Programming
Consider the following linear-programming relaxation for
the generalized winner-determination problem in
combinatorial auctions (the primal program):
Maximize Σ_{i∈N, S⊆M} wi · xi,S · vi(S)
s.t.
  Σ_{i∈N, S: j∈S} xi,S ≤ qj    ∀j ∈ M
  Σ_{S⊆M} xi,S ≤ di            ∀i ∈ N
  xi,S ≥ 0                     ∀i ∈ N, S ⊆ M
Note that the primal program has an exponential number
of variables. Yet, we will be able to solve it in polynomial
time using demand queries to the bidders. The solution will
have a polynomial size support (non-zero values for xi,S),
and thus we will be able to describe it in polynomial time.
Here is its dual:
Minimize Σ_{j∈M} qj pj + Σ_{i∈N} di ui
s.t.
  ui + Σ_{j∈S} pj ≥ wi vi(S)    ∀i ∈ N, S ⊆ M
  pj ≥ 0 ∀j ∈ M,  ui ≥ 0 ∀i ∈ N
Notice that the dual problem has exactly n + m variables
but an exponential number of constraints. Thus, the dual
can be solved using the Ellipsoid method in polynomial time
- if a separation oracle can be implemented in polynomial
time. Recall that a separation oracle, when given a possible
solution, either confirms that it is a feasible solution, or
responds with a constraint that is violated by the possible
solution.
We construct a separation oracle for solving the dual
program, using a single demand query to each of the bidders.
Consider a possible solution (u, p) for the dual program. We
can re-write the constraints of the dual program as:
ui/wi ≥ vi(S) −
X
j∈S
pj/wi
Now a demand query to bidder i with prices pj/wi reveals
exactly the set S that maximizes the RHS of the previous
inequality. Thus, in order to check whether (u, p) is
feasible it suffices to (1) query each bidder i for his demand
Di under the prices pj/wi; (2) check only the n constraints
ui + Σ_{j∈Di} pj ≥ wi vi(Di) (where vi(Di) can be simulated
using a polynomial sequence of demand queries as shown in
Lemma 4.2). If none of these is violated then we are assured
that (u, p) is feasible; otherwise we get a violated constraint.
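Written out, the separation oracle is only a few lines. The sketch below assumes each bidder exposes a demand oracle `demand(prices)`; for brevity it reads vi(Di) directly from the valuation, where the text instead simulates it with demand queries via Lemma 4.2.

```python
def separation_oracle(u, p, w, valuations, demand_oracles, items):
    """Check feasibility of a dual point (u, p). For each bidder i, one
    demand query at prices p_j / w_i finds the most violated constraint
    u_i + sum_{j in S} p_j >= w_i * v_i(S), if any exists."""
    for i, demand in enumerate(demand_oracles):
        scaled = {j: p[j] / w[i] for j in items}
        D = demand(scaled)                 # argmax_S v_i(S) - p(S)/w_i
        lhs = u[i] + sum(p[j] for j in D)
        rhs = w[i] * valuations[i](D)
        if lhs < rhs:
            return ("violated", i, D)      # a separating constraint
    return ("feasible",)
```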
What is left to be shown is how the primal program can
be solved. (Recall that the primal program has an
exponential number of variables.) Since the Ellipsoid algorithm
runs in polynomial time, it encounters only a polynomial
number of constraints during its operation. Clearly, if all
other constraints were removed from the dual program, it
would still have the same solution (adding constraints can
only decrease the space of feasible solutions). Now take the
reduced dual where only the constraints encountered
exist, and look at its dual. It will have the same solution as the
original dual and hence of the original primal. However, look
at the form of this dual of the reduced dual. It is just a
version of the primal program with a polynomial number of
variables - those corresponding to constraints that remained
in the reduced dual. Thus, it can be solved in polynomial
time, and this solution clearly solves the original primal
program, setting all other variables to zero.
5. ITEM-PRICE ASCENDING AUCTIONS
In this section we characterize the power of ascending
item-price auctions. We first show that this power is not
trivial: such auctions can in general elicit an exponential
amount of information. On the other hand, we show that
the optimal allocation cannot always be determined by a
single ascending auction, and in some cases, nor by an
exponential number of ascending-price trajectories. Finally,
we separate the power of different models of ascending
auctions.
5.1 The Power of Item-Price Ascending
Auctions
We first show that if small enough increments are allowed,
a single ascending trajectory of item-prices can elicit
preferences that cannot be elicited with polynomial
communication. As mentioned, all our hardness results hold for any
increment, even infinitesimal.
Theorem 5.1. Some classes of valuations can be elicited
by item-price ascending auctions, but cannot be elicited by a
polynomial number of queries of any kind.
Proof. (sketch) Consider two bidders with v(S) = 1 if |S| > m/2, v(S) = 0 if |S| < m/2, and where every S such that |S| = m/2 has an unknown value from {0, 1}. Due to [32], determining the optimal allocation here requires exponential communication in the worst case. Nevertheless, we show (see [9]) that an item-price ascending auction can do it, as long as it can use exponentially small increments.
We now describe another positive result for the power of
item-price ascending auctions. In section 4.1, we showed
            v(ab)   v(a)          v(b)
Bidder 1    2       α ∈ (0, 1)    β ∈ (0, 1)
Bidder 2    2       2             2
Figure 6: No item-price ascending auction can determine the optimal allocation for this class of valuations.
that a value query can be simulated with a (truly)
polynomial number of item-price demand queries. Here, we show
that every value query can be simulated by a (pseudo)
polynomial number of ascending item-price demand queries. (In
the next subsection, we show that we cannot always
simulate even a pair of value queries using a single item-price
ascending auction.) In the full paper (part II,[9]), we show
that we can simulate other types of queries using item-price
ascending auctions.
Proposition 5.2. A value query can be simulated by an
item-price ascending auction. This simulation requires a
polynomial number of queries.
Actually, the proof for Proposition 5.2 proves a stronger
useful result regarding the information elicited by iterative
auctions. It says that in any iterative auction in which the
changes of prices are small enough in each stage
(pseudo-continuous auctions), the value of all bundles demanded
during the auction can be computed. The basic idea is that
when the bidder moves from demanding some bundle Ti to
demanding another bundle Ti+1, there is a point at which she
is indifferent between these two bundles. Thus, knowing the
value of some demanded bundle (e.g., the empty set) enables
computing the values of all other demanded bundles.
We say that an auction is pseudo-continuous, if it only
uses demand queries, and in each step, the price of at most
one item is changed by ε (for some ε ∈ (0, δ]) with respect
to the previous query.
Proposition 5.3. Consider any pseudo-continuous
auction (not necessarily ascending), in which bidder i demands
the empty set at least once along the auction. Then, the
value of every bundle demanded by bidder i throughout the
auction can be calculated at the end of the auction.
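The following toy simulation (ours; the dictionary v plays the role of a bidder answering demand queries, and all function names are assumptions) illustrates the proposition: it runs a pseudo-continuous ascending trajectory, records the demanded bundle after every ε-increment, and recovers the value of every demanded bundle from the indifference points, anchored at v(∅) = 0.

    from itertools import chain, combinations

    EPS = 0.01
    ITEMS = [0, 1]
    BUNDLES = [frozenset(s) for s in chain.from_iterable(
        combinations(ITEMS, r) for r in range(len(ITEMS) + 1))]
    # Hidden valuation; the "auctioneer" below sees only demand answers.
    v = {frozenset(): 0.0, frozenset({0}): 1.0,
         frozenset({1}): 1.5, frozenset({0, 1}): 2.2}

    def demand(prices):
        return max(BUNDLES, key=lambda S: v[S] - sum(prices[j] for j in S))

    def total(S, prices):
        return sum(prices[j] for j in S)

    prices = {0: 0.0, 1: 0.0}
    trace = [(demand(prices), dict(prices))]
    for item in ITEMS:                  # one pseudo-continuous ascending trajectory
        for _ in range(400):
            prices[item] += EPS
            trace.append((demand(prices), dict(prices)))

    # At a switch from T to U the bidder is (nearly) indifferent, so
    # v(U) - v(T) = p(U) - p(T) at the switch-point prices, up to EPS.
    rel = {trace[0][0]: 0.0}
    for (T, _), (U, pU) in zip(trace, trace[1:]):
        if U != T and U not in rel:
            rel[U] = rel[T] + total(U, pU) - total(T, pU)
    shift = -rel[frozenset()]           # anchor: the empty set has value 0
    print({tuple(sorted(S)): round(r + shift, 2) for S, r in rel.items()})
    # -> {(): 0.0, (1,): 1.5, (0, 1): 2.2} -- every bundle demanded en route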
5.2 Limitations of Item-Price Ascending
Auctions
Although we observed that demand queries can solve any
combinatorial auction problem, when the queries are
restricted to be ascending, some classes of valuations can
be neither elicited nor fully elicited. An example of such a class of
valuations is given in Figure 6.
Theorem 5.4. There are classes of valuations that can be
neither elicited nor fully elicited by any item-price ascending
auction.
Proof. Let bidder 1 have the valuation described in the
first row of Figure 6, where α and β are unknown values in
(0, 1). First, we prove that this class cannot be fully elicited
by a single ascending auction. Specifically, an ascending
auction cannot reveal the values of both α and β.
As long as pa and pb are both below 1, the bidder will
always demand the whole bundle ab: her utility from ab is
strictly greater than the utility from either a or b separately.
For example, we show that u1(ab) > u1(a):
u1(ab) = 2 − (pa + pb) = 1 − pa + 1 − pb > v1(a) − pa + 1 − pb > u1(a),
where the first inequality uses v1(a) = α < 1 and the second uses pb < 1.
Thus, in order to gain any information about α or β, the
price of one of the items should become at least 1, w.l.o.g.
pa ≥ 1. But then, the bundle a will not be demanded by
bidder 1 throughout the auction, thus no information at all
will be gained about α.
Now, assume that bidder 2 is known to have the valuation
described in the second row of Figure 6. The optimal
allocation depends on whether α is greater than β (in bidder 1's
valuation), and we proved that an ascending auction cannot
determine this.
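A quick numeric sanity check of the inequality used in this proof (our own illustration): for any α ∈ (0, 1) and any prices pa, pb < 1, bidder 1 strictly prefers ab to a alone (and, symmetrically with β, to b alone), so nothing about α or β can leak while both prices stay below 1.

    import itertools

    for alpha, pa, pb in itertools.product([0.01, 0.3, 0.5, 0.99], repeat=3):
        u_ab = 2 - (pa + pb)     # bidder 1's utility from the bundle ab
        u_a = alpha - pa         # utility from item a alone (v1(a) = alpha)
        assert u_ab > u_a, (alpha, pa, pb)
    print("ab is strictly preferred whenever pa, pb < 1 and alpha < 1")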
The proof of the theorem above shows that for an
unknown value to be revealed, the price of one item should be
greater than 1, and the other price should be smaller than
1. Therefore, in a price-monotonic trajectory of prices, only
one of these values can be revealed. An immediate
conclusion is that this impossibility result also holds for item-price
descending auctions. Since no such trajectory exists, the
same conclusion even holds for non-deterministic item-price
auctions (in which exogenous data tells us how to
increase the prices). Also note that since the hardness stems
from the impossibility to fully-elicit a valuation of a single
bidder, this result also holds for non-anonymous ascending
item-price auctions.
5.3 Limitations of Multi-Trajectory
Ascending Auctions
According to Theorem 5.4, no ascending item-price
auction can always elicit the preferences (we prove a similar
result for bundle prices in section 6). But can two
ascending trajectories do the job? Or a polynomial number of
ascending trajectories? We give negative answers for such
suggestions.
We define a k-trajectory ascending auction as a
demand-query iterative auction in which the demand queries can be
partitioned into k sets of queries, where the prices published
in each set only increase in time. Note that we use a general
definition; it allows the trajectories to run in parallel or
sequentially, and to use information elicited in some
trajectories for determining the future queries in other trajectories.
The power of multiple-trajectory auctions can be
demonstrated by the negative result of Gul and Stacchetti [17] who
showed that even for an auction among substitutes
valuations, an anonymous ascending item-price auction cannot
compute VCG prices for all players.^21 Ausubel [4] overcame
Ausubel [4] overcame
this impossibility result and designed auctions that do
compute VCG prices by organizing the auction as a sequence of
n + 1 ascending auctions. Here, we prove that one cannot
elicit XOR valuations with k terms by less than k − 1
ascending trajectories. On the other hand, we show that an
XOR formula can be fully elicited by k−1 non-deterministic
ascending auctions (or by k−1 deterministic ascending
auctions if the auctioneer knows the atomic bundles).^22
^21 A recent unpublished paper by Mishra and Parkes
extends this result, and shows that non-anonymous prices with
bundle-prices are necessary in order that an ascending
auction will end up with a universal-competitive-equilibrium
(that leads to VCG payments).
^22 This result actually separates the power of deterministic
and non-deterministic iterative auctions: our proof shows
that a non-deterministic iterative auction can elicit the
k-term XOR valuations with a polynomial number of demand
queries, and [7] show that this elicitation must take an
exponential number of demand queries.
Proposition 5.5. XOR valuations with k terms cannot
be elicited (or fully elicited) by any (k−2)-trajectory
item-price ascending auction, even when the atomic bundles are
known to the elicitor. However, these valuations can be
elicited (and fully elicited) by (k-1)-trajectory
non-deterministic non-anonymous item-price ascending auctions.
Moreover, an exponential number of trajectories is
required for eliciting some classes of valuations:
Proposition 5.6. Elicitation and full-elicitation of some
classes of valuations cannot be done by any k-trajectory
item-price ascending auction, where k = o(2^m).
Proof. (sketch) Consider the following class of
valuations: for |S| < m/2, v(S) = 0, and for |S| > m/2, v(S) = 2;
every bundle S of size m/2 has some unknown value in (0, 1).
We show ([9]) that a single item-price ascending auction
can reveal the value of at most one bundle of size m/2, and
therefore an exponential number of ascending trajectories is
needed in order to elicit such valuations.
We observe that the algorithm we presented in Section
4.2 can be implemented by a polynomial number of
ascending auctions (each item-price demand query can be
considered as a separate ascending auction), and therefore a
min(n, 4√m)-approximation can be achieved by a
polynomial number of ascending auctions. We do not currently
have a better upper bound or any lower bound.
5.4 Separating the Various Models of
Ascending Auctions
Various models for ascending auctions have been suggested
in the literature. In this section, we compare the power of
the different models. As mentioned, all auctions are
considered anonymous and deterministic, unless specified
otherwise.
Ascending vs. Descending Auctions: We begin the
discussion of the relation between ascending auctions and
descending auctions with an example. The algorithm by
Lehmann, Lehmann and Nisan [25] can be implemented by
a simple item-price descending auction (see the full paper for
details [9]). This algorithm guarantees at least half of the
optimal efficiency for submodular valuations. However, we are
not familiar with any ascending auction that guarantees a
similar fraction of the efficiency. This raises a more general
question: can ascending auctions solve any
combinatorial-auction problem that is solvable using a descending auction
(and vice versa)? We give negative answers to these
questions. The idea behind the proofs is that the information
that the auctioneer can get for free at the beginning of
each type of auction is different.^23
^23 In ascending auctions, the auctioneer can reveal the most
valuable bundle (besides M) before she starts raising the
prices, and she can thus use this information to adaptively
choose the subsequent queries. In descending auctions, one
can easily find the bundle with the highest average per-item
price, keeping all other bundles with non-positive utilities,
and use this information in the adaptive price change.
Proposition 5.7. There are classes that cannot be
elicited (fully elicited) by ascending item-price auctions, but can
be elicited (resp. fully elicited) with a descending item-price
auction.
Proposition 5.8. There are classes that cannot be
elicited (fully elicited) by item-price descending auctions, but can
be elicited (resp. fully elicited) by item-price ascending
auctions.
Deterministic vs. Non-Deterministic Auctions:
Non-deterministic ascending auctions can be viewed as auctions
where some benevolent teacher that has complete
information guides the auctioneer on how she should raise the prices.
That is, preference elicitation can be done by a
non-deterministic ascending auction, if there is some ascending trajectory
that elicits enough information for determining the optimal
allocation (and verifying that it is indeed optimal). We show
that non-deterministic ascending auctions are more powerful
than deterministic ascending auctions:
Proposition 5.9. Some classes can be elicited (fully
elicited) by an item-price non-deterministic ascending
auction, but cannot be elicited (resp. fully elicited) by item-price
deterministic ascending auctions.
Anonymous vs. Non-Anonymous Auctions: As will
be shown in Section 6, the power of anonymous and
non-anonymous bundle-price ascending auctions differs
significantly. Here, we show that a difference also exists for
item-price ascending auctions.
Proposition 5.10. Some classes cannot be elicited by
anonymous item-price ascending auctions, but can be elicited
by a non-anonymous item-price ascending auction.
Sequential vs. Simultaneous Auctions: A
non-anonymous auction is called simultaneous if at each stage, the price
of some item is raised by ε for every bidder. The auctioneer
can use the information gathered until each stage, in all the
personalized trajectories, to determine the next queries.
A non-anonymous auction is called sequential if the
auctioneer performs an auction for each bidder separately, in
sequential order. The auctioneer can determine the next
query based on the information gathered in the trajectories
completed so far and on the history of the current trajectory.
Proposition 5.11. There are classes that cannot be
elicited by simultaneous non-anonymous item-price ascending
auctions, but can be elicited by a sequential non-anonymous
item-price ascending auction.
Adaptive vs. Oblivious Auctions: If the auctioneer
determines the queries regardless of the bidders' responses (i.e.,
the queries are predefined) we say that the auction is
oblivious. Otherwise, the auction is adaptive. We prove that an
adaptive behaviour of the auctioneer may be beneficial.
Proposition 5.12. There are classes that cannot be
elicited (fully elicited) using oblivious item-price ascending
auctions, but can be elicited (resp. fully elicited) by an adaptive
item-price ascending auction.
5.5 Preference Elicitation vs. Full Elicitation
Preference elicitation and full elicitation are closely
related problems. If full elicitation is easy (e.g., in
polynomial time) then clearly elicitation is also easy (by a
non-anonymous auction, simply by learning all the valuations
separately^24). On the other hand, there are examples where
preference elicitation is considered easy but learning is
hard (typically, elicitation requires a smaller amount of
information; some examples can be found in [7]).
The tatonnement algorithms by [22, 12, 16] end up with
the optimal allocation for substitutes valuations.^25 We prove
that we cannot fully elicit substitutes valuations (or even
their sub-class of OXS valuations defined in [25]), even for a
single bidder, by an item-price ascending auction (although
the optimal allocation can be found by an ascending auction
for any number of bidders!).
Theorem 5.13. Substitute valuations cannot be fully
elicited by ascending item-price auctions. Moreover, they
cannot be fully elicited by any m/2 ascending trajectories (m > 3).
Whether substitutes valuations have a compact
representation (i.e., polynomial in the number of goods) is an
important open question. As a step in this direction, we show
that its sub-class of OXS valuations does have a compact
representation: every OXS valuation can be represented by
at most m² values.^26
Lemma 5.14. Any OXS valuation can be represented by no
more than m² values.
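Under the reading of footnote 26 (an OXS valuation is an OR aggregation of unit-demand sub-bidders), the m² values of the lemma can be stored as an m x m matrix, and v(S) computed as a maximum-weight matching of sub-bidders to the items of S. The sketch below is our illustration of this representation, not code from the paper.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # M[k, j] = unit-demand sub-bidder k's value for item j (m x m values).
    M = np.array([[3.0, 1.0, 0.0],
                  [2.0, 2.0, 1.0],
                  [0.0, 1.0, 4.0]])

    def oxs_value(M, S):
        # v(S): maximum-weight matching of sub-bidders to the items in S.
        sub = M[:, sorted(S)]
        rows, cols = linear_sum_assignment(sub, maximize=True)
        return sub[rows, cols].sum()

    print(oxs_value(M, {0, 2}))   # 7.0: sub-bidder 0 gets item 0, sub-bidder 2 gets item 2
    print(oxs_value(M, {1}))      # 2.0: the single best sub-bidder on item 1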
6. BUNDLE-PRICE ASCENDING
AUCTIONS
All the ascending auctions in the literature that are proved
to find the optimal allocation for unrestricted valuations are
non-anonymous bundle-price auctions (iBundle(3) by Parkes
and Ungar [37] and the Proxy Auction by Ausubel and
Milgrom [3]). Yet, several anonymous ascending auctions
have been suggested (e.g., AkBA [42], [21] and iBundle(2)
[37]). In this section, we prove that anonymous bundle-price
ascending auctions achieve poor results in the worst-case.
We also show that the family of non-anonymous
bundle-price ascending auctions can run exponentially slower than
simple item-price ascending auctions.
6.1 Limitations of Anonymous Bundle-Price
Ascending Auctions
We present a class of valuations that cannot be elicited
by anonymous bundle-price ascending auctions. These
valuations are described in Figure 7. The basic idea: for
determining some unknown value of one bidder we must raise
^24 Note that an anonymous ascending auction cannot
necessarily elicit a class that can be fully elicited by an ascending
auction.
^25 Substitute valuations are defined, e.g., in [16]. Roughly
speaking, a bidder with a substitute valuation will continue
to demand a certain item after the prices of some other items
are increased. For completeness, we present in the full paper
[9] a proof for the efficiency of such auctions for substitutes
valuations.
^26 A unit-demand valuation is an XOR valuation in which
all the atomic bundles are singletons. OXS valuations can
be interpreted as an aggregation (OR) of any number of
unit-demand bidders.
Bidder 1:  v1(ac) = 2   v1(bd) = 2   v1(cd) = α ∈ (0, 1)
Bidder 2:  v2(ab) = 2   v2(cd) = 2   v2(bd) = β ∈ (0, 1)
Figure 7: Anonymous ascending bundle-price auctions cannot determine the optimal allocation for this class of valuations.
a price of a bundle that should be demanded by the other
bidder in the future.
Theorem 6.1. Some classes of valuations cannot be
elicited by anonymous bundle-price ascending auctions.
Proof. Consider a pair of XOR valuations as described
in Figure 7. For finding the optimal allocation we must know
which value is greater between α and β.^27 However, we
cannot learn the value of both α and β by a single ascending
trajectory: assume w.l.o.g. that bidder 1 demands cd before
bidder 2 demands bd (no information will be elicited if none
of these happens). In this case, the price for bd must be
greater than 1 (otherwise, bidder 1 prefers bd to cd). Thus,
bidder 2 will never demand the bundle bd, and no
information will be elicited about β.
The valuations described in the proof of Theorem 6.1 can
be easily elicited by a non-anonymous item-price ascending
auction. On the other hand, the valuations in Figure 6 can
be easily elicited by an anonymous bundle-price ascending
auction. We conclude that the power of these two families
of ascending auctions is incomparable.
We strengthen the impossibility result above by showing
that anonymous bundle-price auctions cannot even achieve
better than a min{O(n), O(√m)}-approximation for the
social welfare. This approximation ratio can be achieved with
polynomial communication, and specifically with a
polynomial number of item-price demand queries.^28
Theorem 6.2. An anonymous bundle-price ascending
auction cannot guarantee better than a min{n/2, √m/2}-approximation
for the optimal welfare.
Proof. (Sketch) Assume we have n bidders and n² items
for sale, and that n is prime. We construct n² distinct bundles
with the following properties: for each bidder i, we define
a partition S^i = (S^i_1, ..., S^i_n) of the n² items into n bundles,
such that any two bundles from different partitions intersect.
In the full paper, part II [9], we show an explicit construction
using the properties of linear functions over finite fields. The
rest of the proof is independent of the specific construction.
Using these n² bundles we construct a hard-to-elicit class.
Every bidder has an atomic bid, in his XOR valuation,
for each of these n² bundles. A bidder i has a value of 2 for
any bundle S^i_j in his partition. For all bundles in the other
partitions, he has a value of either 0 or 1 − δ, and these
values are unknown to the auctioneer. Since every pair of
bundles from different partitions intersect, only one bidder
can receive a bundle with a value of 2.
^27 If α > β, the optimal allocation will allocate cd to bidder
1 and ab to bidder 2. Otherwise, we give bd to bidder 2 and
ac to bidder 1. Note that both bidders cannot gain a value
of 2 in the same allocation, due to the intersections of the
high-valued bundles.
^28 Note that bundle-price queries may use exponential
communication, thus the lower bound of [32] does not hold.
Non-anonymous Bundle-Price Economically-Efficient
Ascending Auctions:
Initialization: All prices are initialized to zero
(non-anonymous bundle prices).
Repeat: - Each bidder submits a bundle that maximizes his
utility under his current personalized prices.
- The auctioneer calculates a provisional allocation that
maximizes his revenue under the current prices.
- The prices of bundles that were demanded by losing
bidders are increased by ε.
Finally: Terminate when the provisional allocation assigns
to each bidder the bundle he demanded.
Figure 8: Auctions from this family (denoted by NBEA
auctions) are known to achieve the optimal welfare.
No bidder will demand a low-valued bundle as long as the
price of one of his high-valued bundles is below 1 (and thus
gains him a utility greater than 1). Therefore, for eliciting
any information about the low-valued bundles, the
auctioneer should first arbitrarily choose a bidder (w.l.o.g. bidder
1) and raise the prices of all the bundles (S^1_1, ..., S^1_n) to be
greater than 1. Since the prices cannot decrease, the other
bidders will clearly never demand these bundles in future
stages. An adversary may choose the values such that the
low values of all the bidders for the bundles not in bidder 1's
partition are zero (i.e., vi(S^1_j) = 0 for every i ≠ 1 and every
j); however, allocating each bidder a different bundle from
bidder 1's partition might achieve a welfare of n+1−(n−1)δ
(bidder 1's value is 2, and 1 − δ for each of the other bidders).
If these bundles were wrongly allocated, only a welfare of
2 might be achieved (2 for bidder 1's high-valued bundle, 0
for all other bidders). At this point, the auctioneer cannot
have any information about the identity of the bundles with
the non-zero values. Therefore, an adversary can choose the
values of the bundles received by bidders 2, ..., n in the final
allocation to be zero. We conclude that anonymous
bundle-price auctions cannot guarantee a welfare greater than 2 for
this class, where the optimal welfare can be arbitrarily close
to n + 1.
6.2 Bundle Prices vs. Item Prices
The core of the auctions in [37, 3] is the scheme described
in Figure 8 (in the spirit of [35]) for auctions with
non-anonymous bundle prices. Auctions from this scheme end
up with the optimal allocation for any class of valuations.
We denote this family of ascending auctions as NBEA
auctions.^29
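For concreteness, here is a small Python sketch (ours) of the Figure 8 scheme for two XOR bidders. It deliberately simplifies the provisional-allocation step: each bidder is either assigned his currently demanded bundle or nothing, with conflicts resolved by brute force toward maximal revenue; actual NBEA auctions such as iBundle(3) are more refined, so treat this as a sketch of the scheme rather than of those auctions.

    from itertools import chain, combinations, product

    EPS = 0.05
    ITEMS = ['a', 'b']
    ATOMS = [{frozenset('a'): 1.0, frozenset('ab'): 3.0},    # bidder 0 (XOR)
             {frozenset('b'): 2.0, frozenset('ab'): 2.5}]    # bidder 1 (XOR)
    BUNDLES = [frozenset(s) for s in chain.from_iterable(
        combinations(ITEMS, r) for r in range(len(ITEMS) + 1))]

    def value(i, S):                 # XOR: best atomic bundle contained in S
        return max([x for A, x in ATOMS[i].items() if A <= S], default=0.0)

    def demand(i, prices):
        return max(BUNDLES, key=lambda S: value(i, S) - prices[i][S])

    def provisional(demands, prices):
        # Revenue-maximizing assignment: every bidder gets his demanded
        # bundle or nothing, and no item is assigned twice.
        best, best_rev = None, -1.0
        for take in product([False, True], repeat=len(demands)):
            chosen = [demands[i] if t else frozenset() for i, t in enumerate(take)]
            disjoint = all(not (chosen[i] & chosen[j])
                           for i in range(len(chosen))
                           for j in range(i + 1, len(chosen)))
            rev = sum(prices[i][chosen[i]] for i in range(len(chosen)))
            if disjoint and rev > best_rev:
                best, best_rev = chosen, rev
        return best

    prices = [{S: 0.0 for S in BUNDLES} for _ in ATOMS]   # non-anonymous bundle prices
    while True:
        demands = [demand(i, prices) for i in range(len(ATOMS))]
        alloc = provisional(demands, prices)
        losers = [i for i in range(len(ATOMS)) if alloc[i] != demands[i]]
        if not losers:
            break                          # everyone receives his demanded bundle
        for i in losers:
            prices[i][demands[i]] += EPS   # raise prices of losing demands
    print([''.join(sorted(S)) for S in alloc])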
NBEA auctions can elicit k-term XOR valuations by a
polynomial (in k) number of steps, although the elicitation
of such valuations may require an exponential number of
item-price queries ([7]), and item-price ascending auctions
cannot do it at all (Theorem 5.4). Nevertheless, we show
that NBEA auctions (and in particular, iBundle(3) and the
proxy auction) are sometimes inferior to simple item-price
demand auctions. This may justify the use of hybrid
auctions that use both linear and non-linear prices (e.g., the
clock-proxy auction [10]). We show that auctions from this
^29 Non-anonymous Bundle-price Economically-efficient
Ascending auctions. For completeness, we give in the full
paper [9] a simple proof for the efficiency (up to an ε) of
auctions of this scheme.
family may use an exponential number of queries even for
determining the optimal allocation among two bidders with
additive valuations,^30 where such valuations can be elicited
by a simple item-price ascending auction. We actually prove
this property for a wider class of auctions we call
conservative auctions. We also observe that in conservative auctions,
allowing the bidders to submit all the bundles in their
demand sets ensures that the auction runs a polynomial
number of steps, provided the maximal valuation L is not too
high (but with exponential communication, of course).
An ascending auction is called conservative if it is
non-anonymous, uses bundle prices initialized to zero, and at
every stage the auctioneer can only raise prices of bundles
demanded by the bidders until this stage. In addition, each
bidder can only receive bundles he demanded during the
auction. Note that NBEA auctions are by definition
conservative.
Proposition 6.3. If every bidder demands a single
bundle in each step of the auction, conservative auctions may
run for an exponential number of steps even for additive
valuations. If the bidders are allowed to submit all the bundles
in their demand sets in each step, then conservative auctions
can run in a polynomial number of steps for any profile of
valuations, as long as the maximal valuation L is polynomial
in m, n and 1/δ.
Acknowledgments:
The authors thank Moshe Babaioff, Shahar Dobzinski, Ron
Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael
Schapira and Ilya Segal for helpful discussions. Supported
by grants from the Israeli Academy of Sciences and the
US-Israel Binational Science Foundation.
7. REFERENCES
[1] amazon. Web Page: http://www.amazon.com.
[2] ebay. Web Page: http://www.ebay.com.
[3] L. M. Ausubel and P. R. Milgrom. Ascending auctions
with package bidding. Frontiers of Theoretical
Economics, 1:1-42, 2002.
[4] Lawrence Ausubel. An efficient dynamic auction for
heterogeneous commodities, 2000. Working paper,
University of Maryland.
[5] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive
compatible multi unit combinatorial auctions. In
TARK 03, 2003.
[6] Alejandro Bertelsen. Substitutes valuations and
M♮-concavity. M.Sc. Thesis, The Hebrew University
of Jerusalem, 2005.
[7] Avrim Blum, Jeffrey C. Jackson, Tuomas Sandholm,
and Martin A. Zinkevich. Preference elicitation and
query learning. Journal of Machine Learning
Research, 5:649-667, 2004.
[8] Liad Blumrosen and Noam Nisan. On the
computational power of iterative auctions I: demand
queries. Working paper, The Hebrew University of
^30 Valuations are called additive if for any disjoint bundles A
and B, v(A ∪ B) = v(A) + v(B). Additive valuations are
both sub-additive and super-additive and are determined by
the m values assigned for the singletons.
Jerusalem. Available from
http://www.cs.huji.ac.il/~noam/mkts.html.
[9] Liad Blumrosen and Noam Nisan. On the
computational power of iterative auctions II:
ascending auctions. Working paper, The Hebrew
University of Jerusalem. Available from
http://www.cs.huji.ac.il/~noam/mkts.html.
[10] P. Cramton, L.M. Ausubel, and P.R. Milgrom. In P.
Cramton and Y. Shoham and R. Steinberg (Editors),
Combinatorial Auctions. Chapter 5. The Clock-Proxy
Auction: A Practical Combinatorial Auction Design.
MIT Press. Forthcoming, 2005.
[11] P. Cramton, Y. Shoham, and R. Steinberg (Editors).
Combinatorial Auctions. MIT Press. Forthcoming,
2005.
[12] G. Demange, D. Gale, and M. Sotomayor. Multi-item
auctions. Journal of Political Economy, 94:863-872,
1986.
[13] Shahar Dobzinski, Noam Nisan, and Michael Schapira.
Approximation algorithms for CAs with
complement-free bidders. In The 37th ACM
symposium on theory of computing (STOC)., 2005.
[14] Shahar Dobzinski and Michael Schapira. Optimal
upper and lower approximation bounds for
k-duplicates combinatorial auctions. Working paper,
the Hebrew University.
[15] Combinatorial bidding conference. Web Page:
http://wireless.fcc.gov/auctions/conferences/combin2003.
[16] Faruk Gul and Ennio Stacchetti. Walrasian
equilibrium with gross substitutes. Journal of
Economic Theory, 87:95 - 124, 1999.
[17] Faruk Gul and Ennio Stacchetti. The english auction
with differentiated commodities. Journal of Economic
Theory, 92(3):66 - 95, 2000.
[18] J. Hastad. Almost optimal lower bounds for small
depth circuits. In 18th STOC, pages 6-20, 1986.
[19] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and
Moshe Tennenholtz. Bundling equilibrium in
combinatrial auctions. Games and Economic
Behavior, 47:104-123, 2004.
[20] H. Karloff. Linear Programming. Birkhäuser Verlag,
1991.
[21] Frank Kelly and Richard Steinberg. A combinatorial
auction with multiple winners for universal service.
Management Science, 46:586-596, 2000.
[22] A.S. Kelso and V.P. Crawford. Job matching,
coalition formation, and gross substitute.
Econometrica, 50:1483-1504, 1982.
[23] Subhash Khot, Richard J. Lipton, Evangelos
Markakis, and Aranyak Mehta. Inapproximability
results for combinatorial auctions with submodular
utility functions. In Working paper., 2004.
[24] Sebastien Lahaie and David C. Parkes. Applying
learning algorithms to preference elicitation. In EC 04.
[25] Benny Lehmann, Daniel Lehmann, and Noam Nisan.
Combinatorial auctions with decreasing marginal
utilities. In ACM conference on electronic commerce.
To appear, Games and Economic Behaviour., 2001.
[26] D. Lehmann, L. O'Callaghan, and Y. Shoham. Truth
revelation in approximately efficient combinatorial
auctions. JACM, 49(5):577-602, Sept. 2002.
[27] A. Mas-Collel, W. Whinston, and J. Green.
Microeconomic Theory. Oxford university press, 1995.
[28] Debasis Mishra and David Parkes. Ascending price
vickrey auctions using primal-dual algorithms., 2004.
Working paper, Harvard University.
[29] Noam Nisan. The communication complexity of
approximate set packing and covering. In ICALP 2002.
[30] Noam Nisan. Bidding and allocation in combinatorial
auctions. In ACM Conference on Electronic
Commerce, 2000.
[31] Noam Nisan. In P. Cramton and Y. Shoham and R.
Steinberg (Editors), Combinatorial Auctions. Chapter
1. Bidding Languages. MIT Press. Forthcoming, 2005.
[32] Noam Nisan and Ilya Segal. The communication
requirements of efficient allocations and supporting
prices, 2003. Working paper. Available from
http://www.cs.huji.ac.il/~noam/mkts.html
Forthcoming in the Journal of Economic Theory.
[33] Noam Nisan and Ilya Segal. Exponential
communication inefficiency of demand queries, 2004.
Working paper. Available from
http://www.stanford.edu/~isegal/queries1.pdf.
[34] D. C. Parkes and L. H. Ungar. An ascending-price
generalized vickrey auction. Tech. Rep., Harvard
University, 2002.
[35] David Parkes. In P. Cramton and Y. Shoham and R.
Steinberg (Editors), Combinatorial Auctions. Chapter
3. Iterative Combinatorial Auctions. MIT Press.
Forthcoming, 2005.
[36] David C. Parkes. Iterative combinatorial auctions:
Achieving economic and computational efficiency.
Ph.D. Thesis, Department of Computer and
Information Science, University of Pennsylvania.,
2001.
[37] David C. Parkes and Lyle H. Ungar. Iterative
combinatorial auctions: Theory and practice. In
AAAI/IAAI, pages 74-81, 2000.
[38] Ariel Rubinstein. Why are certain properties of binary
relations relatively more common in natural
languages. Econometrica, 64:343-356, 1996.
[39] Tuomas Sandholm. Algorithm for optimal winner
determination in combinatorial auctions. In Artificial
Intelligence, volume 135, pages 1-54, 2002.
[40] P. Santi, V. Conitzer, and T. Sandholm. Towards a
characterization of polynomial preference elicitation
with value queries in combinatorial auctions. In The
17th Annual Conference on Learning Theory, 2004.
[41] Ilya Segal. The communication requirements of social
choice rules and supporting budget sets, 2004.
Working paper. Available from
http://www.stanford.edu/~isegal/rules.pdf.
[42] P.R. Wurman and M.P. Wellman. Akba: A
progressive, anonymous-price combinatorial auction.
In Second ACM Conference on Electronic Commerce,
2000.
[43] Martin A. Zinkevich, Avrim Blum, and Tuomas
Sandholm. On polynomial-time preference elicitation
with value queries. In ACM Conference on Electronic
Commerce, 2003.
| bound;ascend auction;combinatorial auction;price;optimal allocation;ascending-price auction;approximation factor;demand query;bidder;polynomial demand;preference elicitation;communication complexity |
train_J-49 | Information Markets vs. Opinion Pools: An Empirical Comparison | In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the market probabilities given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods. | 1. INTRODUCTION
Forecasting is a ubiquitous endeavor in human societies.
For decades, scientists have been developing and exploring
various forecasting methods, which can be roughly divided
into statistical and non-statistical approaches. Statistical
approaches require not only the existence of enough
historical data but also that past data contains valuable
information about the future event. When these conditions can
not be met, non-statistical approaches that rely on
judgmental information about the future event could be better
choices. One widely used non-statistical method is to elicit
opinions from experts. Since experts are not generally in
agreement, many belief aggregation methods have been
proposed to combine expert opinions together and form a
single prediction. These belief aggregation methods are called
opinion pools, which have been extensively studied in
statistics [20, 24, 38], and management sciences [8, 9, 30, 31], and
applied in many domains such as group decision making [29]
and risk analysis [12].
With the fast growth of the Internet, information markets
have recently emerged as a promising non-statistical
forecasting tool. Information markets (sometimes called
prediction markets, idea markets, or event markets) are
markets designed for aggregating information and making
predictions about future events. To form the predictions,
information markets tie payoffs of securities to outcomes of
events. For example, in an information market to predict
the result of a US professional National Football League
(NFL) game, say New England vs Carolina, the security
pays a certain amount of money per share to its holders if
and only if New England wins the game. Otherwise, it pays
off nothing. The security price before the game reflects the
consensus expectation of market traders about the
probability of New England winning the game. Such markets
are becoming very popular. The Iowa Electronic Markets
(IEM) [2] are real-money futures markets to predict
economic and political events such as elections. The
Hollywood Stock Exchange (HSX) [3] is a virtual (play-money)
exchange for trading securities to forecast future box office
proceeds of new movies, and the outcomes of entertainment
awards, etc. TradeSports.com [7], a real-money betting
exchange registered in Ireland, hosts markets for sports,
political, entertainment, and financial events. The Foresight
Exchange (FX) [4] allows traders to wager play money on
unresolved scientific questions or other claims of public
interest, and NewsFutures.com"s World News Exchange [1] has
popular sports and financial betting markets, also grounded
in a play-money currency.
Despite the popularity of information markets, one of the
most important questions to ask is: how accurately can
information markets predict? Previous research in general
shows that information markets are remarkably accurate.
The political election markets at IEM predict the election
outcomes better than polls [16, 17, 18, 19]. Prices in HSX
and FX have been found to give as accurate or more
accurate predictions than judgment of individual experts [33,
34, 37]. However, information markets have not been
calibrated against opinion pools, except for Servan-Schreiber
et al. [36], in which the authors compare two information
markets against the arithmetic average of expert opinions. Since
information markets, in nature, offer an adaptive and
selforganized mechanism to aggregate opinions of market
participants, it is interesting to compare them with existing
opinion pooling methods, to evaluate the performance of
information markets from another perspective. The
comparison will provide beneficial guidance for practitioners to
choose the most appropriate method for their needs.
This paper contributes to the literature in two ways: (1)
As an initial attempt to compare information markets with
opinion pools of multiple experts, it leads to a better
understanding of information markets and their promise as an
alternative institution for obtaining accurate forecasts; (2)
In screening opinion pools to be used in the comparison, we
gain insights into the relative performance of different opinion
pools. In terms of prediction accuracy, we compare two
information markets with several linear and logarithmic
opinion pools (LinOP and LogOP) at predicting the results of
NFL games. Our results show that at the same time point
ahead of the game, information markets provide as accurate
predictions as our carefully selected opinion pools. In
selecting the opinion pools to be used in our comparison, we
find that arithmetic average is a robust and efficient pooling
function; weighting expert assessments according to their
past performances does not improve the prediction accuracy
of opinion pools; and LogOP offers bolder predictions than
LinOP. The remainder of the paper is organized as follows.
Section 2 reviews popular opinion pooling methods.
Section 3 introduces the basics of information markets. Data
sets and our analysis methods are described in Section 4.
We present results and analysis in Section 5, followed by
conclusions in Section 6.
2. REVIEW OF OPINION POOLS
Clemen and Winkler [12] classify opinion pooling
methods into two broad categories: mathematical approaches
and behavioral approaches. In mathematical approaches,
the opinions of individual experts are expressed as
subjective probability distributions over outcomes of an
uncertain event. They are combined through various
mathematical methods to form an aggregated probability
distribution. Genest and Zidek [24] and French [20] provide
comprehensive reviews of mathematical approaches. Mathematical
approaches can be further divided into axiomatic
approaches and Bayesian approaches. Axiomatic approaches
apply prespecified functions that map expert opinions,
expressed as a set of individual probability distributions, to
a single aggregated probability distribution. These
pooling functions are justified using axioms or certain desirable
properties. Two of the most common pooling functions are
the linear opinion pool (LinOP) and the logarithmic opinion
pool (LogOP). Using LinOP, the aggregate probability
distribution is a weighted arithmetic mean of individual
probability distributions:
p(θ) = Σ_{i=1}^{n} wi·pi(θ),   (1)
where pi(θ) is expert i's probability distribution over uncertain
event θ, p(θ) represents the aggregate probability distribution,
the wi's are weights for the experts, which are usually
non-negative and sum to 1, and n is the number of experts. Using
LogOP, the aggregate probability distribution is a weighted
geometric mean of individual probability distributions:
p(θ) = k·Π_{i=1}^{n} pi(θ)^{wi},   (2)
where k is a normalization constant to ensure that the pooled
opinion is a probability distribution. Other axiomatic
pooling methods often are extensions of LinOP [22], LogOP [23],
or both [13]. Winkler [39] and Morris [29, 30] establish the
early framework of Bayesian aggregation methods. Bayesian
approaches assume as if there is a decision maker who has a
prior probability distribution over event θ and a likelihood
function over expert opinions given the event. This decision
maker takes expert opinions as evidence and updates its
priors over the event and opinions according to Bayes' rule. The
resulting posterior probability distribution of θ is the pooled
opinion.
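For the binary win/lose events studied later in this paper, the two axiomatic pools reduce to a few lines of code. The sketch below is ours (the function names and equal-weight defaults are assumptions):

    import numpy as np

    def lin_op(probs, weights=None):
        # Linear opinion pool (eq. 1): weighted arithmetic mean.
        p = np.asarray(probs, dtype=float)
        w = np.full(len(p), 1.0 / len(p)) if weights is None else np.asarray(weights)
        return float(w @ p)

    def log_op(probs, weights=None):
        # Logarithmic opinion pool (eq. 2) for a binary event: pool both
        # outcomes geometrically, then renormalize (the constant k).
        p = np.asarray(probs, dtype=float)
        w = np.full(len(p), 1.0 / len(p)) if weights is None else np.asarray(weights)
        win, lose = np.prod(p ** w), np.prod((1.0 - p) ** w)
        return float(win / (win + lose))

    experts = [0.6, 0.7, 0.9]    # three experts' P(team wins)
    print(lin_op(experts))       # 0.733...
    print(log_op(experts))       # ~0.76: bolder (further from 0.5) than LinOP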
Behavioral approaches have been widely studied in the
field of group decision making and organizational
behavior. The important assumption of behavioral approaches is
that, through exchanging opinions or information, experts
can eventually reach an equilibrium where further
interaction won't change their opinions. One of the best known
behavioral approaches is the Delphi technique [28].
Typically, this method and its variants do not allow open
discussion, but each expert has a chance to judge the opinions of
other experts, and is given feedback. Experts can then reassess
their opinions and repeat the process until a consensus or a
smaller spread of opinions is achieved. Some other
behavioral methods, such as the Nominal Group technique [14],
promote open discussions in controlled environments.
Each approach has its pros and cons. Axiomatic
approaches are easy to use. But they don't have a normative
basis to choose weights. In addition, several impossibility
results (e.g., Genest [21]) show that no aggregation
function can satisfy all desired properties of an opinion pool,
unless the pooled opinion degenerates to a single individual
opinion, which effectively implies a dictator. Bayesian
approaches are nicely based on the normative Bayesian
framework. However, it is sometimes frustratingly difficult to
apply because it requires either (1) constructing an obscenely
complex joint prior over the event and opinions (often
impractical even in terms of storage / space complexity, not
to mention from an elicitation standpoint) or (2) making
strong assumptions about the prior, like conditional
independence of experts. Behavioral approaches allow experts to
dynamically improve their information and revise their
opinions during interactions, but many of them are not fixed or
completely specified, and can't guarantee convergence or
repeatability.
3. HOW INFORMATION MARKETS WORK
Much of the enthusiasm for information markets stems
from Hayek hypothesis [26] and efficient market
hypothesis [15]. Hayek, in his classic critique of central planning in
1940"s, claims that the price system in a competitive market
is a very efficient mechanism to aggregate dispersed
information among market participants. The efficient market
hypothesis further states that, in an efficient market, the
price of a security almost instantly incorporates all
available information. The market price summarizes all relevant
information across traders, hence is the market participants'
consensus expectation about the future value of the security.
Empirical evidence supports both hypotheses to a large
extent [25, 27, 35]. Thus, when associating the value of a
security with the outcome of an uncertain future event, market
price, by revealing the consensus expectation of the security
value, can indirectly predict the outcome of the event. This
idea gives rise to information markets.
For example, if we want to predict which team will win
the NFL game between New England and Carolina, an
information market can trade a security "$100 if New England
defeats Carolina", whose payoff per share at the end of the
game is specified as follows:
$100 if New England wins the game;
$0 otherwise.
The security price should roughly equal the expected payoff
of the security in an efficient market. The time value of
money usually can be ignored because durations of most
information markets are short. Assuming exposure to risk is
roughly equal for both outcomes, or that there are sufficient
effectively risk-neutral speculators in the market, the price
should not be biased by the risk attitudes of various players
in the market. Thus,
p = Pr(Patriots win) × 100 + [1 − Pr(Patriots win)] × 0,
where p is the price of the security "$100 if New England
defeats Carolina" and Pr(Patriots win) is the probability
that New England will win the game. Observing the security
price p before the game, we can derive Pr(Patriots win),
which is the market participants" collective prediction about
how likely it is that New England will win the game.
The above security is a winner-takes-all contract. It is
used when the event to be predicted is a discrete random
variable with disjoint outcomes (in this case binary). Its
price predicts the probability that a specific outcome will be
realized. When the outcome of a prediction problem can be
any value in a continuous interval, we can design a security
that pays its holder proportional to the realized value. This
kind of security is what Wolfers and Zitzewitz [40] called
an index contract. It predicts the expected value of a
future outcome. Many other aspects of a future event such as
median value of outcome can also be predicted in
information markets by designing and trading different securities.
Wolfers and Zitzewitz [40] provide a summary of the main
types of securities traded in information markets and what
statistical properties they can predict. In practice,
conceiving a security for a prediction problem is only one of the
many decisions in designing an effective information
market. Spann and Skiera [37] propose an initial framework for
designing information markets.
4. DESIGN OF ANALYSIS
4.1 Data Sets
Our data sets cover 210 NFL games held between
September 28th, 2003 and December 28th, 2003. NFL games are
very suitable for our purposes because: (1) two online
exchanges and one online prediction contest already exist that
provide data on both information markets and the
opinions of self-identified experts for the same set of games; (2)
the popularity of NFL games in the United States provides
natural incentives for people to participate in information
markets and/or the contest, which increases liquidity of
information markets and improves the quality and number
of opinions in the contest; (3) intense media coverage and
analysis of the profiles and strengths of teams and
individual players provide the public with much information so that
participants of information markets and the contest can be
viewed as knowledgeable with regard to the forecasting goal.
Information market data was acquired, using a
specially designed crawler program, from TradeSports.com's
Football-NFL markets [7] and NewsFutures.com's Sports
Exchange [1]. For each NFL game, both TradeSports and
NewsFutures have a winner-takes-all information market to
predict the game outcome. We introduce the design of the
two markets according to Spann and Skiera's three steps for
designing an information market [37], as below.
• Choice of forecasting goal: Markets at both
TradeSports and NewsFutures aim at predicting which one
of the two teams will win a NFL football game. They
trade similar winner-takes-all securities that pay off
100 if a team wins the game and 0 if it loses the game.
Small differences exist in how they deal with ties. In
the case of a tie, TradeSports will unwind all trades
that occurred and refund all exchange fees, but the
security is worth 50 in NewsFutures. Since the
probability of a tie is usually very low (much less than 1%),
prices at both markets effectively represent the market
participants' consensus assessment of the probability
that the team will win.
• Incentive for participation and information
revelation: TradeSports and NewsFutures use different
incentives for participation and information revelation.
TradeSports is a real-money exchange. A trader needs
to open and fund an account with a minimum of $100
to participate in the market. Both profits and losses
can occur as a result of trading activity. On the
contrary, a trader can register at NewsFutures for free and
get 2000 units of Sport Exchange virtual money at the
time of registration. Traders at NewsFutures will not
incur any real financial loss. They can accumulate
virtual money by trading securities. The virtual money
can then be used to bid for a few real prizes at
NewsFutures' online shop.
• Financial market design: Both markets at
TradeSports and NewsFutures use the continuous double
auction as their trading mechanism. TradeSports charges
a small fee on each security transaction and expiry,
while NewsFutures does not.
We can see that the main difference between the two information
markets is real money vs. virtual money. Servan-Schreiber
et al. [36] have compared the effect of money on the
performance of the two information markets and concluded that
the prediction accuracy of the two markets is at about the
same level. Not intending to compare these two markets,
we still use both markets in our analysis to ensure that our
findings are not accidental.
We obtain the opinions of 1966 self-identified experts for
NFL games from the ProbabilityFootball online contest [5],
one of several ProbabilitySports contests [6]. The contest is
free to enter. Participants of the contest are asked to enter
their subjective probability that a team will win a game
by noon on the day of the game. Importantly, the contest
evaluates the participants' performance via the quadratic
scoring rule:
s = 100 − 400 × Prob Lose²,   (3)
where s represents the score that a participant earns for the
game, and Prob Lose is the probability that the participant
assigns to the actual losing team. The quadratic score is
one of a family of so-called proper scoring rules that have
the property that an expert's expected score is maximized
when the expert reports probabilities truthfully. For
example, for a game team A vs. team B, if a player assigns 0.5 to
both team A and B, his/her score for the game is 0 no matter
which team wins. If he/she assigns 0.8 to team A and 0.2 to
team B, showing confidence in team A's winning,
he/she will score 84 points for the game if team A wins, and
lose 156 points if team B wins. This quadratic scoring rule
rewards bold predictions that are right, but penalizes bold
predictions that turn out to be wrong. The top players,
measured by accumulated scores over all games, win the prizes
of the contest. The suggested strategy at the contest
website is to make picks for each game that match, as closely
as possible, the probabilities that each team will win. This
strategy is correct if the participant seeks to maximize
expected score. However, as prizes are awarded only to the top
few winners, participants' goals are to maximize the
probability of winning, not to maximize expected score, resulting in a
slightly different and more risk-seeking optimization.^1 Still,
as far as we are aware, these data offer the closest thing
available to true subjective probability judgments from so many
people over so many public events that have corresponding
information markets.
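The contest's scoring rule (equation 3) and the worked example above, in a few lines (our own sketch):

    def quadratic_score(prob_lose):
        # Eq. 3: score for one game, given the probability assigned
        # to the eventual losing team.
        return 100 - 400 * prob_lose ** 2

    print(quadratic_score(0.2))   # ~84: bold and right (0.8 on the winner)
    print(quadratic_score(0.8))   # ~-156: bold and wrong (0.8 on the loser)
    print(quadratic_score(0.5))   # 0: a 50/50 pick scores 0 either way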
4.2 Methods of Analysis
In order to compare the prediction accuracy of
information markets and that of opinion pools, we proceed to derive
predictions from market data of TradeSports and
NewsFutures, form pooled opinions using expert data from
ProbabilityFootball contest, and specify the performance measures
to be used.
4.2.1 Deriving Predictions
For information markets, deriving predictions is
straightforward. We can take the security price and divide it by
100 to get the market's prediction of the probability that
a team will win. To match the time when participants at
the ProbabilityFootball contest are required to report their
probability assessments, we derive predictions using the last
trade price before noon on the day of the game. For more
^1 Ideally, prizes would be awarded by lottery in proportion
to accumulated score.
than half of the games, this time is only about an hour
earlier than the game starting time, while it is several hours
earlier for other games. Two sets of market predictions are
derived:
• NF: Prediction equals NewsFutures' last trade price
before noon of the game day divided by 100.
• TS: Prediction equals TradeSports' last trade price
before noon of the game day divided by 100.
We apply LinOP and LogOP to ProbabilityFootball data
to obtain aggregate expert predictions. The reason that we
do not consider other aggregation methods include: (1) data
from ProbabilityFootball is only suitable for mathematical
pooling methods-we can rule out behavioral approaches,
(2) Bayesian aggregation requires us to make assumptions
about the prior probability distribution of game outcomes
and the likelihood function of expert opinions: given the
large number of games and participants, making reasonable
assumptions is difficult, and (3) for axiomatic approaches,
previous research has shown that simpler aggregation
methods often perform better than more complex methods [12].
Because the output of LogOP is indeterminate if there are
probability assessments of both 0 and 1 (and because
assessments of 0 and 1 are dictatorial using LogOP), we add
a small number 0.01 to an expert opinion if it is 0, and
subtract 0.01 from it if it is 1.
In pooling opinions, we consider two influencing factors:
weights of experts and number of expert opinions to be
pooled. For weights of experts, we experiment with equal
weights and performance-based weights. The
performance-based weights are determined according to previously
accumulated scores in the contest. The score for each game is
calculated according to equation 3, the scoring rule used in
the ProbabilityFootball contest. For the first week, since no
previous scores are available, we choose equal weights. For
later weeks, we calculate accumulated past scores for each
player. Because the cumulative scores can be negative, we
shift everyone"s score if needed to ensure the weights are
non-negative. Thus,
wi = (cumulative score_i + shift) / Σ_{j=1}^{n} (cumulative score_j + shift).   (4)
where shift equals 0 if the smallest cumulative score_j is
non-negative, and equals the absolute value of the smallest
cumulative score_j otherwise. For simplicity, we call the
performance-weighted opinion pool "weighted" and the equally
weighted opinion pool "unweighted". We will use these terms
interchangeably in the remainder of the paper.
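A small sketch of the weight computation in equation 4 (our code; the inputs are the accumulated quadratic scores described above):

    import numpy as np

    def performance_weights(cumulative_scores):
        # Eq. 4: shift all scores so the smallest is 0 if any are
        # negative, then normalize so the weights sum to 1.
        s = np.asarray(cumulative_scores, dtype=float)
        shifted = s + max(0.0, -s.min())
        return shifted / shifted.sum()

    print(performance_weights([120.0, -40.0, 30.0]))   # ~[0.696, 0.0, 0.304]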
As for the number of opinions used in an opinion pool,
we form different opinion pools with different numbers of
experts. Only the best performing experts are selected. For
example, to form an opinion pool with 20 expert opinions,
we choose the top 20 participants. Since there is no
performance record for the first week, we use opinions of all
participants in the first week. For week 2, we select opinions of
20 individuals whose scores in the first week are among the
top 20. For week 3, 20 individuals whose cumulative scores
of weeks 1 and 2 are among the top 20 are selected. Experts
are chosen in a similar way for later weeks. Thus, the top
20 participants can change from week to week.
The possible opinion pools, varied in pooling functions,
weighting methods, and number of expert opinions, are shown
Table 1: Pooled Expert Predictions
#   Symbol      Description
1   Lin-All-u   Unweighted (equally weighted) LinOP of all experts.
2   Lin-All-w   Weighted (performance-weighted) LinOP of all experts.
3   Lin-n-u     Unweighted (equally weighted) LinOP with n experts.
4   Lin-n-w     Weighted (performance-weighted) LinOP with n experts.
5   Log-All-u   Unweighted (equally weighted) LogOP of all experts.
6   Log-All-w   Weighted (performance-weighted) LogOP of all experts.
7   Log-n-u     Unweighted (equally weighted) LogOP with n experts.
8   Log-n-w     Weighted (performance-weighted) LogOP with n experts.
in Table 1. Lin represents linear, and Log represents
Logarithmic. n is the number of expert opinions that are
pooled, and All indicates that all opinions are combined.
We use u to symbolize unweighted (equally weighted)
opinion pools. w is used for weighted (performance-weighted)
opinion pools. Lin-All-u, the equally weighted LinOP with
all participants, is basically the arithmetic mean of all
participants" opinions. Log-All-u is simply the geometric mean
of all opinions.
When a participant did not enter a prediction for a
particular game, that participant was removed from the opinion
pool for that game. This contrasts with the
ProbabilityFootball average reported on the contest website and used
by Servan-Schreiber et al. [36], where unreported
predictions were converted to 0.5 probability predictions.
4.2.2 Performance Measures
We use three common metrics to assess prediction
accuracy of information markets and opinion pools. These
measures have been used by Servan-Schreiber et. al [36] in
evaluating the prediction accuracy of information markets.
1. Absolute Error = Prob Lose,
where Prob Lose is the probability assigned to the
eventual losing team. Absolute error simply measures
the difference between a perfect prediction (1 for
winning team) and the actual prediction. A prediction
with lower absolute error is more accurate.
2. Quadratic Score = 100 − 400 × Prob Lose².
Quadratic score is the scoring function that is used in
the ProbabilityFootball contest. It is a linear
transformation of the squared error, Prob Lose², which is one of
the most widely used metrics in evaluating forecasting
accuracy. Quadratic score can be negative. A prediction
with higher quadratic score is more accurate.
3. Logarithmic Score = log(Prob Win),
where Prob Win is the probability assigned to the
eventual winning team. The logarithmic score, like
the quadratic score, is a proper scoring rule. A
prediction with higher (less negative) logarithmic score is
more accurate.
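For reference, the three measures for a single game can be computed from the probability assigned to the eventual winner (our sketch):

    import math

    def game_metrics(prob_win):
        # prob_win: the probability that was assigned to the eventual winner.
        prob_lose = 1.0 - prob_win
        return {"absolute error": prob_lose,
                "quadratic score": 100 - 400 * prob_lose ** 2,
                "logarithmic score": math.log(prob_win)}

    print(game_metrics(0.7))
    # ~{'absolute error': 0.3, 'quadratic score': 64, 'logarithmic score': -0.357}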
5. EMPIRICAL RESULTS
5.1 Performance of Opinion Pools
Depending on how many opinions are used, there can be
numerous different opinion pools. We first examine the
effect of the number of opinions on prediction accuracy by forming
opinion pools with the number of expert opinions varying
from 1 to 960. In the ProbabilityFootball Competition, not
all 1966 registered participants provide their probability
assessments for every game. 960 is the smallest number of
participants for all games. For each game, we sort experts
according to their accumulated quadratic score in previous
weeks. Predictions of the best performing n participants are
picked to form an opinion pool with n experts.
Figure 1 shows the prediction accuracy of LinOP and
LogOP in terms of mean values of the three performance
measures across all 210 games. We can see the following trends
in the figure.
1. Unweighted opinion pools and performance-weighted
opinion pools have similar levels of prediction
accuracy, especially for LinOP.
2. For LinOP, increasing the number of experts in
general increases or maintains the level of prediction
accuracy. When there are more than 200 experts, the
prediction accuracy of LinOP is stable regarding the
number of experts.
3. LogOP seems more accurate than LinOP in terms of
mean absolute error. But, using all other performance
measures, LinOP outperforms LogOP.
4. For LogOP, increasing the number of experts increases the prediction accuracy at the beginning. But the curves (including the points with all experts) for mean quadratic score and mean logarithmic score have slight bell shapes, which represent a decrease in prediction accuracy when the number of experts is very large. The curves for mean absolute error, on the other hand, show a consistent increase in accuracy.
The first and second trends above imply that, when using LinOP, the simplest approach with good prediction accuracy is to average the opinions of all experts. Weighting does not seem to improve performance, and selecting experts according to past performance does not help either. It is a very interesting observation that even though many participants of the ProbabilityFootball contest do not provide accurate individual predictions (they have negative quadratic scores in the contest), including their opinions in the opinion pool still increases the prediction accuracy. One explanation of this phenomenon could be that biases of individual judgment can offset each other when opinions are diverse, which makes the pooled prediction more accurate.
The third trend presents an apparent contradiction: the relative prediction accuracy of LogOP and LinOP flips when different accuracy measures are used. To investigate this disagreement, we plot the absolute error of Log-All-u and Lin-All-u for each game in Figure 2.
Figure 1: Prediction Accuracy of Opinion Pools. (a) Mean Absolute Error, (b) Mean Quadratic Score, (c) Mean Logarithmic Score, each plotted against the number of expert opinions (1 to 960, and All) for unweighted and weighted linear and logarithmic pools, with Lin-All-u, Lin-All-w, Log-All-u, and Log-All-w marked at All.

When the absolute error of an opinion
pool for a game is less than 0.5, it means that the team
favored by the opinion pool wins the game. If it is greater than
0.5, the underdog wins. Compared with Lin-All-u, Log-All-u has lower absolute error when the error is below 0.5, and greater absolute error when it is above 0.5, which indicates that the predictions of Log-All-u are bolder, that is, closer to 0 or 1, than those of Lin-All-u. This is due to the nature of linear
and logarithmic aggregating functions. Because quadratic
score and logarithmic score penalize bold predictions that
are wrong, LogOP is less accurate when measured in these
terms.
Similar reasoning accounts for the fourth trend. When there are more than 500 experts, increasing the number of experts used in LogOP improves the prediction accuracy measured by absolute error, but worsens the accuracy measured by the other two metrics. Examining expert opinions, we find that participants who rank lower offer extreme predictions (0 or 1) more frequently than those ranking high on the list. When we increase the number of experts in an opinion pool, we are incorporating more extreme predictions into it. The resulting LogOP is bolder, and hence has lower mean quadratic score and mean logarithmic score.
5.2 Comparison of Information Markets and
Opinion Pools
Through the first screening of various opinion pools, we
select Lin-All-u, Log-All-u, Log-All-w, and Log-200-u to
compare with predictions from information markets. Lin-All-u, as shown in Figure 1, can represent what LinOP can achieve. However, the performance of LogOP is not consistent when evaluated using different metrics: Log-All-u and Log-All-w offer either the best or the worst predictions, while Log-200-u, the LogOP with the 200 top-performing experts, provides more stable predictions. We use all three to represent the performance of LogOP in our later comparison.
If a prediction of the probability that a team will win a
game, either from an opinion pool or an information
market, is higher than 0.5, we say that the team is the predicted
favorite for the game. Table 2 presents the number and
percentage of games that predicted favorites actually win, out
of a total of 210 games. All four opinion pools correctly
predict a similar number and percentage of games as NF and
TS. Since NF, TS, and the four opinion pools form their predictions using information available at noon of the game day, information markets and opinion pools have comparable potential at the same time point.
Table 2: Number and Percentage of Games that Predicted Favorites Win
NF TS Lin-All-u Log-All-u Log-All-w Log-200-u
Number 142 137 144 144 143 141
Percentage 67.62% 65.24% 68.57% 68.57% 68.10% 67.14%
Table 3: Mean of Prediction Accuracy Measures
           Absolute Error    Quadratic Score    Logarithmic Score
NF         0.4253 (0.0121)   15.4352 (4.6072)   -0.6136 (0.0258)
TS         0.4275 (0.0118)   15.2739 (4.3982)   -0.6121 (0.0241)
Lin-All-u  0.4292 (0.0126)   13.0525 (4.8088)   -0.6260 (0.0268)
Log-All-u  0.4024 (0.0173)   10.0099 (6.6594)   -0.6546 (0.0418)
Log-All-w  0.4059 (0.0168)   10.4491 (6.4440)   -0.6497 (0.0398)
Log-200-u  0.4266 (0.0133)   12.3868 (5.0764)   -0.6319 (0.0295)
*Numbers in parentheses are standard errors.
*Best value for each metric is shown in bold.
Figure 2: Absolute Error: Lin-All-u vs. Log-All-u (a per-game scatter of the two pools' absolute errors against the 45-degree line).
We then take a closer look at prediction accuracy of
information markets and opinion pools using the three
performance measures. Table 3 displays mean values of these
measures over 210 games. Numbers in parentheses are
standard errors, which estimate the standard deviation of the
mean. To account for the skewness of the distributions, we also report median values of the accuracy measures in Table 4. Judged by the mean values of the accuracy measures in Table 3, all methods have similar accuracy levels, with NF and TS slightly better than the opinion pools. However, the median values of the accuracy measures indicate that the Log-All-u and Log-All-w opinion pools are more accurate than all other predictions.
We employ the randomization test [32] to study whether
the differences in prediction accuracy presented in Table 3
and Table 4 are statistically significant. The basic idea of the randomization test is that, by randomly swapping the predictions of two methods numerous times, an empirical distribution for the difference in prediction accuracy can be constructed. Using this empirical distribution, we can then evaluate at what confidence level the observed difference reflects a real difference. For example, the mean
absolute error of NF is higher than that of Log-All-u by
0.0229, as shown in Table 3. To test whether this
difference is statistically significant, we shuffle predictions from
two methods, randomly label half of predictions as NF and
the other half as Log-All-u, and compute the difference of
mean absolute error of the newly formed NF and Log-All-u
data. The above procedure is repeated 10,000 times. The 10,000 differences of mean absolute error result in an empirical distribution of the difference. Comparing our observed
difference, 0.0229, with this distribution, we find that the
observed difference is greater than 75.37% of the empirical
differences. This leads us to conclude that the difference of
mean absolute error between NF and Log-All-u is not
statistically significant, if we choose the level of significance to
be 0.05.
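The procedure can be summarized in a short sketch (ours; names illustrative). Following the description above, the per-game values of the two methods are pooled, randomly relabeled, and the difference of means recomputed 10,000 times; the returned fraction corresponds to the confidence levels reported in Tables 5 and 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomization_test(errors_a, errors_b, n_shuffles=10_000):
    """errors_a, errors_b: per-game accuracy values (numpy arrays) of two
    methods.  Pools both sets, randomly labels half of them as method A,
    recomputes the difference of means, and returns the fraction of
    shuffled differences smaller than the observed one."""
    observed = abs(errors_a.mean() - errors_b.mean())
    pooled = np.concatenate([errors_a, errors_b])
    count = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        half = len(pooled) // 2
        diff = abs(pooled[:half].mean() - pooled[half:].mean())
        if diff < observed:
            count += 1
    return count / n_shuffles
```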
Table 5 and Table 6 show the results of the randomization test for mean and median differences, respectively. Each cell of the table is for two different prediction methods, represented by the name of the row and the name of the column. The first line of each cell gives the results for absolute error; the second and third lines are dedicated to quadratic score and logarithmic score, respectively. We can see that, in terms of mean values of accuracy measures, the differences between all methods are not statistically significant to any reasonable degree. When it comes to median values of prediction accuracy, Log-All-u outperforms Lin-All-u at a high confidence level.
Table 4: Median of Prediction Accuracy Measures
Absolute Error Quadratic Score Logarithmic Score
NF 0.3800 42.2400 -0.4780
TS 0.4000 36.0000 -0.5108
Lin-All-u 0.3639 36.9755 -0.5057
Log-All-u 0.3417 53.2894 -0.4181
Log-All-w 0.3498 51.0486 -0.4305
Log-200-u 0.3996 36.1300 -0.5101
*Best value for each metric is shown in bold.
Table 5: Statistical Confidence of Mean Differences in Prediction Accuracy
TS Lin-All-u Log-All-u Log-All-w Log-200-u
NF
8.92% 22.07% 75.37% 66.47% 7.76%
2.38% 26.60% 50.74% 44.26% 32.24%
2.99% 22.81% 59.35% 56.21% 33.26%
TS
10.13% 77.79% 68.15% 4.35%
27.25% 53.65% 44.90% 28.30%
32.35% 57.89% 60.69% 38.84%
Lin-All-u
82.19% 68.86% 9.75%
28.91% 23.92% 6.81%
44.17% 43.01% 17.36%
Log-All-u
11.14% 72.49%
3.32% 18.89%
5.25% 39.06%
Log-All-w
69.89%
18.30%
30.23%
*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score,
and row 3 for logarithmic score.
These results indicate that differences in prediction
accuracy between information markets and opinion pools are not
statistically significant. This may seem to contradict the result of Servan-Schreiber et al. [36], in which NewsFutures's information markets were shown to provide statistically significantly more accurate predictions than the (unweighted) average of all ProbabilityFootball opinions. The discrepancy arises from the treatment of missing data. Not all 1966 registered ProbabilityFootball participants offer probability assessments for each game. When a participant does not provide a probability assessment for a game, the contest counts their prediction as 0.5. This makes sense in the context of the contest, since 0.5 always yields 0 quadratic score. The ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. includes these 0.5 estimates. Instead, we remove participants from games for which they do not provide assessments, pooling only the available opinions together. Our treatment increases the prediction accuracy of Lin-All-u significantly.
6. CONCLUSIONS
With the fast growth of the Internet, information markets
have recently emerged as an alternative tool for predicting
future events. Previous research has shown that information
markets give as accurate or more accurate predictions than
individual experts and polls. However, information
markets, as an adaptive mechanism to aggregate different
opinions of market participants, have not been calibrated against
many belief aggregation methods. In this paper, we compare
prediction accuracy of information markets with linear and
logarithmic opinion pools (LinOP and LogOP) using
predictions from two markets and 1966 individuals regarding the
outcomes of 210 American football games during the 2003
NFL season. In screening for representative opinion pools to
compare with information markets, we investigate the effect
of weights and number of experts on prediction accuracy.
Our results on both the comparison of information markets
and opinion pools and the relative performance of different
opinion pools are summarized below.
1. At the same time point ahead of the events,
information markets offer as accurate predictions as our
selected opinion pools.
We have selected four opinion pools to represent the prediction accuracy that LinOP and LogOP can achieve. On all four performance metrics, our two information markets obtain prediction accuracy similar to that of the four opinion pools.
Table 6: Statistical Confidence of Median Differences in Prediction Accuracy
TS Lin-All-u Log-All-u Log-All-w Log-200-u
NF
48.85% 47.3% 84.8% 77.9% 65.36%
45.26% 44.55% 85.27% 75.65% 66.75%
44.89% 46.04% 84.43% 77.16% 64.78%
TS
5.18% 94.83% 94.31% 0%
5.37% 92.08% 92.53% 0%
7.41% 95.62% 91.09% 0%
Lin-All-u
95.11% 91.37% 7.31%
96.10% 92.69% 9.84%
95.45% 95.12% 7.79%
Log-All-u
23.47% 95.89%
26.68% 93.85%
22.47% 96.42%
Log-All-w
91.3%
91.4%
90.37%
*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score,
and row 3 for logarithmic score.
*Confidence above 95% is shown in bold.
2. The arithmetic average of all opinions (Lin-All-u) is a
simple, robust, and efficient opinion pool.
Simply averaging across all experts seems to result in better predictions than individual opinions and opinion pools with a few experts. It is quite robust in the sense that even if the included individual predictions are less accurate, averaging over all opinions still gives better (or equally good) predictions.
3. Weighting expert opinions according to past
performance does not seem to significantly improve
prediction accuracy of either LinOP or LogOP.
Comparing performance-weighted opinion pools with
equally weighted opinion pools, we do not observe much
difference in terms of prediction accuracy. Since we only use one performance-weighting method, calculating the weights according to the past accumulated quadratic score that participants earned, this might be due to the weighting method we chose.
4. LogOP yields bolder predictions than LinOP.
LogOP yields predictions that are closer to the
extremes, 0 or 1.
An information market is a self-organizing mechanism for aggregating information and making predictions. Compared with opinion pools, it is less constrained by space and time, and it eliminates the effort of identifying experts and deciding on belief aggregation methods. Moreover, these advantages do not compromise its prediction accuracy to any noticeable extent. On the contrary, information markets can provide real-time predictions, which are hard to achieve by resorting to experts. In the future, we are interested in further exploring:
• Performance comparison of information markets with
other opinion pools and mathematical aggregation
procedures.
In this paper, we only compare information markets
with two simple opinion pools, linear and logarithmic.
It will be meaningful to investigate their relative
prediction accuracy with other belief aggregation methods
such as Bayesian approaches. There are also a number
of theoretical expert algorithms with proven worst-case
performance bounds [10] whose average-case or
practical performance would be instructive to investigate.
• Whether defining expertise more narrowly can improve
predictions of opinion pools.
In our analysis, we broadly treat participants of the
ProbabilityFootball contest as experts in all games. If
we define expertise more narrowly, selecting experts
in certain football teams to predict games involving
these teams, will the predictions of opinion pools be
more accurate?
• The possibility of combining information markets with
other forecasting methods to achieve better prediction
accuracy.
Chen, Fine, and Huberman [11] use an information
market to determine the risk attitude of participants,
and then perform a nonlinear aggregation of their
predictions based on their risk attitudes. The nonlinear
aggregation mechanism is shown to outperform both
the market and the best individual participants. Whether information markets, as an alternative forecasting method, can be used together with other methods to improve our predictions deserves further attention.
7. ACKNOWLEDGMENTS
We thank Brian Galebach, the owner and operator of the
ProbabilitySports and ProbabilityFootball websites, for
providing us with such unique and valuable data. We thank
Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and
pointers.
The authors acknowledge the support of The Penn State
eBusiness Research Center.
8. REFERENCES
[1] http://us.newsfutures.com
[2] http://www.biz.uiowa.edu/iem/
[3] http://www.hsx.com/
[4] http://www.ideosphere.com/fx/
[5] http://www.probabilityfootball.com/
[6] http://www.probabilitysports.com/
[7] http://www.tradesports.com/
[8] A. H. Ashton and R. H. Ashton. Aggregating
subjective forecasts: Some empirical results.
Management Science, 31:1499-1508, 1985.
[9] R. P. Batchelor and P. Dua. Forecaster diversity and
the benefits of combining forecasts. Management
Science, 41:68-75, 1995.
[10] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P.
Helmbold, R. E. Schapire, and M. K. Warmuth. How
to use expert advice. Journal of the ACM,
44(3):427-485, 1997.
[11] K. Chen, L. Fine, and B. Huberman. Predicting the
future. Information System Frontier, 5(1):47-61, 2003.
[12] R. T. Clemen and R. L. Winkler. Combining
probability distributions from experts in risk analysis.
Risk Analysis, 19(2):187-203, 1999.
[13] R. M. Cook. Experts in Uncertainty: Opinion and
Subjective Probability in Science. Oxford University
Press, New York, 1991.
[14] A. L. Delbecq, A. H. Van de Ven, and D. H.
Gustafson. Group Techniques for Program Planners:
A Guide to Nominal Group and Delphi Processes.
Scott Foresman and Company, Glenview, IL, 1975.
[15] E. F. Fama. Efficient capital market: A review of
theory and empirical work. Journal of Finance,
25:383-417, 1970.
[16] R. Forsythe and F. Lundholm. Information
aggregation in an experimental market. Econometrica,
58:309-47, 1990.
[17] R. Forsythe, F. Nelson, G. R. Neumann, and
J. Wright. Forecasting elections: A market alternative
to polls. In T. R. Palfrey, editor, Contemporary
Laboratory Experiments in Political Economy, pages
69-111. University of Michigan Press, Ann Arbor, MI,
1991.
[18] R. Forsythe, F. Nelson, G. R. Neumann, and
J. Wright. Anatomy of an experimental political stock
market. American Economic Review, 82(5):1142-1161,
1992.
[19] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes,
expectations, and actions: A survey on price formation
in election stock markets. Journal of Economic
Behavior and Organization, 39:83-110, 1999.
[20] S. French. Group consensus probability distributions:
a critical survey. Bayesian Statistics, 2:183-202, 1985.
[21] C. Genest. A conflict between two axioms for
combining subjective distributions. Journal of the
Royal Statistical Society, 46(3):403-405, 1984.
[22] C. Genest. Pooling operators with the marginalization
property. Canadian Journal of Statistics,
12(2):153-163, 1984.
[23] C. Genest, K. J. McConway, and M. J. Schervish.
Characterization of externally Bayesian pooling
operators. Annals of Statistics, 14(2):487-501, 1986.
[24] C. Genest and J. V. Zidek. Combining probability
distributions: A critique and an annotated
bibliography. Statistical Science, 1(1):114-148, 1986.
[25] S. J. Grossman. An introduction to the theory of
rational expectations under asymmetric information.
Review of Economic Studies, 48(4):541-559, 1981.
[26] F. A. Hayek. The use of knowledge in society.
American Economic Review, 35(4):519-530, 1945.
[27] J. C. Jackwerth and M. Rubinstein. Recovering
probability distribution from options prices. Journal
of Finance, 51(5):1611-1631, 1996.
[28] H. A. Linstone and M. Turoff. The Delphi Method:
Techniques and Applications. Addison-Wesley,
Reading, MA, 1975.
[29] P. A. Morris. Decision analysis expert use.
Management Science, 20(9):1233-1241, 1974.
[30] P. A. Morris. Combining expert judgments: A
bayesian approach. Management Science,
23(7):679-693, 1977.
[31] P. A. Morris. An axiomatic approach to expert
resolution. Management Science, 29(1):24-32, 1983.
[32] E. W. Noreen. Computer-Intensive Methods for
Testing Hypotheses: An Introduction. Wiley and Sons,
Inc., New York, 1989.
[33] D. M. Pennock, S. Lawrence, C. L. Giles, and F. A.
Nielsen. The real power of artificial markets. Science,
291:987-988, February 2002.
[34] D. M. Pennock, S. Lawrence, F. A. Nielsen, and C. L.
Giles. Extracting collective probabilistic forecasts from
web games. In Proceedings of the 7th ACM SIGKDD
International Conference on Knowledge Discovery and
Data Mining, pages 174-183, San Francisco, CA, 2001.
[35] C. Plott and S. Sunder. Rational expectations and the
aggregation of diverse information in laboratory
security markets. Econometrica, 56:1085-118, 1988.
[36] E. Servan-Schreiber, J. Wolfers, D. M. Pennock, and
B. Galebach. Prediction markets: Does money
matter? Electronic Markets, 14(3):243-251, 2004.
[37] M. Spann and B. Skiera. Internet-based virtual stock
markets for business forecasting. Management Science,
49(10):1310-1326, 2003.
[38] M. West. Bayesian aggregation. Journal of the Royal
Statistical Society. Series A. General, 147(4):600-607,
1984.
[39] R. L. Winkler. The consensus of subjective probability
distributions. Management Science, 15(2):B61-B75,
1968.
[40] J. Wolfers and E. Zitzewitz. Prediction markets.
Journal of Economic Perspectives, 18(2):107-126,
2004.
prediction accuracy;pooled prediction;future event;expertise;price;contract;information market;market probability;opinion pool;expert aggregation;forecast;expert opinion

train_J-50

Communication Complexity of Common Voting Rules∗

We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin.

1. INTRODUCTION
One key factor in the practicality of any preference
aggregation rule is its communication burden. To successfully
aggregate the agents" preferences, it is usually not necessary
for all the agents to report all of their preference information.
Clever protocols that elicit the agents" preferences partially
and sequentially have the potential to dramatically reduce
the required communication. This has at least the following
advantages:
• It can make preference aggregation feasible in settings
where the total amount of preference information is
too large to communicate.
• Even when communicating all the preference
information is feasible, reducing the communication
requirements lessens the burden placed on the agents. This is
especially true when the agents, rather than knowing
all their preferences in advance, need to invest effort
(such as computation or information gathering) to
determine their preferences [16].
• It preserves (some of) the agents" privacy.
Most of the work on reducing the communication burden
in preference aggregation has focused on resource allocation
settings such as combinatorial auctions, in which an
auctioneer auctions off a number of (possibly distinct) items
in a single event. Because in a combinatorial auction,
bidders can have separate valuations for each of an
exponential number of possible bundles of items, this is a setting
in which reducing the communication burden is especially
crucial. This can be accomplished by supplementing the
auctioneer with an elicitor that incrementally elicits parts
of the bidders" preferences on an as-needed basis, based on
what the bidders have revealed about their preferences so
far, as suggested by Conen and Sandholm [5]. For example,
the elicitor can ask for a bidder's value for a specific bundle
(value queries), which of two bundles the bidder prefers
(order queries), which bundle he ranks kth or what the rank of
a given bundle is (rank queries), which bundle he would
purchase given a particular vector of prices (demand queries),
etc.-until (at least) the final allocation can be determined.
Experimentally, this yields drastic savings in preference
revelation [11]. Furthermore, if the agents' valuation functions
are drawn from certain natural subclasses, the elicitation
problem can be solved using only polynomially many queries
even in the worst case [23, 4, 13, 18, 14]. For a review
of preference elicitation in combinatorial auctions, see [17].
Ascending combinatorial auctions are a well-known special
form of preference elicitation, where the elicitor asks demand
queries with increasing prices [15, 21, 1, 9]. Finally, resource
allocation problems have also been studied from a
communication complexity viewpoint, thereby deriving lower bounds
on the required communication. For example, Nisan and
Segal show that exponential communication is required even
to obtain a surplus greater than that obtained by
auctioning off all objects as a single bundle [14]. Segal also studies
social choice rules in general, and shows that for a large
class of social choice rules, supporting budget sets must be
revealed such that if every agent prefers the same outcome
in her budget set, this proves the optimality of that
outcome. Segal then uses this characterization to prove bounds
on the communication required in resource allocation as well
as matching settings [20].
In this paper, we will focus on the communication
requirements of a generally applicable subclass of social choice
rules, commonly known as voting rules. In a voting setting,
there is a set of candidate outcomes over which the voters
express their preferences by submitting a vote (typically,
a ranking of the candidates), and the winner (that is, the
chosen outcome) is determined based on these votes. The
communication required by voting rules can be large either
because the number of voters is large (such as, for example,
in national elections), or because the number of candidates
is large (for example, the agents can vote over allocations of
a number of resources), or both. Prior work [8] has studied
elicitation in voting, studying how computationally hard it
is to decide whether a winner can be determined with the
information elicited so far, as well as how hard it is to find the
optimal sequence of queries given perfect suspicions about
the voters' preferences. In addition, that paper discusses
strategic (game-theoretic) issues introduced by elicitation.
In contrast, in this paper, we are concerned with the
worst-case number of bits that must be communicated to
execute a given voting rule, when nothing is known in advance
about the voters' preferences. We determine the
communication complexity of the common voting rules. For each rule,
we first give an upper bound on the (deterministic)
communication complexity by providing a communication protocol
for it and analyzing how many bits need to be transmitted
in this protocol. (Segal's results [20] do not apply to most voting rules because most voting rules are not intersection-monotonic (or even monotonic).1) For many of the voting
rules under study, it turns out that one cannot do better
than simply letting each voter immediately communicate all
her (potentially relevant) information. However, for some
rules (such as plurality with runoff, STV and cup) there is
a straightforward multistage communication protocol that,
with some analysis, can be shown to significantly outperform
the immediate communication of all (potentially relevant)
information. Finally, for some rules (such as the Condorcet
and Bucklin rules), we need to introduce a more complex
communication protocol to achieve the best possible upper
1
For two of the rules that we study that are intersection-monotonic, namely the approval and Condorcet rules, Segal's results can in fact be used to give alternative proofs of our lower bounds. We only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies.
However, we hope to also include the alternative proofs in a
later version.
bound. After obtaining the upper bounds, we show that
they are tight by giving matching lower bounds on (even the
nondeterministic) communication complexity of each voting
rule. There are two exceptions: STV, for which our upper
and lower bounds are apart by a factor log m; and maximin,
for which our best deterministic upper bound is also a factor
log m above the (nondeterministic) lower bound, although
we give a nondeterministic upper bound that matches the
lower bound.
2. REVIEW OF VOTING RULES
In this section, we review the common voting rules that
we study in this paper. A voting rule2
is a function
mapping a vector of the n voters' votes (i.e., preferences over
candidates) to one of the m candidates (the winner) in the
candidate set C. In some cases (such as the Condorcet rule),
the rule may also declare that no winner exists. We do not
concern ourselves with what happens in case of a tie
between candidates (our lower bounds hold regardless of how
ties are broken, and the communication protocols used for
our upper bounds do not attempt to break the ties). All of
the rules that we study are rank-based rules, which means
that a vote is defined as an ordering of the candidates (with
the exception of the plurality rule, for which a vote is a
single candidate, and the approval rule, for which a vote is a
subset of the candidates).
We will consider the following voting rules. (For rules that
define a score, the candidate with the highest score wins.)
• scoring rules. Let α = (α1, . . . , αm) be a vector of integers such that α1 ≥ α2 ≥ . . . ≥ αm. For each voter, a candidate receives α1 points if it is ranked first by the voter, α2 if it is ranked second, etc. The score sα of a candidate is the total number of points the candidate receives. The Borda rule is the scoring rule with α = (m − 1, m − 2, . . . , 0). The plurality rule is the scoring rule with α = (1, 0, . . . , 0).
• single transferable vote (STV). The rule proceeds through
a series of m − 1 rounds. In each round, the candidate with
the lowest plurality score (that is, the least number of voters
ranking it first among the remaining candidates) is
eliminated (and each of the votes for that candidate transfer
to the next remaining candidate in the order given in that
vote). The winner is the last remaining candidate.
• plurality with run-off. In this rule, a first round
eliminates all candidates except the two with the highest plurality
scores. Votes are transferred to these as in the STV rule,
and a second round determines the winner from these two.
• approval. Each voter labels each candidate as either
approved or disapproved. The candidate approved by the
greatest number of voters wins.
• Condorcet. For any two candidates i and j, let N(i, j) be
the number of voters who prefer i to j. If there is a candidate
i that is preferred to any other candidate by a majority of
the voters (that is, N(i, j) > N(j, i) for all j ≠ i, so that i wins every pairwise election), then candidate i wins.
2
The term voting protocol is often used to describe the same
concept, but we seek to draw a sharp distinction between
the rule mapping preferences to outcomes, and the
communication/elicitation protocol used to implement this rule.
• maximin (aka. Simpson). The maximin score of i is s(i) = min_{j≠i} N(i, j), that is, i's worst performance in a pairwise election. The candidate with the highest maximin score wins.
• Copeland. For any two distinct candidates i and j, let C(i, j) = 1 if N(i, j) > N(j, i), C(i, j) = 1/2 if N(i, j) = N(j, i), and C(i, j) = 0 if N(i, j) < N(j, i). The Copeland score of candidate i is s(i) = Σ_{j≠i} C(i, j).
• cup (sequential binary comparisons). The cup rule is
defined by a balanced3
binary tree T with one leaf per
candidate, and an assignment of candidates to leaves (each leaf
gets one candidate). Each non-leaf node is assigned the
winner of the pairwise election of the node's children; the
candidate assigned to the root wins.
• Bucklin. For any candidate i and integer l, let B(i, l)
be the number of voters that rank candidate i among the
top l candidates. The winner is arg min_i (min{l : B(i, l) > n/2}). That is, if we say that a voter approves her top
l candidates, then we repeatedly increase l by 1 until some
candidate is approved by more than half the voters, and this
candidate is the winner.
• ranked pairs. This rule determines an order on all the candidates, and the winner is the candidate at the top of this order. Sort all ordered pairs of candidates (i, j) by N(i, j), the number of voters who prefer i to j. Starting with the pair (i, j) with the highest N(i, j), we lock in the result of their pairwise election (i ≻ j). Then, we move to the next pair, and we lock the result of their pairwise election. We continue to lock every pairwise result that does not contradict the ordering established so far. (A short illustrative code sketch of several of these rules follows.)
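To make the rank-based definitions concrete, the following sketch (ours, not part of the paper) computes pairwise election counts and two of the scores above from a list of votes, each vote a ranking from most to least preferred; ties are not broken, consistent with the paper's convention.

```python
def pairwise_wins(votes, candidates):
    """N[(i, j)]: number of voters preferring candidate i to candidate j."""
    return {(i, j): sum(1 for v in votes if v.index(i) < v.index(j))
            for i in candidates for j in candidates if i != j}

def borda_scores(votes, candidates):
    m = len(candidates)
    # rank 0 earns m-1 points, rank 1 earns m-2 points, and so on
    return {c: sum(m - 1 - v.index(c) for v in votes) for c in candidates}

def maximin_scores(votes, candidates):
    N = pairwise_wins(votes, candidates)
    return {i: min(N[(i, j)] for j in candidates if j != i) for i in candidates}

# Example: three voters over candidates a, b, c.
votes = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
cands = ["a", "b", "c"]
print(borda_scores(votes, cands))    # {'a': 5, 'b': 3, 'c': 1}
print(maximin_scores(votes, cands))  # {'a': 2, 'b': 1, 'c': 0}
```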
We emphasize that these definitions of voting rules do not
concern themselves with how the votes are elicited from the
voters; all the voting rules, including those that are
suggestively defined in terms of rounds, are in actuality just
functions mapping the vector of all the voters" votes to a
winner. Nevertheless, there are always many different ways
of eliciting the votes (or the relevant parts thereof) from the
voters. For example, in the plurality with runoff rule, one
way of eliciting the votes is to ask every voter to declare her
entire ordering of the candidates up front. Alternatively, we
can first ask every voter to declare only her most preferred
candidate; then, we will know the two candidates in the
runoff, and we can ask every voter which of these two
candidates she prefers. Thus, we distinguish between the voting
rule (the mapping from vectors of votes to outcomes) and the
communication protocol (which determines how the relevant
parts of the votes are actually elicited from the voters). The
goal of this paper is to give efficient communication
protocols for the voting rules just defined, and to prove that there
do not exist any more efficient communication protocols.
It is interesting to note that the choice of the
communication protocol may affect the strategic behavior of the voters.
Multistage communication protocols may reveal to the
voters some information about how the other voters are voting
(for example, in the two-stage communication protocol just
given for plurality with runoff, in the second stage voters
3
Balanced means that the difference in depth between two
leaves can be at most one.
will know which two candidates have the highest plurality
scores). In general, when the voters receive such
information, it may give them incentives to vote differently than
they would have in a single-stage communication protocol
in which all voters declare their entire votes simultaneously.
Of course, even the single-stage communication protocol is
not strategy-proof4
for any reasonable voting rule, by the
Gibbard-Satterthwaite theorem [10, 19]. However, this does
not mean that we should not be concerned about adding
even more opportunities for strategic voting. In fact, many
of the communication protocols introduced in this paper do
introduce additional opportunities for strategic voting, but
we do not have the space to discuss this here. (In prior
work [8], we do give an example where an elicitation
protocol for the approval voting rule introduces strategic voting,
and give principles for designing elicitation protocols that
do not introduce strategic problems.)
Now that we have reviewed voting rules, we move on to a
brief review of communication complexity theory.
3. REVIEW OF SOME COMMUNICATION
COMPLEXITY THEORY
In this section, we review the basic model of a
communication problem and the lower-bounding technique of
constructing a fooling set. (The basic model of a communication
problem is due to Yao [22]; for an overview see Kushilevitz
and Nisan [12].)
Each player 1 ≤ i ≤ n knows (only) input xi. Together,
they seek to compute f(x1, x2, . . . , xn). In a deterministic
protocol for computing f, in each stage, one of the players
announces (to all other players) a bit of information based
on her own input and the bits announced so far.
Eventually, the communication terminates and all players know
f(x1, x2, . . . , xn). The goal is to minimize the worst-case
(over all input vectors) number of bits sent. The
deterministic communication complexity of a problem is the
worst-case number of bits sent in the best (correct) deterministic
protocol for it. In a nondeterministic protocol, the next
bit to be sent can be chosen nondeterministically. For the
purposes of this paper, we will consider a nondeterministic
protocol correct if for every input vector, there is some
sequence of nondeterministic choices the players can make so
that the players know the value of f when the protocol
terminates. The nondeterministic communication complexity
of a problem is the worst-case number of bits sent in the
best (correct) nondeterministic protocol for it.
We are now ready to give the definition of a fooling set.
Definition 1. A fooling set is a set of input vectors {(x^1_1, x^1_2, . . . , x^1_n), (x^2_1, x^2_2, . . . , x^2_n), . . . , (x^k_1, x^k_2, . . . , x^k_n)} such that for any i, f(x^i_1, x^i_2, . . . , x^i_n) = f0 for some constant f0, but for any i ≠ j, there exists some vector (r1, r2, . . . , rn) ∈ {i, j}^n such that f(x^{r1}_1, x^{r2}_2, . . . , x^{rn}_n) ≠ f0. (That is, we can mix the inputs from the two input vectors to obtain a vector with a different function value.)
It is known that if a fooling set of size k exists, then log k
is a lower bound on the communication complexity (even
the nondeterministic communication complexity) [12].
4
A strategy-proof protocol is one in which it is in the players'
best interest to report their preferences truthfully.
For the purposes of this paper, f is the voting rule that maps the votes to the winning candidate, and x_i is voter i's vote (the information that the voting rule would require from the voter if there were no possibility of multistage communication, i.e., the most preferred candidate (plurality), the approved candidates (approval), or the ranking of all the candidates (all other rules)). However, when we derive our lower bounds, f will only signify whether a distinguished candidate a wins. (That is, f is 1 if a wins, and 0 otherwise.) This will strengthen our lower bound results (because it implies that even finding out whether one given candidate wins is hard).5 Thus, a fooling set in our context is a set of vectors of votes so that a wins (does not win) with each of them; but for any two different vote vectors in the set, there is a way of taking some voters' votes from the first vector and the others' votes from the second vector, so that a does not win (wins).
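On small instances, the fooling-set condition can be verified by brute force. The sketch below (ours; all names illustrative) checks Definition 1 directly for a rule f over vote vectors:

```python
from itertools import product

def is_fooling_set(f, vote_vectors):
    """f maps a tuple of votes to an outcome; vote_vectors is a list of
    equal-length tuples of votes.  Checks Definition 1 by brute force."""
    f0 = f(tuple(vote_vectors[0]))
    if any(f(tuple(v)) != f0 for v in vote_vectors):
        return False                       # f must be constant on the set
    n = len(vote_vectors[0])
    for a in range(len(vote_vectors)):
        for b in range(a + 1, len(vote_vectors)):
            # need some mix of vectors a and b on which f differs from f0
            mixes = (tuple(vote_vectors[r][k] for k, r in enumerate(choice))
                     for choice in product((a, b), repeat=n))
            if not any(f(mix) != f0 for mix in mixes):
                return False
    return True
```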
To simplify the proofs of our lower bounds, we make
assumptions such as the number of voters n is odd in many
of these proofs. Therefore, technically, we do not prove the
lower bound for (number of candidates, number of voters)
pairs (m, n) that do not satisfy these assumptions (for
example, if we make the above assumption, then we technically
do not prove the lower bound for any pair (m, n) in which
n is even). Nevertheless, we always prove the lower bound
for a representative set of (m, n) pairs. For example, for
every one of our lower bounds it is the case that for infinitely
many values of m, there are infinitely many values of n such
that the lower bound is proved for the pair (m, n).
4. RESULTS
We are now ready to present our results. For each voting
rule, we first give a deterministic communication protocol
for determining the winner to establish an upper bound.
Then, we give a lower bound on the nondeterministic
communication complexity (even on the complexity of deciding
whether a given candidate wins, which is an easier
question). The lower bounds match the upper bounds in all but
two cases: the STV rule (upper bound O(n(log m)^2); lower bound Ω(n log m)) and the maximin rule (upper bound O(nm log m), although we do give a nondeterministic protocol that is O(nm); lower bound Ω(nm)).
When we discuss a voting rule in which the voters rank the
candidates, we will represent a ranking in which candidate c1 is ranked first, c2 is ranked second, etc., as c1 ≻ c2 ≻ . . . ≻ cm.
5
One possible concern is that in the case where ties are
possible, it may require much communication to verify whether
a specific candidate a is among the winners, but little
communication to produce one of the winners. However, all the
fooling sets we use in the proofs have the property that if
a wins, then a is the unique winner. Therefore, in these
fooling sets, if one knows any one of the winners, then one
knows whether a is a winner. Thus, computing one of the
winners requires at least as much communication as
verifying whether a is among the winners. In general, when a
communication problem allows multiple correct answers for
a given vector of inputs, this is known as computing a
relation rather than a function [12]. However, as per the above,
we can restrict our attention to a subset of the domain where
the voting rule truly is a (single-valued) function, and hence
lower bounding techniques for functions rather than
relations will suffice.
Sometimes, for the purposes of a proof, the internal ranking of a subset of the candidates does not matter, and in this case we will not specify it. For example, if S = {c2, c3}, then c1 ≻ S ≻ c4 indicates that either the ranking c1 ≻ c2 ≻ c3 ≻ c4 or the ranking c1 ≻ c3 ≻ c2 ≻ c4 can be used for the proof.
We first give a universal upper bound.
Theorem 1. The deterministic communication
complexity of any rank-based voting rule is O(nm log m).
Proof. This bound is achieved by simply having
everyone communicate their entire ordering of the candidates
(indicating the rank of an individual candidate requires only
O(log m) bits, so each of the n voters can simply indicate
the rank of each of the m candidates).
The next lemma will be useful in a few of our proofs.
Lemma 1. If m divides n, then log(n!)−m log((n/m)!) ≥
n(log m − 1)/2.
Proof. If n/m = 1 (that is, n = m), then this expression simplifies to log(n!). We have log(n!) = Σ_{i=1}^{n} log i ≥ ∫_{1}^{n} log x dx, which, using integration by parts, is equal to n log n − (n − 1) > n(log n − 1) = n(log m − 1) > n(log m − 1)/2. So, we can assume that n/m ≥ 2. We observe that log(n!) = Σ_{i=1}^{n} log i = Σ_{i=0}^{n/m−1} Σ_{j=1}^{m} log(im + j) ≥ Σ_{i=1}^{n/m−1} Σ_{j=1}^{m} log(im) = m Σ_{i=1}^{n/m−1} log(im), and that m log((n/m)!) = m Σ_{i=1}^{n/m} log i. Therefore, log(n!) − m log((n/m)!) ≥ m Σ_{i=1}^{n/m−1} log(im) − m Σ_{i=1}^{n/m} log i = m((Σ_{i=1}^{n/m−1} log(im/i)) − log(n/m)) = m((n/m − 1) log m − log n + log m) = n log m − m log n. Now, using the fact that n/m ≥ 2, we have m log n = n(m/n) log(m(n/m)) = n(m/n)(log m + log(n/m)) ≤ n(1/2)(log m + log 2). Thus, log(n!) − m log((n/m)!) ≥ n log m − m log n ≥ n log m − n(1/2)(log m + log 2) = n(log m − 1)/2.
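A quick numerical sanity check of the lemma (ours), computing the base-2 log-factorials via lgamma:

```python
import math

def lhs(n, m):
    """log2(n!) - m * log2((n/m)!), computed from natural-log factorials."""
    ln2 = math.log(2)
    return (math.lgamma(n + 1) - m * math.lgamma(n // m + 1)) / ln2

for m in (2, 4, 8):
    for k in (1, 2, 5, 10):          # n = k * m, so m divides n
        n = k * m
        assert lhs(n, m) >= n * (math.log2(m) - 1) / 2
print("Lemma 1 bound holds on all tested (n, m).")
```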
Theorem 2. The deterministic communication
complexity of the plurality rule is O(n log m).
Proof. Indicating one of the candidates requires only
O(log m) bits, so each voter can simply indicate her most
preferred candidate.
Theorem 3. The nondeterministic communication
complexity of the plurality rule is Ω(n log m) (even to decide
whether a given candidate a wins).
Proof. We will exhibit a fooling set of size n′!/((n′/m)!)^m where n′ = (n − 1)/2. Taking the logarithm of this gives log(n′!) − m log((n′/m)!), so the result follows from Lemma 1. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n′, voters 2i − 1 and 2i vote the same.
• Every candidate receives equally many votes from the first 2n′ = n − 1 voters.
• The last voter (voter n) votes for a.
Candidate a wins with each one of these vote vectors because of the extra vote for a from the last voter. Given that m divides n′, let us see how many vote vectors there are in the fooling set. We need to distribute n′ voter pairs evenly over m candidates, for a total of n′/m voter pairs per candidate; and there are precisely n′!/((n′/m)!)^m ways of doing this.6 All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that the two vote vectors disagree on the candidate for which voters 2i − 1 and 2i vote. Without loss of generality, suppose that in the first vote vector, these voters do not vote for a (but for some other candidate, b, instead). Now, construct a new vote vector by taking votes 2i − 1 and 2i from the first vote vector, and the remaining votes from the second vote vector. Then, b receives 2n′/m + 2 votes in this newly constructed vote vector, whereas a receives at most 2n′/m + 1 votes. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 4. The deterministic communication
complexity of the plurality with runoff rule is O(n log m).
Proof. First, let every voter indicate her most preferred
candidate using log m bits. After this, the two candidates
in the runoff are known, and each voter can indicate which
one she prefers using a single additional bit.
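A sketch of this two-stage protocol (ours; ties are left unbroken, as in the paper, and at least two distinct favorites are assumed), which also counts the bits communicated:

```python
import math
from collections import Counter

def runoff_protocol(votes, candidates):
    """votes: one full ranking per voter (most preferred first).
    Returns (winner, total bits communicated by all voters)."""
    m, n = len(candidates), len(votes)
    bits = n * math.ceil(math.log2(m))       # stage 1: each voter's favorite
    counts = Counter(v[0] for v in votes)
    (a, _), (b, _) = counts.most_common(2)   # the two runoff candidates
    bits += n                                # stage 2: one bit per voter
    prefers_a = sum(1 for v in votes if v.index(a) < v.index(b))
    winner = a if prefers_a > n - prefers_a else b
    return winner, bits
```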
Theorem 5. The nondeterministic communication
complexity of the plurality with runoff rule is Ω(n log m) (even
to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size n′!/((n′/m′)!)^{m′} where m′ = m/2 and n′ = (n − 2)/4. Taking the logarithm of this gives log(n′!) − m′ log((n′/m′)!), so the result follows from Lemma 1. Divide the candidates into m′ pairs: (c1, d1), (c2, d2), . . . , (c_{m′}, d_{m′}) where c1 = a and d1 = b. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n′, voters 4i − 3 and 4i − 2 rank the candidates c_{k(i)} ≻ a ≻ C − {a, c_{k(i)}}, for some candidate c_{k(i)}. (If c_{k(i)} = a, then the vote is simply a ≻ C − {a}.)
• For any 1 ≤ i ≤ n′, voters 4i − 1 and 4i rank the candidates d_{k(i)} ≻ a ≻ C − {a, d_{k(i)}} (that is, their most preferred candidate is the candidate that is paired with the candidate that the previous two voters vote for).
• Every candidate is ranked at the top of equally many of the first 4n′ = n − 2 votes.
• Voter 4n′ + 1 = n − 1 ranks the candidates a ≻ C − {a}.
• Voter 4n′ + 2 = n ranks the candidates b ≻ C − {b}.
Candidate a wins with each one of these vote vectors: because of the last two votes, candidates a and b are one vote ahead of all the other candidates and continue to the runoff, and at this point all the votes that had another candidate ranked at the top transfer to a, so that a wins the runoff. Given that m′ divides n′, let us see how many vote vectors there are in the fooling set. We need to distribute n′ groups of four voters evenly over the m′ pairs of candidates, and (as in the proof of Theorem 3) there are n′!/((n′/m′)!)^{m′} ways of doing this. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that c_{k(i)} is not the same in both of these two vote vectors, that is, c^1_{k(i)} (c_{k(i)} in the first vote vector) is not equal to c^2_{k(i)} (c_{k(i)} in the second vote vector). Without loss of generality, suppose c^1_{k(i)} ≠ a. Now, construct a new vote vector by taking votes 4i − 3, 4i − 2, 4i − 1, 4i from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, c^1_{k(i)} and d^1_{k(i)} each receive 4n′/m′ + 2 votes in the first round, whereas a receives at most 4n′/m′ + 1 votes. So, a does not continue to the runoff in the newly constructed vote vector, and hence we have a correct fooling set.
6 An intuitive proof of this is the following. We can count the number of permutations of n′ elements as follows. First, divide the elements into m buckets of size n′/m, so that if x is placed in a lower-indexed bucket than y, then x will be indexed lower in the eventual permutation. Then, decide on the permutation within each bucket (for which there are (n′/m)! choices per bucket). It follows that n′! equals the number of ways to divide n′ elements into m buckets of size n′/m, times ((n′/m)!)^m.
Theorem 6. The nondeterministic communication
complexity of the Borda rule is Ω(nm log m) (even to decide
whether a given candidate a wins).
Proof. We will exhibit a fooling set of size (m′!)^{n′} where m′ = m − 2 and n′ = (n − 2)/4. This will prove the theorem because log(m′!) is Ω(m log m), so that log((m′!)^{n′}) = n′ log(m′!) is Ω(nm log m). For every vector (π_1, π_2, . . . , π_{n′}) consisting of n′ orderings of all candidates other than a and another fixed candidate b (technically, the orderings take the form of a one-to-one function π_i : {1, 2, . . . , m′} → C − {a, b}, with π_i(j) = c indicating that candidate c is the jth in the order represented by π_i), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n′, let voters 4i − 3 and 4i − 2 rank the candidates a ≻ b ≻ π_i(1) ≻ π_i(2) ≻ . . . ≻ π_i(m′).
• For 1 ≤ i ≤ n′, let voters 4i − 1 and 4i rank the candidates π_i(m′) ≻ π_i(m′ − 1) ≻ . . . ≻ π_i(1) ≻ b ≻ a.
• Let voter 4n′ + 1 = n − 1 rank the candidates a ≻ b ≻ π_0(1) ≻ π_0(2) ≻ . . . ≻ π_0(m′) (where π_0 is an arbitrary order of the candidates other than a and b which is the same for every element of the fooling set).
• Let voter 4n′ + 2 = n rank the candidates π_0(m′) ≻ π_0(m′ − 1) ≻ . . . ≻ π_0(1) ≻ a ≻ b.
We observe that this fooling set has size (m′!)^{n′}, and that candidate a wins in each vector of votes in the fooling set. (To see why, we observe that for any 1 ≤ i ≤ n′, votes 4i − 3 and 4i − 2 rank the candidates in the exact opposite way from votes 4i − 1 and 4i, which under the Borda rule means they cancel out; and the last two votes give one more point to a than to any other candidate, besides b, who gets two fewer points than a.) All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π^1_1, π^1_2, . . . , π^1_{n′}), and let the second vote vector correspond to the vector (π^2_1, π^2_2, . . . , π^2_{n′}). For some i, we must have π^1_i ≠ π^2_i, so that for some candidate c ∉ {a, b}, (π^1_i)^{−1}(c) < (π^2_i)^{−1}(c) (that is, c is ranked higher in π^1_i than in π^2_i). Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. Candidate a's Borda score remains unchanged. However, because c is ranked higher in π^1_i than in π^2_i, c receives at least 2 more points from votes 4i − 3 and 4i − 2 in the newly constructed vote vector than it did in the second vote vector. It follows that c has a higher Borda score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 7. The nondeterministic communication
complexity of the Copeland rule is Ω(nm log m) (even to decide
whether a given candidate a wins).
Proof. We will exhibit a fooling set of size (m′!)^{n′} where m′ = (m − 2)/2 and n′ = (n − 2)/2. This will prove the theorem because log(m′!) is Ω(m log m), so that log((m′!)^{n′}) = n′ log(m′!) is Ω(nm log m). We write the set of candidates as the following disjoint union: C = {a, b} ∪ L ∪ R where L = {l_1, l_2, . . . , l_{m′}} and R = {r_1, r_2, . . . , r_{m′}}. For every vector (π_1, π_2, . . . , π_{n′}) consisting of n′ permutations of the integers 1 through m′ (π_i : {1, 2, . . . , m′} → {1, 2, . . . , m′}), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n′, let voter 2i − 1 rank the candidates a ≻ b ≻ l_{π_i(1)} ≻ r_{π_i(1)} ≻ l_{π_i(2)} ≻ r_{π_i(2)} ≻ . . . ≻ l_{π_i(m′)} ≻ r_{π_i(m′)}.
• For 1 ≤ i ≤ n′, let voter 2i rank the candidates r_{π_i(m′)} ≻ l_{π_i(m′)} ≻ r_{π_i(m′−1)} ≻ l_{π_i(m′−1)} ≻ . . . ≻ r_{π_i(1)} ≻ l_{π_i(1)} ≻ b ≻ a.
• Let voter n − 1 = 2n′ + 1 rank the candidates a ≻ b ≻ l_1 ≻ r_1 ≻ l_2 ≻ r_2 ≻ . . . ≻ l_{m′} ≻ r_{m′}.
• Let voter n = 2n′ + 2 rank the candidates r_{m′} ≻ l_{m′} ≻ r_{m′−1} ≻ l_{m′−1} ≻ . . . ≻ r_1 ≻ l_1 ≻ a ≻ b.
We observe that this fooling set has size (m′!)^{n′}, and that candidate a wins in each vector of votes in the fooling set (every pair of candidates is tied in their pairwise election, with the exception that a defeats b, so that a wins the election by half a point). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π^1_1, π^1_2, . . . , π^1_{n′}), and let the second vote vector correspond to the vector (π^2_1, π^2_2, . . . , π^2_{n′}). For some i, we must have π^1_i ≠ π^2_i, so that for some j ∈ {1, 2, . . . , m′}, we have (π^1_i)^{−1}(j) < (π^2_i)^{−1}(j). Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. Candidate a's Copeland score remains unchanged. Let us consider the score of l_j. We first observe that the rank of l_j in vote 2i − 1 in the newly constructed vote vector is at least 2 higher than it was in the second vote vector, because (π^1_i)^{−1}(j) < (π^2_i)^{−1}(j). Let D^1(l_j) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than l_j in the first vote vector (D^1(l_j) = {c ∈ L ∪ R : l_j ≻^1_{2i−1} c}), and let D^2(l_j) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than l_j in the second vote vector (D^2(l_j) = {c ∈ L ∪ R : l_j ≻^2_{2i−1} c}). Then, it follows that in the newly constructed vote vector, l_j defeats all the candidates in D^1(l_j) − D^2(l_j) in their pairwise elections (because l_j receives an extra vote in each one of these pairwise elections relative to the second vote vector), loses to all the candidates in D^2(l_j) − D^1(l_j) (because l_j loses a vote in each one of these pairwise elections relative to the second vote vector), and ties with everyone else. But |D^1(l_j)| − |D^2(l_j)| ≥ 2, and hence |D^1(l_j) − D^2(l_j)| − |D^2(l_j) − D^1(l_j)| ≥ 2. Hence, in the newly constructed vote vector, l_j has at least two more pairwise wins than pairwise losses, and therefore has at least 1 more point than if l_j had tied all its pairwise elections. Thus, l_j has a higher Copeland score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 8. The nondeterministic communication
complexity of the maximin rule is O(nm).
Proof. The nondeterministic protocol will guess which
candidate w is the winner, and, for each other candidate
c, which candidate o(c) is the candidate against whom c
receives its lowest score in a pairwise election. Then, let
every voter communicate the following:
• for each candidate c ≠ w, whether she prefers c to w;
• for each candidate c ≠ w, whether she prefers c to o(c).
We observe that this requires the communication of 2n(m − 1) bits. If the guesses were correct, then, letting N(d, e) be the number of voters preferring candidate d to candidate e, we should have N(c, o(c)) < N(w, c′) for any c ≠ w and c′ ≠ w, which will prove that w wins the election.
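The verification step of this nondeterministic protocol can be sketched as follows (ours): given guessed w and o(·), the condition reduces to checking that the largest N(c, o(c)) over c ≠ w falls below the smallest N(w, c′).

```python
def verify_maximin_guess(votes, w, o, candidates):
    """Checks N(c, o[c]) < N(w, c') for all c != w and c' != w, where
    N(d, e) counts voters preferring d to e; o maps each candidate
    c != w to the guessed opponent o(c)."""
    def N(d, e):
        return sum(1 for v in votes if v.index(d) < v.index(e))
    others = [c for c in candidates if c != w]
    worst_of_w = min(N(w, c) for c in others)   # w's maximin score
    return all(N(c, o[c]) < worst_of_w for c in others)
```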
Theorem 9. The nondeterministic communication
complexity of the maximin rule is Ω(nm) (even to decide whether
a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n′m′} where m′ = m − 2 and n′ = (n − 1)/4. Let b be a candidate other than a. For every vector (S_1, S_2, . . . , S_{n′}) consisting of n′ subsets S_i ⊆ C − {a, b}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n′, let voters 4i − 3 and 4i − 2 rank the candidates S_i ≻ a ≻ C − (S_i ∪ {a, b}) ≻ b.
• For 1 ≤ i ≤ n′, let voters 4i − 1 and 4i rank the candidates b ≻ C − (S_i ∪ {a, b}) ≻ a ≻ S_i.
• Let voter 4n′ + 1 = n rank the candidates a ≻ b ≻ C − {a, b}.
We observe that this fooling set has size (2^{m′})^{n′} = 2^{n′m′}, and that candidate a wins in each vector of votes in the fooling set (in every one of a's pairwise elections, a is ranked higher than its opponent by 2n′ + 1 = (n + 1)/2 > n/2 votes). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, . . . , S^1_{n′}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, . . . , S^2_{n′}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let c be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is ranked higher than c by only 2n′ − 1 voters, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not rank c higher than a in the second vote vector (because c ∉ S^2_i), voters 4i − 3 and 4i − 2 do rank c higher than a in the first vote vector (because c ∈ S^1_i). Moreover, in every one of b's pairwise elections, b is ranked higher than its opponent by at least 2n′ voters. So, a has a lower maximin score than b, therefore a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 10. The deterministic communication
complexity of the STV rule is O(n(log m)^2).
Proof. Consider the following communication protocol.
Let each voter first announce her most preferred candidate
(O(n log m) communication). In the remaining rounds, we
will keep track of each voter's most preferred candidate
among the remaining candidates, which will be enough to
implement the rule. When candidate c is eliminated, let
each of the voters whose most preferred candidate among the
remaining candidates was c announce their most preferred
candidate among the candidates remaining after c's
elimination. If candidate c was the ith candidate to be eliminated
(that is, there were m − i + 1 candidates remaining before
c's elimination), it follows that at most n/(m − i + 1) voters
had candidate c as their most preferred candidate among
the remaining candidates, and thus the number of bits to be
communicated after the elimination of the ith candidate is
O((n/(m − i + 1)) log m). (Actually, O((n/(m − i + 1)) log(m −
i + 1)) is also correct, but it will not improve the bound.)
Thus, the total communication in this communication
protocol is O(n log m + Σ_{i=1}^{m−1} (n/(m − i + 1)) log m). Of course,
Σ_{i=1}^{m−1} 1/(m − i + 1) = Σ_{i=2}^{m} 1/i, which is
O(log m). Substituting into the previous expression, we find
that the communication complexity is O(n(log m)^2).
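For concreteness, the following small simulation (ours; the
plurality-loser tie-breaking rule and the charge of ceil(log2 m) bits
per announcement are assumptions of the illustration) runs this
protocol and counts the bits communicated:

import math

def stv_protocol_cost(profile, m):
    # profile: list of rankings (lists of candidate indices, best first)
    bits_per_msg = math.ceil(math.log2(m)) if m > 1 else 1
    remaining = set(range(m))
    favorite = [r[0] for r in profile]          # initial announcements
    bits = len(profile) * bits_per_msg
    while len(remaining) > 1:
        counts = {c: 0 for c in remaining}
        for f in favorite:
            counts[f] += 1
        loser = min(remaining, key=lambda c: (counts[c], c))  # break ties by index
        remaining.remove(loser)
        for v, r in enumerate(profile):
            if favorite[v] == loser:            # only these voters speak again
                favorite[v] = next(c for c in r if c in remaining)
                bits += bits_per_msg
    return remaining.pop(), bits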
Theorem 11. The nondeterministic communication
complexity of the STV rule is Ω(n log m) (even to decide whether
a given candidate a wins).
Proof. We omit this proof because of space constraints.
Theorem 12. The deterministic communication
complexity of the approval rule is O(nm).
Proof. Approving or disapproving of a candidate requires
only one bit of information, so every voter can simply
approve or disapprove of every candidate for a total
communication of nm bits.
Theorem 13. The nondeterministic communication
complexity of the approval rule is Ω(nm) (even to decide whether
a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where
m' = m − 1 and n' = (n − 1)/4. For every vector
(S_1, S_2, . . . , S_{n'}) consisting of n' subsets S_i ⊆ C − {a}, let
the following vector of votes be an element of the fooling
set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 approve
S_i ∪ {a}.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i approve C −
(S_i ∪ {a}).
• Let voter 4n' + 1 = n approve {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'},
and that candidate a wins in each vector of votes in the
fooling set (a is approved by 2n' + 1 voters, whereas each
other candidate is approved by only 2n' voters). All that
remains to show is that for any two distinct vectors of votes
in the fooling set, we can let each of the voters vote
according to one of these two vectors in such a way that a
loses. Let the first vote vector correspond to the vector
(S^1_1, S^1_2, . . . , S^1_{n'}), and let the second vote vector correspond
to the vector (S^2_1, S^2_2, . . . , S^2_{n'}). For some i, we must have
S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss
of generality, suppose S^1_i ⊈ S^2_i, and let b be some candidate
in S^1_i − S^2_i. Now, construct a new vote vector by taking
votes 4i − 3 and 4i − 2 from the first vote vector, and the
remaining votes from the second vote vector. In this newly
constructed vote vector, a is still approved by 2n' + 1 votes.
However, b is approved by 2n' + 2 votes, for the following
reason. Whereas voters 4i − 3 and 4i − 2 do not approve b in
the second vote vector (because b ∉ S^2_i), voters 4i − 3 and
4i − 2 do approve b in the first vote vector (because b ∈ S^1_i).
It follows that b's score in the newly constructed vote vector
is b's score in the second vote vector (2n'), plus two. So, a
is not the winner in the newly constructed vote vector, and
hence we have a correct fooling set.
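The construction is small enough to check mechanically. The snippet
below (our illustration of the proof, not part of it) instantiates the
fooling set with m' = 2 and n' = 1, i.e., candidates {a, x, y} and
n = 5 voters, and verifies both halves of the argument:

from itertools import combinations

cands = ['a', 'x', 'y']                    # m' = 2, n' = 1, so n = 5 voters

def fooling_vector(S):                     # S is a subset of {'x', 'y'}
    S = set(S)
    return [S | {'a'}, S | {'a'},          # voters 1, 2 approve S + {a}
            set(cands) - (S | {'a'}),      # voters 3, 4 approve the rest
            set(cands) - (S | {'a'}),
            {'a'}]                         # the last voter approves only a

def scores(votes):
    return {c: sum(c in v for v in votes) for c in cands}

for k in range(3):
    for S in combinations(['x', 'y'], k):
        s = scores(fooling_vector(S))
        assert s['a'] == 3 and s['x'] == 2 and s['y'] == 2   # a always wins

# mix two elements of the fooling set as in the proof: S1 = {x}, S2 = {}
v1, v2 = fooling_vector({'x'}), fooling_vector(set())
s = scores(v1[:2] + v2[2:])                # votes 1, 2 from the first vector
assert s['x'] == 4 > s['a'] == 3           # now x defeats a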
Interestingly, an Ω(m) lower bound can be obtained even
for the problem of finding a candidate that is approved by
more than one voter [20].
Theorem 14. The deterministic communication
complexity of the Condorcet rule is O(nm).
Proof. We maintain a set of active candidates S which
is initialized to C. At each stage, we choose two of the active
candidates (say, the two candidates with the lowest indices),
and we let each voter communicate which of the two
candidates she prefers. (Such a stage requires the communication
of n bits, one per voter.) The candidate preferred by fewer
voters (the loser of the pairwise election) is removed from
S. (If the pairwise election is tied, both candidates are
removed.) After at most m − 1 iterations, only one candidate
is left (or zero candidates are left, in which case there is no
Condorcet winner). Let a be the remaining candidate. To
find out whether candidate a is the Condorcet winner, let
each voter communicate, for every candidate c ≠ a, whether
she prefers a to c. (This requires the communication of at
most n(m − 1) bits.) This is enough to establish whether
a won each of its pairwise elections (and thus, whether a is
the Condorcet winner).
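As an illustration, the two phases of this protocol can be
implemented directly; in the sketch below (ours), the helper prefers
stands in for the single bit each voter would transmit, and the bit
counter makes the O(nm) bound visible:

def condorcet_protocol(profile, m):
    # profile: list of rankings (lists of candidate indices, best first)
    n = len(profile)
    prefers = lambda r, x, y: r.index(x) < r.index(y)
    active, bits = list(range(m)), 0
    while len(active) > 1:
        x, y = active[0], active[1]              # two lowest-index candidates
        vx = sum(prefers(r, x, y) for r in profile)
        bits += n                                # one bit per voter
        if vx > n - vx: active.remove(y)
        elif vx < n - vx: active.remove(x)
        else: active.remove(x); active.remove(y) # tie: both are removed
    if not active:
        return None, bits                        # no Condorcet winner
    a = active[0]
    for c in range(m):                           # verification pass
        if c == a: continue
        bits += n
        if sum(prefers(r, a, c) for r in profile) * 2 <= n:
            return None, bits                    # a loses or ties some election
    return a, bits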
Theorem 15. The nondeterministic communication
complexity of the Condorcet rule is Ω(nm) (even to decide whether
a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where
m' = m − 1 and n' = (n − 1)/2. For every vector
(S_1, S_2, . . . , S_{n'}) consisting of n' subsets S_i ⊆ C − {a}, let
the following vector of votes be an element of the fooling
set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates
S_i ≻ a ≻ C − (S_i ∪ {a}).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates C −
(S_i ∪ {a}) ≻ a ≻ S_i.
• Let voter 2n' + 1 = n rank the candidates a ≻ C − {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'},
and that candidate a wins in each vector of votes in the
fooling set (a wins each of its pairwise elections by a single
vote). All that remains to show is that for any two distinct
vectors of votes in the fooling set, we can let each of the
voters vote according to one of these two vectors in such a
way that a loses. Let the first vote vector correspond to
the vector (S^1_1, S^1_2, . . . , S^1_{n'}), and let the second vote vector
correspond to the vector (S^2_1, S^2_2, . . . , S^2_{n'}). For some i, we
must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i.
Without loss of generality, suppose S^1_i ⊈ S^2_i, and let b be
some candidate in S^1_i − S^2_i. Now, construct a new vote
vector by taking vote 2i − 1 from the first vote vector, and
the remaining votes from the second vote vector. In this
newly constructed vote vector, b wins its pairwise election
against a by one vote (vote 2i − 1 ranks b above a in the
newly constructed vote vector because b ∈ S^1_i, whereas in
the second vote vector vote 2i − 1 ranked a above b because
b ∉ S^2_i). So, a is not the Condorcet winner in the newly
constructed vote vector, and hence we have a correct fooling
set.
Theorem 16. The deterministic communication
complexity of the cup rule is O(nm).
Proof. Consider the following simple communication
protocol. First, let all the voters communicate, for every one of
the matchups in the first round, which of its two candidates
they prefer. After this, the matchups for the second round
are known, so let all the voters communicate which
candidate they prefer in each matchup in the second round-etc.
Because communicating which of two candidates is preferred
requires only one bit per voter, and because there are only
m − 1 matchups in total, this communication protocol
requires O(nm) communication.
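A sketch of this round-by-round protocol (ours), for the special case
of a balanced bracket on a power-of-two number of candidates, with
ties broken toward the second candidate by assumption:

def cup_protocol(profile, bracket):
    # bracket: candidates in seeding order; len(bracket) a power of two
    n, bits = len(profile), 0
    prefers = lambda r, x, y: r.index(x) < r.index(y)
    while len(bracket) > 1:
        winners = []
        for x, y in zip(bracket[::2], bracket[1::2]):  # this round's matchups
            bits += n                                  # one bit per voter
            vx = sum(prefers(r, x, y) for r in profile)
            winners.append(x if vx > n - vx else y)    # ties go to y (assumed)
        bracket = winners
    return bracket[0], bits                            # total: n(m - 1) bits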
Theorem 17. The nondeterministic communication
complexity of the cup rule is Ω(nm) (even to decide whether a
given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where
m' = (m − 1)/2 and n' = (n − 7)/2. Given that m + 1 is a
power of 2, so that one candidate gets a bye (that is, does not
face an opponent) in the first round, let a be the candidate
with the bye. Of the m' first-round matchups, let l_j denote
the one (left) candidate in the jth matchup, and let r_j be
the other (right) candidate. Let L = {l_j : 1 ≤ j ≤ m'}
and R = {r_j : 1 ≤ j ≤ m'}, so that C = L ∪ R ∪ {a}.
[Figure 1: The schedule for the cup rule used in the proof of
Theorem 17 (a receives the bye; l_j meets r_j in the jth first-round
matchup).]
For every vector (S_1, S_2, . . . , S_{n'}) consisting of n' subsets
S_i ⊆ R, let the following vector of votes be an element of
the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates
S_i ≻ L ≻ a ≻ R − S_i.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates R −
S_i ≻ L ≻ a ≻ S_i.
• Let voters 2n' + 1 = n − 6, 2n' + 2 = n − 5, 2n' + 3 = n − 4
rank the candidates L ≻ a ≻ R.
• Let voters 2n' + 4 = n − 3, 2n' + 5 = n − 2 rank the
candidates a ≻ r_1 ≻ l_1 ≻ r_2 ≻ l_2 ≻ . . . ≻ r_{m'} ≻ l_{m'}.
• Let voters 2n' + 6 = n − 1, 2n' + 7 = n rank the
candidates r_{m'} ≻ l_{m'} ≻ r_{m'−1} ≻ l_{m'−1} ≻ . . . ≻ r_1 ≻ l_1 ≻ a.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}.
Also, candidate a wins in each vector of votes in the fooling
set, for the following reasons. Each candidate r_j defeats its
opponent l_j in the first round. (For any 1 ≤ i ≤ n', the
net effect of votes 2i − 1 and 2i on the pairwise election
between r_j and l_j is zero; votes n − 6, n − 5, n − 4 prefer
l_j to r_j, but votes n − 3, n − 2, n − 1, n all prefer r_j to l_j.)
Moreover, a defeats every r_j in their pairwise election. (For
any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the
pairwise election between a and r_j is zero; votes n − 1, n
prefer r_j to a, but votes n − 6, n − 5, n − 4, n − 3, n − 2 all
prefer a to r_j.) It follows that a will defeat all the candidates
that it faces.
All that remains to show is that for any two distinct
vectors of votes in the fooling set, we can let each of the voters
vote according to one of these two vectors in such a way that
a loses. Let the first vote vector correspond to the vector
(S^1_1, S^1_2, . . . , S^1_{n'}), and let the second vote vector correspond
to the vector (S^2_1, S^2_2, . . . , S^2_{n'}). For some i, we must have
S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without
loss of generality, suppose S^1_i ⊈ S^2_i, and let r_j be some
candidate in S^1_i − S^2_i. Now, construct a new vote vector by
taking vote 2i from the first vote vector, and the remaining
votes from the second vote vector. We note that, whereas
in the second vote vector vote 2i preferred r_j to l_j (because
r_j ∈ R − S^2_i), in the newly constructed vote vector this is no
longer the case (because r_j ∈ S^1_i). It follows that, whereas
in the second vote vector, r_j defeated l_j in the first round
by one vote, in the newly constructed vote vector, l_j
defeats r_j in the first round. Thus, at least one l_j advances
to the second round after defeating its opponent r_j. Now,
we observe that in the newly constructed vote vector, any
l_k wins its pairwise election against any r_q with q ≠ k. This
is because among the first 2n' votes, at least n' − 1 prefer l_k
to r_q; votes n − 6, n − 5, n − 4 prefer l_k to r_q; and, because
q ≠ k, either votes n − 3, n − 2 prefer l_k to r_q (if k < q),
or votes n − 1, n prefer l_k to r_q (if k > q). Thus, at least
n' + 4 = (n + 1)/2 > n/2 votes prefer l_k to r_q. Moreover,
any l_k wins its pairwise election against a. This is because
only votes n − 3 and n − 2 prefer a to l_k. It follows that,
after the first round, any surviving candidate l_k can only
lose a matchup against another surviving l_{k'}, so that one of
the l_k must win the election. So, a is not the winner in the
newly constructed vote vector, and hence we have a correct
fooling set.
Theorem 18. The deterministic communication
complexity of the Bucklin rule is O(nm).
Proof. Let l be the minimum integer for which there is
a candidate who is ranked among the top l candidates by
more than half the votes. We will do a binary search for l.
At each point, we will have a lower bound lL which is smaller
than l (initialized to 0), and an upper bound lH which is at
least l (initialized to m). While lH − lL > 1, we continue
by finding out whether (lL + lH)/2 is smaller than l, after
which we can update the bounds.
To find out whether a number k is smaller than l, we
determine every voter's k most preferred candidates.
Every voter can communicate which candidates are among her
k most preferred candidates using m bits (for each
candidate, indicate whether the candidate is among the top k or
not), but because the binary search requires log m iterations,
this gives us an upper bound of O((log m)nm), which is not
strong enough. However, if lL < k < lH, and we already
know a voter's lL most preferred candidates, as well as her lH
most preferred candidates, then the voter no longer needs to
communicate whether the lL most preferred candidates are
among her k most preferred candidates (because they must
be), and she no longer needs to communicate whether the
m − lH least preferred candidates are among her k most
preferred candidates (because they cannot be). Thus the voter
needs to communicate only m − lL − (m − lH) = lH − lL bits
in any given stage. Because in each stage lH − lL is (roughly)
halved, each voter in total communicates only (roughly)
m + m/2 + m/4 + . . . ≤ 2m bits.
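The binary search itself can be sketched as follows (our
illustration; it simulates only the search for l, not the incremental
encoding that brings each voter's total communication down to roughly
2m bits):

def bucklin_depth(profile, m):
    # profile: list of rankings (lists of candidates, best first)
    n = len(profile)
    def someone_has_majority(k):         # is some candidate in the top k
        counts = {}                      # of more than half the voters?
        for r in profile:
            for c in r[:k]:
                counts[c] = counts.get(c, 0) + 1
        return any(2 * v > n for v in counts.values())
    lo, hi = 0, m                        # invariant: lo < l <= hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if someone_has_majority(mid): hi = mid
        else: lo = mid
    return hi                            # this is l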
Theorem 19. The nondeterministic communication
complexity of the Bucklin rule is Ω(nm) (even to decide whether
a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where
m' = (m − 1)/2 and n' = n/2. We write the set of candidates
as the following disjoint union: C = {a} ∪ L ∪ R where
L = {l_1, l_2, . . . , l_{m'}} and R = {r_1, r_2, . . . , r_{m'}}. For any
subset S ⊆ {1, 2, . . . , m'}, let L(S) = {l_i : i ∈ S} and let
R(S) = {r_i : i ∈ S}. For every vector (S_1, S_2, . . . , S_{n'})
consisting of n' sets S_i ⊆ {1, 2, . . . , m'}, let the following
vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates
L(S_i) ≻ R − R(S_i) ≻ a ≻ L − L(S_i) ≻ R(S_i).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates L −
L(S_i) ≻ R(S_i) ≻ a ≻ L(S_i) ≻ R − R(S_i).
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'},
and that candidate a wins in each vector of votes in the
fooling set, for the following reason. Each candidate in C − {a}
is ranked among the top m' candidates by exactly half the
voters (which is not enough to win). Thus, we need to look
at the voters' top m' + 1 candidates, and a is ranked (m' + 1)th
by all voters. All that remains to show is that for any two
distinct vectors of votes in the fooling set, we can let each of
the voters vote according to one of these two vectors in such
a way that a loses. Let the first vote vector correspond to
the vector (S^1_1, S^1_2, . . . , S^1_{n'}), and let the second vote vector
correspond to the vector (S^2_1, S^2_2, . . . , S^2_{n'}). For some i, we
must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i.
Without loss of generality, suppose S^1_i ⊈ S^2_i, and let j be
some integer in S^1_i − S^2_i. Now, construct a new vote vector
by taking vote 2i − 1 from the first vote vector, and the
remaining votes from the second vote vector. In this newly
constructed vote vector, a is still ranked (m' + 1)th by all
votes. However, l_j is ranked among the top m' candidates
by n' + 1 = n/2 + 1 votes. This is because whereas vote
2i − 1 does not rank l_j among the top m' candidates in the
second vote vector (because j ∉ S^2_i, we have l_j ∉ L(S^2_i)),
vote 2i − 1 does rank l_j among the top m' candidates in the
first vote vector (because j ∈ S^1_i, we have l_j ∈ L(S^1_i)). So, a
is not the winner in the newly constructed vote vector, and
hence we have a correct fooling set.
Theorem 20. The nondeterministic communication
complexity of the ranked pairs rule is Ω(nm log m) (even to
decide whether a given candidate a wins).
Proof. We omit this proof because of space constraints.
5. DISCUSSION
One key obstacle to using voting for preference
aggregation is the communication burden that an election places
on the voters. By lowering this burden, it may become
feasible to conduct more elections over more issues. In the
limit, this could lead to a shift from representational
government to a system in which most issues are decided by
referenda-a veritable e-democracy. In this paper, we
analyzed the communication complexity of the common voting
rules. Knowing which voting rules require little
communication is especially important when the issue to be voted
on is of low enough importance that the following is true:
the parties involved are willing to accept a rule that tends
to produce outcomes that are slightly less representative of
the voters' preferences, if this rule reduces the
communication burden on the voters significantly. The following table
summarizes the results we obtained.
Rule Lower bound Upper bound
plurality Ω(n log m) O(n log m)
plurality w/ runoff Ω(n log m) O(n log m)
STV Ω(n log m) O(n(log m)^2)
Condorcet Ω(nm) O(nm)
approval Ω(nm) O(nm)
Bucklin Ω(nm) O(nm)
cup Ω(nm) O(nm)
maximin Ω(nm) O(nm)
Borda Ω(nm log m) O(nm log m)
Copeland Ω(nm log m) O(nm log m)
ranked pairs Ω(nm log m) O(nm log m)
Communication complexity of voting rules, sorted from low
to high. All of the upper bounds are deterministic (with
the exception of maximin, for which the best deterministic
upper bound we proved is O(nm log m)). All of the lower
bounds hold even for nondeterministic communication and
even just for determining whether a given candidate a is
the winner.
One area of future research is to study what happens when
we restrict our attention to communication protocols that
do not reveal any strategically useful information. This
restriction may invalidate some of the upper bounds that we
derived using multistage communication protocols. Also, all
of our bounds are worst-case bounds. It may be possible to
outperform these bounds when the distribution of votes has
additional structure.
When deciding which voting rule to use for an election,
there are many considerations to take into account. The
voting rules that we studied in this paper are the most
common ones that have survived the test of time. One way to
select among these rules is to consider recent results on
complexity. The table above shows that from a communication
complexity perspective, plurality, plurality with runoff, and
STV are preferable. However, plurality has the undesirable
property that it is computationally easy to manipulate by
voting strategically [3, 7]. Plurality with runoff is NP-hard
to manipulate by a coalition of weighted voters, or by an
individual that faces correlated uncertainty about the
others" votes [7, 6]. STV is NP-hard to manipulate in those
settings as well [7], but also by an individual with perfect
knowledge of the others" votes (when the number of
candidates is unbounded) [2]. Therefore, STV is more robust,
although it may require slightly more worst-case
communication as per the table above. Yet other selection criteria
are the computational complexity of determining whether
enough information has been elicited to declare a winner,
and that of determining the optimal sequence of queries [8].
6. REFERENCES
[1] Lawrence Ausubel and Paul Milgrom. Ascending auctions
with package bidding. Frontiers of Theoretical Economics,
1, 2002. No. 1, Article 1.
[2] John Bartholdi, III and James Orlin. Single transferable
vote resists strategic voting. Social Choice and Welfare,
8(4):341-354, 1991.
[3] John Bartholdi, III, Craig Tovey, and Michael Trick. The
computational difficulty of manipulating an election. Social
Choice and Welfare, 6(3):227-241, 1989.
[4] Avrim Blum, Jeffrey Jackson, Tuomas Sandholm, and
Martin Zinkevich. Preference elicitation and query learning.
Journal of Machine Learning Research, 5:649-667, 2004.
[5] Wolfram Conen and Tuomas Sandholm. Preference
elicitation in combinatorial auctions: Extended abstract. In
Proceedings of the ACM Conference on Electronic
Commerce (ACM-EC), pages 256-259, 2001.
[6] Vincent Conitzer, Jerome Lang, and Tuomas Sandholm.
How many candidates are needed to make elections hard to
manipulate? In Theoretical Aspects of Rationality and
Knowledge (TARK), pages 201-214, 2003.
[7] Vincent Conitzer and Tuomas Sandholm. Complexity of
manipulating elections with few candidates. In Proceedings
of the National Conference on Artificial Intelligence
(AAAI), pages 314-319, 2002.
[8] Vincent Conitzer and Tuomas Sandholm. Vote elicitation:
Complexity and strategy-proofness. In Proceedings of the
National Conference on Artificial Intelligence (AAAI),
pages 392-397, 2002.
[9] Sven de Vries, James Schummer, and Rakesh V. Vohra. On
ascending auctions for heterogeneous objects, 2003. Draft.
[10] Allan Gibbard. Manipulation of voting schemes.
Econometrica, 41:587-602, 1973.
[11] Benoit Hudson and Tuomas Sandholm. Effectiveness of
query types and policies for preference elicitation in
combinatorial auctions. In International Conference on
Autonomous Agents and Multi-Agent Systems (AAMAS),
pages 386-393, 2004.
[12] Eyal Kushilevitz and Noam Nisan. Communication Complexity.
Cambridge University Press, 1997.
[13] Sébastien Lahaie and David Parkes. Applying learning
algorithms to preference elicitation. In Proceedings of the
ACM Conference on Electronic Commerce, 2004.
[14] Noam Nisan and Ilya Segal. The communication
requirements of efficient allocations and supporting prices.
Journal of Economic Theory, 2005. Forthcoming.
[15] David Parkes. iBundle: An efficient ascending price bundle
auction. In Proceedings of the ACM Conference on
Electronic Commerce (ACM-EC), pages 148-157, 1999.
[16] Tuomas Sandholm. An implementation of the contract net
protocol based on marginal cost calculations. In
Proceedings of the National Conference on Artificial
Intelligence (AAAI), pages 256-262, 1993.
[17] Tuomas Sandholm and Craig Boutilier. Preference
elicitation in combinatorial auctions. In Peter Cramton,
Yoav Shoham, and Richard Steinberg, editors,
Combinatorial Auctions, chapter 10. MIT Press, 2005.
[18] Paolo Santi, Vincent Conitzer, and Tuomas Sandholm.
Towards a characterization of polynomial preference
elicitation with value queries in combinatorial auctions. In
Conference on Learning Theory (COLT), pages 1-16, 2004.
[19] Mark Satterthwaite. Strategy-proofness and Arrow"s
conditions: existence and correspondence theorems for
voting procedures and social welfare functions. Journal of
Economic Theory, 10:187-217, 1975.
[20] Ilya Segal. The communication requirements of social choice
rules and supporting budget sets, 2004. Draft. Presented at
the DIMACS Workshop on Computational Issues in
Auction Design, Rutgers University, New Jersey, USA.
[21] Peter Wurman and Michael Wellman. AkBA: A
progressive, anonymous-price combinatorial auction. In
Proceedings of the ACM Conference on Electronic
Commerce (ACM-EC), pages 21-29, 2000.
[22] A. C. Yao. Some complexity questions related to distributed
computing. In Proceedings of the 11th ACM symposium on
theory of computing (STOC), pages 209-213, 1979.
[23] Martin Zinkevich, Avrim Blum, and Tuomas Sandholm. On
polynomial-time preference elicitation with value queries.
In Proceedings of the ACM Conference on Electronic
Commerce (ACM-EC), pages 176-185, 2003.
| communication;protocol;preference aggregation;maximin;voting rule;complexity;preference;stv;vote;resource allocation;communication complexity;elicitation problem |
train_J-51 | Complexity of (Iterated) Dominance∗ | We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). ∗This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. | 1. INTRODUCTION
In multiagent systems with self-interested agents, the
optimal action for one agent may depend on the actions taken
by other agents. In such settings, the agents require tools
from game theory to rationally decide on an action. Game
theory offers various formal models of strategic settings-the
best-known of which is a game in normal (or matrix) form,
specifying a utility (payoff) for each agent for each
combination of strategies that the agents choose-as well as
solution concepts, which, given a game, specify which outcomes
are reasonable (under various assumptions of rationality and
common knowledge).
Probably the best-known (and certainly the most-studied)
solution concept is that of Nash equilibrium. A Nash
equilibrium specifies a strategy for each player, in such a way that
no player has an incentive to (unilaterally) deviate from the
prescribed strategy. Recently, numerous papers have
studied computing Nash equilibria in various settings [9, 4, 12,
3, 13, 14], and the complexity of constructing a Nash
equilibrium in normal form games has been labeled one of the
two most important open problems on the boundary of P
today [20].
The problem of computing solutions according to the
perhaps more elementary solution concepts of dominance and
iterated dominance has received much less attention. (After
an early short paper on an easy case [11], the main
computational study of these concepts has actually taken place
in a paper in the game theory community [7].1
) A strategy
strictly dominates another strategy if it performs strictly
1
This is not to say that computer scientists have ignored
dominance altogether. For example, simple dominance checks
are sometimes used as a subroutine in searching for Nash
equilibria [21].
better against all vectors of opponent strategies, and weakly
dominates it if it performs at least as well against all vectors
of opponent strategies, and strictly better against at least
one. The idea is that dominated strategies can be eliminated
from consideration. In iterated dominance, the elimination
proceeds in rounds, and becomes easier as more strategies
are eliminated: in any given round, the dominating
strategy no longer needs to perform better than or as well as the
dominated strategy against opponent strategies that were
eliminated in earlier rounds. Computing solutions
according to (iterated) dominance is important for at least the
following reasons: 1) it can be computationally easier than
computing (for instance) a Nash equilibrium (and therefore
it can be useful as a preprocessing step in computing a Nash
equilibrium), and 2) (iterated) dominance requires a weaker
rationality assumption on the players than (for instance)
Nash equilibrium, and therefore solutions derived according
to it are more likely to occur.
In this paper, we study some fundamental computational
questions concerning dominance and iterated dominance,
including how hard it is to check whether a given strategy can
be eliminated by each of the variants of these notions. The
rest of the paper is organized as follows. In Section 2, we
briefly review definitions and basic properties of normal form
games, strict and weak dominance, and iterated strict and
weak dominance. In the remaining sections, we study
computational aspects of dominance and iterated dominance. In
Section 3, we study one-shot (not iterated) dominance. In
Section 4, we study iterated dominance. In Section 5, we
study dominance and iterated dominance when the
dominating strategy can only place probability on a few pure
strategies. Finally, in Section 6, we study dominance and
iterated dominance in Bayesian games.
2. DEFINITIONS AND BASIC PROPERTIES
In this section, we briefly review normal form games, as
well as dominance and iterated dominance (both strict and
weak). An n-player normal form game is defined as follows.
Definition 1. A normal form game is given by a set of
players {1, 2, . . . , n}; and, for each player i, a (finite) set of
pure strategies Σi and a utility function ui : Σ1 × Σ2 × . . . ×
Σn → R (where ui(σ1, σ2, . . . , σn) denotes player i's utility
when each player j plays action σj).
The two main notions of dominance are defined as follows.
Definition 2. Player i"s strategy σi is said to be strictly
dominated by player i"s strategy σi if for any vector of
strategies σ−i for the other players, ui(σi, σ−i) > ui(σi, σ−i).
Player i"s strategy σi is said to be weakly dominated by
player i"s strategy σi if for any vector of strategies σ−i for the
other players, ui(σi, σ−i) ≥ ui(σi, σ−i), and for at least one
vector of strategies σ−i for the other players, ui(σi, σ−i) >
ui(σi, σ−i).
In this definition, it is sometimes allowed for the
dominating strategy σi to be a mixed strategy, that is, a probability
distribution over pure strategies. In this case, the utilities in
the definition are the expected utilities.2
There are other
notions of dominance, such as very weak dominance (in which
no strict inequality is required, so two strategies can
dominate each other), but we will not study them here. When we
are looking at the dominance relations for player i, the other
players (−i) can be thought of as a single player.3
Therefore, in the rest of the paper, when we study one-shot (not
iterated) dominance, we will focus without loss of generality
on two-player games.4
In two-player games, we will
generally refer to the players as r (row) and c (column) rather
than 1 and 2.
In iterated dominance, dominated strategies are removed
from the game, and no longer have any effect on future
dominance relations. Iterated dominance can eliminate more
strategies than dominance, as follows. σ′r may originally
not dominate σr because the latter performs better against
σc; but then, once σc is removed because it is dominated by
σ′c, σ′r dominates σr, and the latter can be removed. For
example, in the following game, R can be removed first, after
which D is dominated.
L R
U 1, 1 0, 0
D 0, 1 1, 0
Either strict or weak dominance can be used in the
definition of iterated dominance. We note that the process of
iterated dominance is never helped by removing a dominated
mixed strategy, for the following reason. If σ′i gives player i
a higher utility than σi against mixed strategy σj for player
j ≠ i (and strategies σ−{i,j} for the other players), then for
at least one pure strategy σ′j that σj places positive
probability on, σ′i must perform better than σi against σ′j (and
strategies σ−{i,j} for the other players). Thus, removing the
mixed strategy σj does not introduce any new dominances.
More detailed discussions and examples can be found in
standard texts on microeconomics or game theory [17, 5].
We are now ready to move on to the core of this paper.
3. DOMINANCE (NOT ITERATED)
In this section, we study the notion of one-shot (not
iterated) dominance. As a first observation, checking whether a
given strategy is strictly (weakly) dominated by some pure
strategy is straightforward, by checking, for every pure
strategy for that player, whether the latter strategy performs
strictly better against all the opponent"s strategies (at least
as well against all the opponent"s strategies, and strictly
2
The dominated strategy σi is, of course, also allowed to be
mixed, but this has no technical implications for the paper:
when we study one-shot dominance, we ask whether a given
strategy is dominated, and it does not matter whether the
given strategy is pure or mixed; when we study iterated
dominance, there is no use in eliminating mixed strategies,
as we will see shortly.
3
This player may have a very large strategy space (one pure
strategy for every vector of pure strategies for the players
that are being replaced). Nevertheless, this will not result in
an increase in our representation size, because the original
representation already had to specify utilities for each of
these vectors.
4
We note that a restriction to two-player games would not
be without loss of generality for iterated dominance. This
is because for iterated dominance, we need to look at the
dominated strategies of each individual player, so we cannot
merge any players.
better against at least one).5
Next, we show that
checking whether a given strategy is dominated by some mixed
strategy can be done in polynomial time by solving a single
linear program. (Similar linear programs have been given
before [18]; we present the result here for completeness, and
because we will build on the linear programs given below in
Theorem 6.)
Proposition 1. Given the row player"s utilities, a
subset Dr of the row player"s pure strategies Σr, and a
distinguished strategy σ∗
r for the row player, we can check in time
polynomial in the size of the game (by solving a single linear
program of polynomial size) whether there exists some mixed
strategy σr, that places positive probability only on strategies
in Dr and dominates σ∗
r , both for strict and for weak
dominance.
Proof. Let pdr be the probability that σr places on dr ∈
Dr. We will solve a single linear program in each of our
algorithms; linear programs can be solved in polynomial
time [10]. For strict dominance, the question is whether the
pdr can be set so that for every pure strategy for the column
player σc ∈ Σc,
dr∈Dr
pdr ur(dr, σc) > ur(σ∗
r , σc). Because
the inequality must be strict, we cannot solve this directly
by linear programming. We proceed as follows. Because the
game is finite, we may assume without loss of generality that
all utilities are positive (if not, simply add a constant to all
utilities.) Solve the following linear program:
minimize
dr∈Dr
pdr
such that
for any σc ∈ Σc,
dr∈Dr
pdr ur(dr, σc) ≥ ur(σ∗
r , σc).
If σ∗
r is strictly dominated by some mixed strategy, this
linear program has a solution with objective value < 1.
(The dominating strategy is a feasible solution with
objective value exactly 1. Because no constraint is binding for this
solution, we can reduce one of the probabilities slightly
without affecting feasibility, thereby obtaining a solution with
objective value < 1.) Moreover, if this linear program has a
solution with objective value < 1, there is a mixed strategy
strictly dominating σ∗
r , which can be obtained by taking the
LP solution and adding the remaining probability to any
strategy (because all the utilities are positive, this will add
to the left side of any inequality, so all inequalities will
become strict).
For weak dominance, we can solve the following linear
program:
maximize
σc∈Σc
((
dr∈Dr
pdr ur(dr, σc)) − ur(σ∗
r , σc))
such that
for any σc ∈ Σc,
dr∈Dr
pdr ur(dr, σc) ≥ ur(σ∗
r , σc);
dr∈Dr
pdr = 1.
If σ∗
r is weakly dominated by some mixed strategy, then
that mixed strategy is a feasible solution to this program
with objective value > 0, because for at least one strategy
σc ∈ Σc we have (
dr∈Dr
pdr ur(dr, σc)) − ur(σ∗
r , σc) > 0. On
the other hand, if this program has a solution with
objective value > 0, then for at least one strategy σc ∈ Σc we
5
Recall that the assumption of a single opponent (that is,
the assumption of two players) is without loss of generality
for one-shot dominance.
must have (
dr∈Dr
pdr ur(dr, σc)) − ur(σ∗
r , σc) > 0, and thus
the linear program"s solution is a weakly dominating mixed
strategy.
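As an illustration, the strict-dominance program can be handed to an
off-the-shelf LP solver. The sketch below uses scipy, which is our
choice for the example rather than anything prescribed by the paper;
the function name strictly_dominated and the matrix encoding are ours
as well.

import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, star, D):
    # U: the row player's payoff matrix (rows = pure strategies);
    # star: index of the distinguished strategy sigma*_r;
    # D: list of row indices allowed positive probability.
    U = np.asarray(U, dtype=float)
    U = U - U.min() + 1.0               # make all utilities positive (WLOG)
    c = np.ones(len(D))                 # minimize sum of the p_d
    # for each column j: sum_d p_d U[d, j] >= U[star, j];
    # linprog uses A_ub @ x <= b_ub, so negate both sides
    A_ub = -U[D, :].T
    b_ub = -U[star, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(D))
    return res.success and res.fun < 1 - 1e-9

The weak-dominance variant would instead maximize the total slack
Σ_{σc}((Σ_{dr} p_{dr} ur(dr, σc)) − ur(σ∗_r, σc)) subject to the same
inequalities plus Σ_{dr} p_{dr} = 1, as in the second program above.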
4. ITERATED DOMINANCE
We now move on to iterated dominance. It is well-known
that iterated strict dominance is path-independent [6,
19]-that is, if we remove dominated strategies until no more
dominated strategies remain, in the end the remaining
strategies for each player will be the same, regardless of the
order in which strategies are removed. Because of this, to
see whether a given strategy can be eliminated by iterated
strict dominance, all that needs to be done is to
repeatedly remove strategies that are strictly dominated, until no
more dominated strategies remain. Because we can check in
polynomial time whether any given strategy is dominated
(whether or not dominance by mixed strategies is allowed,
as described in Section 3), this whole procedure takes only
polynomial time. In the case of iterated dominance by pure
strategies with two players, Knuth et al. [11] slightly
improve on (speed up) the straightforward implementation of
this procedure by keeping track of, for each ordered pair of
strategies for a player, the number of opponent strategies
that prevent the first strategy from dominating the second.
Hereby the runtime for an m × n game is reduced from
O((m + n)^4) to O((m + n)^3). (Actually, they only study
very weak dominance (for which no strict inequalities are
required), but the approach is easily extended.)
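Concretely, the straightforward procedure can be sketched as follows
for two players and dominance by pure strategies (our illustration,
without the bookkeeping speedup of Knuth et al.); allowing dominance
by mixed strategies amounts to replacing the inner test with the
linear program of Proposition 1:

def iterated_strict_dominance(R, C):
    # R[i][j], C[i][j]: row/column payoffs; returns surviving index sets
    rows = set(range(len(R)))
    cols = set(range(len(R[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):            # is row i dominated by some row i2?
            if any(all(R[i2][j] > R[i][j] for j in cols)
                   for i2 in rows if i2 != i):
                rows.remove(i); changed = True
        for j in list(cols):            # symmetrically for columns
            if any(all(C[i][j2] > C[i][j] for i in rows)
                   for j2 in cols if j2 != j):
                cols.remove(j); changed = True
    return rows, cols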
In contrast, iterated weak dominance is known to be
path-dependent.6
For example, in the following game, using
iterated weak dominance we can eliminate M first, and then
D, or R first, and then U.
L M R
U 1, 1 0, 0 1, 0
D 1, 1 1, 0 0, 0
Therefore, while the procedure of removing weakly
dominated strategies until no more weakly dominated strategies
remain can certainly be executed in polynomial time, which
strategies survive in the end depends on the order in which
we remove the dominated strategies. We will investigate
two questions for iterated weak dominance: whether a given
strategy is eliminated in some path, and whether there is a
path to a unique solution (one pure strategy left per player).
We will show that both of these problems are
computationally hard.
Definition 3. Given a game in normal form and a
distinguished strategy σ∗, IWD-STRATEGY-ELIMINATION
asks whether there is some path of iterated weak dominance
that eliminates σ∗. Given a game in normal form,
IWD-UNIQUE-SOLUTION asks whether there is some path of
iterated weak dominance that leads to a unique solution (one
strategy left per player).
The following lemma shows a special case of normal form
games in which allowing for weak dominance by mixed
strategies (in addition to weak dominance by pure strategies) does
6
There is, however, a restriction of weak dominance called
nice weak dominance which is path-independent [15, 16]. For
an overview of path-independence results, see Apt [1].
90
not help. We will prove the hardness results in this setting,
so that they will hold whether or not dominance by mixed
strategies is allowed.
Lemma 1. Suppose that all the utilities in a game are in
{0, 1}. Then every pure strategy that is weakly dominated by
a mixed strategy is also weakly dominated by a pure strategy.
Proof. Suppose pure strategy σ is weakly dominated by
mixed strategy σ∗. If σ gets a utility of 1 against some
opponent strategy (or vector of opponent strategies if there
are more than 2 players), then all the pure strategies that
σ∗ places positive probability on must also get a utility of 1
against that opponent strategy (or else the expected utility
would be smaller than 1). Moreover, at least one of the
pure strategies that σ∗ places positive probability on must
get a utility of 1 against an opponent strategy that σ gets
0 against (or else the inequality would never be strict). It
follows that this pure strategy weakly dominates σ.
We are now ready to prove the main results of this section.
Theorem 1. IWD-STRATEGY-ELIMINATION is
NP-complete, even with 2 players, and with 0 and 1 being the
only utilities occurring in the matrix-whether or not
dominance by mixed strategies is allowed.
Proof. The problem is in NP because given a sequence
of strategies to be eliminated, we can easily check whether
this is a valid sequence of eliminations (even when
dominance by mixed strategies is allowed, using Proposition 1).
To show that the problem is NP-hard, we reduce an
arbitrary satisfiability instance (given by a nonempty set of
clauses C over a nonempty set of variables V , with
corresponding literals L = {+v : v ∈ V } ∪ {−v : v ∈ V }) to the
following IWD-STRATEGY-ELIMINATION instance. (In
this instance, we will specify that certain strategies are
uneliminable. A strategy σr can be made uneliminable, even
when 0 and 1 are the only allowed utilities, by adding
another strategy σ′r and another opponent strategy σ′c, so that:
1. σr and σ′r are the only strategies that give the row player
a utility of 1 against σ′c. 2. σr and σ′r always give the row
player the same utility. 3. σ′c is the only strategy that gives
the column player a utility of 1 against σ′r, but otherwise
σ′c always gives the column player utility 0. This makes it
impossible to eliminate any of these three strategies. We
will not explicitly specify the additional strategies to make
the proof more legible.)
In this proof, we will denote row player strategies by s, and
column player strategies by t, to improve legibility. Let the
row player's pure strategy set be given as follows. For every
variable v ∈ V , the row player has corresponding strategies
s^1_{+v}, s^2_{+v}, s^1_{−v}, s^2_{−v}. Additionally, the row player has the
following 2 strategies: s^1_0 and s^2_0, where s^2_0 = σ∗_r (that is, it is
the strategy we seek to eliminate). Finally, for every clause
c ∈ C, the row player has corresponding strategies s^1_c
(uneliminable) and s^2_c. Let the column player's pure strategy set
be given as follows. For every variable v ∈ V , the column
player has a corresponding strategy t_v. For every clause
c ∈ C, the column player has a corresponding strategy t_c,
and additionally, for every literal l ∈ L that occurs in c, a
strategy t_{c,l}. For every variable v ∈ V , the column player
has corresponding strategies t_{+v}, t_{−v} (both uneliminable).
Finally, the column player has three additional strategies:
t^1_0 (uneliminable), t^2_0, and t_1.
The utility function for the row player is given as follows:
• ur(s^1_{+v}, t_v) = 0 for all v ∈ V ;
• ur(s^2_{+v}, t_v) = 1 for all v ∈ V ;
• ur(s^1_{−v}, t_v) = 1 for all v ∈ V ;
• ur(s^2_{−v}, t_v) = 0 for all v ∈ V ;
• ur(s^1_{+v}, t_1) = 1 for all v ∈ V ;
• ur(s^2_{+v}, t_1) = 0 for all v ∈ V ;
• ur(s^1_{−v}, t_1) = 0 for all v ∈ V ;
• ur(s^2_{−v}, t_1) = 1 for all v ∈ V ;
• ur(s^b_{+v}, t_{+v}) = 1 for all v ∈ V and b ∈ {1, 2};
• ur(s^b_{−v}, t_{−v}) = 1 for all v ∈ V and b ∈ {1, 2};
• ur(s_l, t) = 0 otherwise for all l ∈ L and t ∈ S2;
• ur(s^1_0, t_c) = 0 for all c ∈ C;
• ur(s^2_0, t_c) = 1 for all c ∈ C;
• ur(s^b_0, t^1_0) = 1 for all b ∈ {1, 2};
• ur(s^1_0, t^2_0) = 1;
• ur(s^2_0, t^2_0) = 0;
• ur(s^b_0, t) = 0 otherwise for all b ∈ {1, 2} and t ∈ S2;
• ur(s^b_c, t) = 0 otherwise for all c ∈ C and b ∈ {1, 2};
and the row player's utility is 0 in every other case. The
utility function for the column player is given as follows:
• uc(s, t_v) = 0 for all v ∈ V and s ∈ S1;
• uc(s, t_1) = 0 for all s ∈ S1;
• uc(s^2_l, t_c) = 1 for all c ∈ C and l ∈ L where l ∈ c
(literal l occurs in clause c);
• uc(s^2_{l_2}, t_{c,l_1}) = 1 for all c ∈ C and l_1, l_2 ∈ L, l_1 ≠ l_2,
where l_2 ∈ c;
• uc(s^1_c, t_c) = 1 for all c ∈ C;
• uc(s^2_c, t_c) = 0 for all c ∈ C;
• uc(s^b_c, t_{c,l}) = 1 for all c ∈ C, l ∈ L, and b ∈ {1, 2};
• uc(s^2, t_c) = uc(s^2, t_{c,l}) = 0 otherwise for all c ∈ C and
l ∈ L;
and the column player's utility is 0 in every other case. We
now show that the two instances are equivalent.
First, suppose there is a solution to the satisfiability
instance: that is, a truth-value assignment to the variables in
V such that all clauses are satisfied. Then, consider the
following sequence of eliminations in our game: 1. For every
variable v that is set to true in the assignment, eliminate
t_v (which gives the column player utility 0 everywhere). 2.
Then, for every variable v that is set to true in the
assignment, eliminate s^2_{+v} using s^1_{+v} (which is possible because t_v
has been eliminated, and because t_1 has not been eliminated
(yet)). 3. Now eliminate t_1 (which gives the column player
utility 0 everywhere). 4. Next, for every variable v that is set
to false in the assignment, eliminate s^2_{−v} using s^1_{−v} (which
is possible because t_1 has been eliminated, and because t_v
has not been eliminated (yet)). 5. For every clause c which
has the variable corresponding to one of its positive literals
l = +v set to true in the assignment, eliminate t_c using t_{c,l}
(which is possible because s^2_l has been eliminated, and s^2_c
has not been eliminated (yet)). 6. For every clause c which
has the variable corresponding to one of its negative literals
l = −v set to false in the assignment, eliminate t_c using t_{c,l}
(which is possible because s^2_l has been eliminated, and s^2_c has
not been eliminated (yet)). 7. Because the assignment
satisfied the formula, all the t_c have now been eliminated. Thus,
we can eliminate s^2_0 = σ∗_r using s^1_0. It follows that there is a
solution to the IWD-STRATEGY-ELIMINATION instance.
Now suppose there is a solution to the IWD-STRATEGY-
ELIMINATION instance. By Lemma 1, we can assume that
all the dominances are by pure strategies. We first observe
that only s^1_0 can eliminate s^2_0 = σ∗_r, because it is the only
other strategy that gets the row player a utility of 1 against
t^1_0, and t^1_0 is uneliminable. However, because s^2_0 performs
better than s^1_0 against the t_c strategies, it follows that all of
the t_c strategies must be eliminated. For each c ∈ C, the
strategy t_c can only be eliminated by one of the strategies t_{c,l}
(with the same c), because these are the only other strategies
that get the column player a utility of 1 against s^1_c, and s^1_c is
uneliminable. But, in order for some t_{c,l} to eliminate t_c, s^2_l
must be eliminated first. Only s^1_l can eliminate s^2_l, because
it is the only other strategy that gets the row player a utility
of 1 against t_l, and t_l is uneliminable. We next show that for
every v ∈ V only one of s^2_{+v}, s^2_{−v} can be eliminated. This
is because in order for s^1_{+v} to eliminate s^2_{+v}, t_v needs to
have been eliminated and t_1, not (so t_v must be eliminated
before t_1); but in order for s^1_{−v} to eliminate s^2_{−v}, t_1 needs to
have been eliminated and t_v, not (so t_1 must be eliminated
before t_v). So, set v to true if s^2_{+v} is eliminated, and to false
otherwise. Because by the above, for every clause c, one of
the s^2_l with l ∈ c must be eliminated, it follows that this is
a satisfying assignment to the satisfiability instance.
Using Theorem 1, it is now (relatively) easy to show that
IWD-UNIQUE-SOLUTION is also NP-complete under the
same restrictions.
Theorem 2. IWD-UNIQUE-SOLUTION is NP-complete,
even with 2 players, and with 0 and 1 being the only
utilities occurring in the matrix-whether or not dominance by
mixed strategies is allowed.
Proof. Again, the problem is in NP because we can
nondeterministically choose the sequence of eliminations and
verify whether it is correct. To show NP-hardness, we reduce
an arbitrary IWD-STRATEGY-ELIMINATION instance to
the following IWD-UNIQUE-SOLUTION instance. Let all
the strategies for each player from the original instance
remain part of the new instance, and let the utilities resulting
from the players playing a pair of these strategies be the
same. We add three additional strategies σ^1_r, σ^2_r, σ^3_r for the
row player, and three additional strategies σ^1_c, σ^2_c, σ^3_c for the
column player. Let the additional utilities be as follows:
• ur(σr, σ^j_c) = 1 for all σr ∉ {σ^1_r, σ^2_r, σ^3_r} and j ∈ {2, 3};
• ur(σ^i_r, σc) = 1 for all i ∈ {1, 2, 3} and σc ∉ {σ^2_c, σ^3_c};
• ur(σ^i_r, σ^2_c) = 1 for all i ∈ {2, 3};
• ur(σ^1_r, σ^3_c) = 1;
• and the row player's utility is 0 in all other cases
involving a new strategy.
• uc(σ^3_r, σc) = 1 for all σc ∉ {σ^1_c, σ^2_c, σ^3_c};
• uc(σ∗_r, σ^j_c) = 1 for all j ∈ {2, 3} (σ∗_r is the strategy to
be eliminated in the original instance);
• uc(σ^i_r, σ^1_c) = 1 for all i ∈ {1, 2};
• uc(σ^1_r, σ^2_c) = 1;
• uc(σ^2_r, σ^3_c) = 1;
• and the column player's utility is 0 in all other cases
involving a new strategy.
We proceed to show that the two instances are equivalent.
First suppose there exists a solution to the original
IWD-STRATEGY-ELIMINATION instance. Then, perform the
same sequence of eliminations to eliminate σ∗_r in the new
IWD-UNIQUE-SOLUTION instance. (This is possible
because at any stage, any weak dominance for the row player
in the original instance is still a weak dominance in the new
instance, because the two strategies' utilities for the row
player are the same when the column player plays one of the
new strategies; and the same is true for the column player.)
Once σ∗_r is eliminated, let σ^1_c eliminate σ^2_c. (It performs
better against σ^2_r.) Then, let σ^1_r eliminate all the other
remaining strategies for the row player. (It always performs
better against either σ^1_c or σ^3_c.) Finally, σ^1_c is the unique best
response against σ^1_r among the column player's remaining
strategies, so let it eliminate all the other remaining
strategies for the column player. Thus, there exists a solution to
the IWD-UNIQUE-SOLUTION instance.
Now suppose there exists a solution to the IWD-UNIQUE-
SOLUTION instance. By Lemma 1, we can assume that all
the dominances are by pure strategies. We will show that
none of the new strategies (σ^1_r, σ^2_r, σ^3_r, σ^1_c, σ^2_c, σ^3_c) can either
eliminate another strategy, or be eliminated before σ∗_r is
eliminated. Thus, there must be a sequence of eliminations
ending in the elimination of σ∗_r, which does not involve any of
the new strategies, and is therefore a valid sequence of
eliminations in the original game (because all original strategies
perform the same against each new strategy). We now show
that this is true by exhausting all possibilities for the first
elimination before σ∗_r is eliminated that involves a new
strategy. None of the σ^i_r can be eliminated by a σr ∉ {σ^1_r, σ^2_r, σ^3_r},
because the σ^i_r perform better against σ^1_c. σ^1_r cannot
eliminate any other strategy, because it always performs poorer
against σ^2_c. σ^2_r and σ^3_r are equivalent from the row player's
perspective (and thus cannot eliminate each other), and
cannot eliminate any other strategy because they always
perform poorer against σ^3_c. None of the σ^j_c can be eliminated
by a σc ∉ {σ^1_c, σ^2_c, σ^3_c}, because the σ^j_c always perform
better against either σ^1_r or σ^2_r. σ^1_c cannot eliminate any other
strategy, because it always performs poorer against either
σ∗_r or σ^3_r. σ^2_c cannot eliminate any other strategy, because
it always performs poorer against σ^2_r or σ^3_r. σ^3_c cannot
eliminate any other strategy, because it always performs poorer
against σ^1_r or σ^3_r. Thus, there exists a solution to the IWD-
STRATEGY-ELIMINATION instance.
A slightly weaker version of the part of Theorem 2
concerning dominance by pure strategies only is the main result
of Gilboa et al. [7]. (Besides not proving the result for
dominance by mixed strategies, the original result was weaker
because it required utilities {0, 1, 2, 3, 4, 5, 6, 7, 8} rather than
just {0, 1} (and because of this, our Lemma 1 cannot be
applied to it to get the result for mixed strategies).)
5. (ITERATED) DOMINANCE USING
MIXED STRATEGIES WITH SMALL
SUPPORTS
When showing that a strategy is dominated by a mixed
strategy, there are several reasons to prefer exhibiting a
dominating strategy that places positive probability on as
few pure strategies as possible. First, this will reduce the
number of bits required to specify the dominating
strategy (and thus the proof of dominance can be communicated
quicker): if the dominating mixed strategy places positive
probability on only k strategies, then it can be specified
using k real numbers for the probabilities, plus k log m (where
m is the number of strategies for the player under
consideration) bits to indicate which strategies are used. Second,
the proof of dominance will be cleaner: for a dominating
mixed strategy, it is typically (always in the case of strict
dominance) possible to spread some of the probability onto
any unused pure strategy and still have a dominating
strategy, but this obscures which pure strategies are the ones that
are key in making the mixed strategy dominating. Third,
because (by the previous) the argument for eliminating the
dominated strategy is simpler and easier to understand, it is
more likely to be accepted. Fourth, the level of risk
neutrality required for the argument to work is reduced, at least in
the extreme case where dominance by a single pure strategy
can be exhibited (no risk neutrality is required here).
This motivates the following problem.
Definition 4 (MINIMUM-DOMINATING-SET). We
are given the row player's utilities of a game in normal form,
a distinguished strategy σ∗ for the row player, a specification
of whether the dominance should be strict or weak, and a
number k. We are asked whether there exists a mixed
strategy σ for the row player that places positive probability on
at most k pure strategies, and dominates σ∗ in the required
sense.
Unfortunately, this problem is NP-complete.
Theorem 3. MINIMUM-DOMINATING-SET is
NP-complete, both for strict and for weak dominance.
Proof. The problem is in NP because we can
nondeterministically choose a set of at most k strategies to give
positive probability, and decide whether we can dominate
σ∗
with these k strategies as described in Proposition 1. To
show NP-hardness, we reduce an arbitrary SET-COVER
instance (given a set S, subsets S1, S2, . . . , Sr, and a number
t, can all of S be covered by at most t of the subsets?)
to the following MINIMUM-DOMINATING-SET instance.
For every element s ∈ S, there is a pure strategy σs for
the column player. For every subset Si, there is a pure
strategy σSi for the row player. Finally, there is the
distinguished pure strategy σ∗ for the row player. The row
player's utilities are as follows: ur(σSi, σs) = t + 1 if s ∈ Si;
ur(σSi, σs) = 0 if s ∉ Si; ur(σ∗, σs) = 1 for all s ∈ S.
Finally, we let k = t. We now proceed to show that the two
instances are equivalent.
First suppose there exists a solution to the SET-COVER
instance. Without loss of generality, we can assume that
there are exactly k subsets in the cover. Then, for every
Si that is in the cover, let the dominating strategy σ place
exactly 1/k probability on the corresponding pure strategy
σSi. Now, if we let n(s) be the number of subsets in the cover
containing s (we observe that n(s) ≥ 1), then for every
strategy σs for the column player, the row player's expected
utility for playing σ when the column player is playing σs is
u(σ, σs) = (n(s)/k)(k + 1) ≥ (k + 1)/k > 1 = u(σ∗, σs). So σ strictly
(and thus also weakly) dominates σ∗
, and there exists a
solution to the MINIMUM-DOMINATING-SET instance.
Now suppose there exists a solution to the
MINIMUM-DOMINATING-SET instance. Consider the (at most k)
pure strategies of the form σSi on which the dominating
mixed strategy σ places positive probability, and let T be
the collection of the corresponding subsets Si. We claim that
T is a cover. For suppose there is some s ∈ S that is not in
any of the subsets in T . Then, if the column player plays σs,
the row player (when playing σ) will always receive utility
0-as opposed to the utility of 1 the row player would receive
for playing σ∗
, contradicting the fact that σ dominates σ∗
(whether this dominance is weak or strict). It follows that
there exists a solution to the SET-COVER instance.
On the other hand, if we require that the dominating strategy only places positive probability on a very small number of pure strategies, then it once again becomes easy to check whether a strategy is dominated. Specifically, to find out whether player i's strategy σ* is dominated by a strategy that places positive probability on only k pure strategies, we can simply check, for every subset of k of player i's pure strategies, whether there is a strategy that places positive probability only on these k strategies and dominates σ*, using Proposition 1. This requires only O(|Σ_i|^k) such checks. Thus, if k is a constant, this constitutes a polynomial-time algorithm.
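As an illustration of this brute-force procedure, here is a sketch for strict dominance. It solves one linear program per candidate support, maximizing the worst-case utility gap; this LP is in the spirit of, though not necessarily identical to, the one in Proposition 1, and all helper names are ours.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def strictly_dominated_on_support(U, u_star, support):
    # LP check: is there a mix over the rows in `support` of the row-utility
    # matrix U whose payoff strictly exceeds u_star against every column?
    # We maximize the minimum gap v; strict dominance holds iff v* > 0.
    U = np.asarray(U, dtype=float)
    m, ncols = len(support), len(u_star)
    c = np.zeros(m + 1); c[-1] = -1.0                # maximize v <=> minimize -v
    # For each column t: -(sum_s p_s U[s][t]) + v <= -u_star[t]
    A_ub = np.hstack([-U[support, :].T, np.ones((ncols, 1))])
    b_ub = -np.asarray(u_star, dtype=float)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # probabilities sum to 1
    bounds = [(0, None)] * m + [(None, None)]        # v is a free variable
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.success and -res.fun > 1e-9           # v* > 0 => strictly dominated

def dominated_by_k_support(U, star_row, k):
    # Try every k-subset of the other rows: O(|Sigma_i|^k) LP solves.
    U = np.asarray(U, dtype=float)
    rows = [r for r in range(U.shape[0]) if r != star_row]
    return any(strictly_dominated_on_support(U, U[star_row], list(s))
               for s in combinations(rows, k))

For weak dominance one would swap in the corresponding LP variant from Proposition 1; the subset enumeration itself is unchanged.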
A natural question to ask next is whether iterated strict
dominance remains computationally easy when dominating
strategies are required to place positive probability on at
most k pure strategies, where k is a small constant. (We
have already shown in Section 4 that iterated weak
dominance is hard even when k = 1, that is, only dominance by
pure strategies is allowed.) Of course, if iterated strict
dominance were path-independent under this restriction,
computational easiness would follow as it did in Section 4.
However, it turns out that this is not the case.
Observation 1. If we restrict the dominating strategies
to place positive probability on at most two pure strategies,
iterated strict dominance becomes path-dependent.
Proof. Consider the following game (each cell lists the row player's utility followed by the column player's utility):

Row 1:  7, 1   0, 0   0, 0
Row 2:  0, 0   7, 1   0, 0
Row 3:  3, 0   3, 0   0, 0
Row 4:  0, 0   0, 0   3, 1
Row 5:  1, 0   1, 0   1, 0
Let (i, j) denote the outcome in which the row player plays
the ith row and the column player plays the jth column.
Because (1, 1), (2, 2), and (4, 3) are all Nash equilibria, none of the column player's pure strategies will ever be eliminated, and neither will rows 1, 2, and 4. We now observe that
randomizing uniformly over rows 1 and 2 dominates row 3,
and randomizing uniformly over rows 3 and 4 dominates row
5. However, if we eliminate row 3 first, it becomes impossible
to dominate row 5 without randomizing over at least 3 pure
strategies.
Indeed, iterated strict dominance turns out to be hard
even when k = 3.
Theorem 4. If we restrict the dominating strategies to
place positive probability on at most three pure strategies, it
becomes NP-complete to decide whether a given strategy can
be eliminated using iterated strict dominance.
Proof. The problem is in NP because given a sequence of strategies to be eliminated, we can check in polynomial time whether this is a valid sequence of eliminations (for any strategy to be eliminated, we can check, for every subset of three other strategies, whether there is a strategy placing positive probability on only these three strategies that dominates the strategy to be eliminated, using Proposition 1).
To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V, with corresponding literals L = {+v : v ∈ V} ∪ {−v : v ∈ V}) to the following two-player game.
For every variable v ∈ V, the row player has strategies s_{+v}, s_{−v}, s_v^1, s_v^2, s_v^3, s_v^4, and the column player has strategies t_v^1, t_v^2, t_v^3, t_v^4. For every clause c ∈ C, the row player has a strategy s_c, and the column player has a strategy t_c, as well as, for every literal l occurring in c, an additional strategy t_c^l. The row player has two additional strategies s_1 and s_2. (s_2 is the strategy that we are seeking to eliminate.) Finally, the column player has one additional strategy t_1.
The utility function for the row player is given as follows (where ε is some sufficiently small number):
• u_r(s_{+v}, t_v^j) = 4 if j ∈ {1, 2}, for all v ∈ V;
• u_r(s_{+v}, t_v^j) = 1 if j ∈ {3, 4}, for all v ∈ V;
• u_r(s_{−v}, t_v^j) = 1 if j ∈ {1, 2}, for all v ∈ V;
• u_r(s_{−v}, t_v^j) = 4 if j ∈ {3, 4}, for all v ∈ V;
• u_r(s_{+v}, t) = u_r(s_{−v}, t) = 0 for all v ∈ V and t ∉ {t_v^1, t_v^2, t_v^3, t_v^4};
• u_r(s_v^i, t_v^i) = 13 for all v ∈ V and i ∈ {1, 2, 3, 4};
• u_r(s_v^i, t) = ε for all v ∈ V, i ∈ {1, 2, 3, 4}, and t ≠ t_v^i;
• u_r(s_c, t_c) = 2 for all c ∈ C;
• u_r(s_c, t) = 0 for all c ∈ C and t ≠ t_c;
• u_r(s_1, t_1) = 1 + ε;
• u_r(s_1, t) = ε for all t ≠ t_1;
• u_r(s_2, t_1) = 1;
• u_r(s_2, t_c) = 1 for all c ∈ C;
• u_r(s_2, t) = 0 for all t ∉ {t_1} ∪ {t_c : c ∈ C}.
The utility function for the column player is given as follows:
• u_c(s_v^i, t_v^i) = 1 for all v ∈ V and i ∈ {1, 2, 3, 4};
• u_c(s, t_v^i) = 0 for all v ∈ V, i ∈ {1, 2, 3, 4}, and s ≠ s_v^i;
• u_c(s_c, t_c) = 1 for all c ∈ C;
• u_c(s_l, t_c) = 1 for all c ∈ C and l ∈ L occurring in c;
• u_c(s, t_c) = 0 for all c ∈ C and s ∉ {s_c} ∪ {s_l : l ∈ c};
• u_c(s_c, t_c^l) = 1 + ε for all c ∈ C;
• u_c(s_{l′}, t_c^l) = 1 + ε for all c ∈ C and l′ ≠ l occurring in c;
• u_c(s, t_c^l) = ε for all c ∈ C and s ∉ {s_c} ∪ {s_{l′} : l′ ∈ c, l′ ≠ l};
• u_c(s_2, t_1) = 1;
• u_c(s, t_1) = 0 for all s ≠ s_2.
We now show that the two instances are equivalent. First, suppose that there is a solution to the satisfiability instance. Then, consider the following sequence of eliminations in our game:

1. For every variable v that is set to true in the satisfying assignment, eliminate s_{+v} with the mixed strategy σ_r that places probability 1/3 on s_{−v}, probability 1/3 on s_v^1, and probability 1/3 on s_v^2. (The expected utility of playing σ_r against t_v^1 or t_v^2 is 14/3 > 4; against t_v^3 or t_v^4, it is 4/3 > 1; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)

2. Similarly, for every variable v that is set to false in the satisfying assignment, eliminate s_{−v} with the mixed strategy σ_r that places probability 1/3 on s_{+v}, probability 1/3 on s_v^3, and probability 1/3 on s_v^4. (The expected utility of playing σ_r against t_v^1 or t_v^2 is 4/3 > 1; against t_v^3 or t_v^4, it is 14/3 > 4; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)

3. For every c ∈ C, eliminate t_c with any t_c^l for which l was set to true in the satisfying assignment. (This is a valid dominance because t_c^l performs better than t_c against any strategy other than s_l, and we eliminated s_l in step 1 or in step 2.)

4. Finally, eliminate s_2 with s_1. (This is a valid dominance because s_1 performs better than s_2 against any strategy other than those in {t_c : c ∈ C}, which we eliminated in step 3.)

Hence, there is an elimination path that eliminates s_2.
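As a sanity check on the arithmetic in steps 1 and 2, the expected payoffs of the eliminating mixture can be evaluated directly (our own script; eps is a concrete stand-in for the paper's ε):

# Step 1: mix 1/3 each of s_{-v}, s_v^1, s_v^2 against the columns that
# matter for dominating s_{+v}.
eps = 1e-6
print((1 + 13 + eps) / 3)   # vs t_v^1 (t_v^2 is symmetric): ~14/3 > 4
print((4 + eps + eps) / 3)  # vs t_v^3 or t_v^4: ~4/3 > 1
print((0 + eps + eps) / 3)  # vs any other column: 2*eps/3 > 0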
Now, suppose that there is an elimination path that eliminates s_2. The strategy that eventually dominates s_2 must place most of its probability on s_1, because s_1 is the only other strategy that performs well against t_1, which cannot be eliminated before s_2. But s_1 performs significantly worse than s_2 against any strategy t_c with c ∈ C, so it follows that all these strategies must be eliminated first. Each strategy t_c can only be eliminated by a strategy that places most of its weight on the corresponding strategies t_c^l with l ∈ c, because they are the only other strategies that perform well against s_c, which cannot be eliminated before t_c. But each strategy t_c^l performs significantly worse than t_c against s_l, so it follows that for every clause c, for one of the literals l occurring in it, s_l must be eliminated first. Now, strategies of the form t_v^j will never be eliminated because they are the unique best responses to the corresponding strategies s_v^j (which are, in turn, the best responses to the corresponding t_v^j). As a result, if strategy s_{+v} (respectively, s_{−v}) is eliminated, then its opposite strategy s_{−v} (respectively, s_{+v}) can no longer be eliminated, for the following reason. There is no other pure strategy remaining that gets a significant utility against more than one of the strategies t_v^1, t_v^2, t_v^3, t_v^4, but s_{−v} (respectively, s_{+v}) gets significant utility against all 4, and therefore cannot be dominated by a mixed strategy placing positive probability on at most 3 strategies. It follows that for each v ∈ V, at most one of the strategies s_{+v}, s_{−v} is eliminated, in such a way that for every clause c, for one of the literals l occurring in it, s_l must be eliminated. But then setting all the literals l such that s_l is eliminated to true constitutes a solution to the satisfiability instance.
In the next section, we return to the setting where there
is no restriction on the number of pure strategies on which
a dominating mixed strategy can place positive probability.
6. (ITERATED) DOMINANCE IN BAYESIAN GAMES
So far, we have focused on normal form games that are flatly represented (that is, every matrix entry is given explicitly). However, for many games, the flat representation is too large to write down explicitly, and instead, some representation that exploits the structure of the game needs to be used. Bayesian games, besides being of interest in their own right, can be thought of as a useful structured representation of normal form games, and we will study them in this section.
In a Bayesian game, each player first receives privately held preference information (the player's type) from a distribution, which determines the utility that that player receives for every outcome of (that is, vector of actions played in) the game. After receiving this type, the player plays an action based on it.^7
Definition 5. A Bayesian game is given by a set of players {1, 2, . . . , n}; and, for each player i, a (finite) set of actions A_i, a (finite) type space Θ_i with a probability distribution π_i over it, and a utility function u_i : Θ_i × A_1 × A_2 × . . . × A_n → R (where u_i(θ_i, a_1, a_2, . . . , a_n) denotes player i's utility when i's type is θ_i and each player j plays action a_j). A pure strategy in a Bayesian game is a mapping from types to actions, σ_i : Θ_i → A_i, where σ_i(θ_i) denotes the action that player i plays for type θ_i.
Any vector of pure strategies in a Bayesian game defines an (expected) utility for each player, and therefore we can translate a Bayesian game into a normal form game. In this normal form game, the notions of dominance and iterated dominance are defined as before. However, the normal form representation of the game is exponentially larger than the Bayesian representation, because each player i has |A_i|^{|Θ_i|} distinct pure strategies. Thus, any algorithm for Bayesian games that relies on expanding the game to its normal form will require exponential time. Specifically, our easiness results for normal form games do not directly transfer to this setting. In fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games.
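A minimal sketch of this blow-up (our own toy encoding of Definition 5): a pure strategy is a mapping from types to actions, and enumerating the mappings for one player already yields |A_i|^{|Θ_i|} strategies.

from itertools import product

# Toy single-player view of Definition 5: types, actions, and the
# enumeration of pure strategies (mappings from types to actions).
types = ["theta1", "theta2", "theta3"]     # |Theta_i| = 3
actions = ["a", "b"]                       # |A_i| = 2

# Each pure strategy assigns one action to every type.
pure_strategies = [dict(zip(types, choice))
                   for choice in product(actions, repeat=len(types))]

print(len(pure_strategies))  # 2**3 = 8 = |A_i| ** |Theta_i|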
Theorem 5. In a Bayesian game, it is NP-complete to decide whether a given pure strategy σ_r : Θ_r → A_r is dominated by some other pure strategy (both for strict and weak dominance), even when the row player's distribution over types is uniform.
Proof. The problem is in NP because it is easy to verify whether a candidate dominating strategy is indeed a dominating strategy. To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a set of clauses C using variables from V) to the following Bayesian game. Let the row player's action set be A_r = {t, f, 0} and let the column player's action set be A_c = {a_c : c ∈ C}. Let the row player's type set be Θ_r = {θ_v : v ∈ V}, with a distribution π_r that is uniform. Let the row player's utility function be as follows:
• u_r(θ_v, 0, a_c) = 0 for all v ∈ V and c ∈ C;
• u_r(θ_v, b, a_c) = |V| for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b satisfies c;
• u_r(θ_v, b, a_c) = −1 for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b does not satisfy c.
^7 In general, a player can also receive a signal about the other players' preferences, but we will not concern ourselves with that here.
Let the pure strategy to be dominated be the one that plays 0 for every type. We show that the strategy is dominated by a pure strategy if and only if there is a solution to the satisfiability instance.
First, suppose there is a solution to the satisfiability instance. Then, let σ_r^d be given by: σ_r^d(θ_v) = t if v is set to true in the solution to the satisfiability instance, and σ_r^d(θ_v) = f otherwise. Then, against any action a_c by the column player, there is at least one type θ_v such that either +v ∈ c and σ_r^d(θ_v) = t, or −v ∈ c and σ_r^d(θ_v) = f. Thus, the row player's expected utility against action a_c is at least |V|/|V| − (|V| − 1)/|V| = 1/|V| > 0. So, σ_r^d is a dominating strategy.
Now, suppose there is a dominating pure strategy σ_r^d. This dominating strategy must play t or f for at least one type. Thus, against any a_c by the column player, there must be at least some type θ_v for which u_r(θ_v, σ_r^d(θ_v), a_c) > 0. That is, there must be at least one variable v such that setting v to σ_r^d(θ_v) satisfies c. But then, setting each v to σ_r^d(θ_v) must satisfy all the clauses. So a satisfying assignment exists.
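The reduction is easy to mechanize. The sketch below (our own naming; clauses are sets of signed literals such as "+x" and "-y") fills in the row player's utility table exactly as in the proof.

# Sketch of the SAT -> Bayesian game reduction from the proof of Theorem 5.
def build_bayesian_game(variables, clauses):
    # Returns u[(theta_v, b, c_index)] for b in {'t', 'f', '0'}.
    n = len(variables)
    u = {}
    for v in variables:
        for ci, clause in enumerate(clauses):
            u[(v, "0", ci)] = 0                     # playing 0 always gives 0
            for b, lit in (("t", "+" + v), ("f", "-" + v)):
                # |V| if setting v this way satisfies the clause, else -1
                u[(v, b, ci)] = n if lit in clause else -1
    return u

# Example: (x or y) and (not x or y); the assignment x=f, y=t satisfies it,
# so the pure strategy playing f for theta_x and t for theta_y dominates
# the all-0 strategy (expected utility >= 1/|V| against every a_c).
u = build_bayesian_game(["x", "y"], [{"+x", "+y"}, {"-x", "+y"}])
print(u[("y", "t", 0)], u[("x", "f", 1)])  # both 2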
However, it turns out that we can modify the linear
programs from Proposition 1 to obtain a polynomial time
algorithm for checking whether a strategy is dominated by a
mixed strategy in Bayesian games.
Theorem 6. In a Bayesian game, it can be decided in
polynomial time whether a given (possibly mixed) strategy
σr is dominated by some other mixed strategy, using linear
programming (both for strict and weak dominance).
Proof. We can modify the linear programs presented in Proposition 1 as follows. For strict dominance, again assuming without loss of generality that all the utilities in the game are positive, use the following linear program (in which p_r^{σ_r}(θ_r, a_r) is the probability that σ_r, the strategy to be dominated, places on a_r for type θ_r):

minimize Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} p_r(θ_r, a_r)
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r^{σ_r}(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) ≤ 1.

Assuming that π(θ_r) > 0 for all θ_r ∈ Θ_r, this program will return an objective value smaller than |Θ_r| if and only if σ_r is strictly dominated, by reasoning similar to that done in Proposition 1.
For weak dominance, use the following linear program:

maximize Σ_{a_c ∈ A_c} (Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) − Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r^{σ_r}(θ_r, a_r))
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r^{σ_r}(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) = 1.

This program will return an objective value greater than 0 if and only if σ_r is weakly dominated, by reasoning similar to that done in Proposition 1.
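The strict-dominance program above translates directly into a solver call. Here is a sketch using scipy (the variable packing and names are ours; utilities are assumed positive, as in the proof):

import numpy as np
from scipy.optimize import linprog

def bayesian_strictly_dominated(u, pi, p_sigma):
    # u: array [theta, a_r, a_c] of positive row-player utilities;
    # pi: type distribution over theta; p_sigma: [theta, a_r] probabilities
    # of the strategy to be dominated. Mirrors the strict-dominance LP of
    # Theorem 6: minimize the total mass of p_r(theta, a_r) subject to the
    # expected-utility constraints and per-type sums at most 1.
    T, A, C = u.shape
    nvar = T * A                                   # variables p_r(theta, a_r)
    c = np.ones(nvar)                              # minimize total probability mass
    # Dominance constraints: for each a_c, the new strategy's expected
    # utility must be at least sigma_r's (>= becomes <= after negation).
    W = (pi[:, None, None] * u).reshape(nvar, C)   # pi(theta) * u(theta, a_r, a_c)
    A_ub = [-W[:, j] for j in range(C)]
    b_ub = [-(W[:, j] @ p_sigma.reshape(nvar)) for j in range(C)]
    # Per-type constraints: sum_a p_r(theta, a) <= 1.
    for t in range(T):
        row = np.zeros(nvar); row[t * A:(t + 1) * A] = 1.0
        A_ub.append(row); b_ub.append(1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * nvar)
    return res.success and res.fun < T - 1e-9      # objective < |Theta| => dominated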
We now turn to iterated dominance in Bayesian games. Naïvely, one might argue that iterated dominance in Bayesian games always requires an exponential number of steps when a significant fraction of the game's pure strategies can be eliminated, because there are exponentially many pure strategies. However, this is not a very strong argument because oftentimes we can eliminate exponentially many pure strategies in one step. For example, if for some type θ_r ∈ Θ_r we have, for all a_c ∈ A_c, that u(θ_r, a_r^1, a_c) > u(θ_r, a_r^2, a_c), then any pure strategy for the row player which plays action a_r^2 for type θ_r is dominated (by the strategy that plays action a_r^1 for type θ_r instead), and there are exponentially many (|A_r|^{|Θ_r|−1}) such strategies. It is therefore conceivable that we need only polynomially many eliminations of collections of a player's strategies. However, the following theorem shows that this is not the case, by giving an example where an exponential number of iterations (that is, alternations between the players in eliminating strategies) is required. (We emphasize that this is not a result about computational complexity.)
Theorem 7. Even in symmetric 3-player Bayesian games,
iterated dominance by pure strategies can require an
exponential number of iterations (both for strict and weak
dominance), even with only three actions per player.
Proof. Let each player i ∈ {1, 2, 3} have n + 1 types θ_i^1, θ_i^2, . . . , θ_i^{n+1}. Let each player i have 3 actions a_i, b_i, c_i, and let the utility function of each player be defined as follows. (In the below, i + 1 and i + 2 are shorthand for i + 1 (mod 3) and i + 2 (mod 3) when used as player indices. Also, −∞ can be replaced by a sufficiently negative number. Finally, δ and ε should be chosen to be very small (even compared to 2^{−(n+1)}), and ε should be more than twice as large as δ.)
• u_i(θ_i^1; a_i, c_{i+1}, c_{i+2}) = −1;
• u_i(θ_i^1; a_i, s_{i+1}, s_{i+2}) = 0 for s_{i+1} ≠ c_{i+1} or s_{i+2} ≠ c_{i+2};
• u_i(θ_i^1; b_i, s_{i+1}, s_{i+2}) = −ε for s_{i+1} ≠ a_{i+1} and s_{i+2} ≠ a_{i+2};
• u_i(θ_i^1; b_i, s_{i+1}, s_{i+2}) = −∞ for s_{i+1} = a_{i+1} or s_{i+2} = a_{i+2};
• u_i(θ_i^1; c_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2};
• u_i(θ_i^j; a_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ_i^j; b_i, s_{i+1}, s_{i+2}) = −ε for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ_i^j; c_i, s_{i+1}, c_{i+2}) = δ − ε − 1/2 for all s_{i+1} when j > 1;
• u_i(θ_i^j; c_i, s_{i+1}, s_{i+2}) = δ − ε for all s_{i+1} and s_{i+2} ≠ c_{i+2} when j > 1.
Let the distribution over each player's types be given by p(θ_i^j) = 2^{−j} (with the exception that p(θ_i^2) = 2^{−2} + 2^{−(n+1)}).
We will be interested in eliminating strategies of the following form: play b_i for type θ_i^1, and play one of b_i or c_i otherwise. Because the utility function is the same for any type θ_i^j with j > 1, these strategies are effectively defined by the total probability that they place on c_i,^8 which is t_i^2(2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t_i^j 2^{−j}, where t_i^j = 1 if player i plays c_i for type θ_i^j, and 0 otherwise. This probability is different for any two different strategies of the given form, and we have exponentially many different strategies of the given form. For any probability q which can be expressed as t^2(2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t^j 2^{−j} (with all t^j ∈ {0, 1}), let σ_i(q) denote the (unique) strategy of the given form for player i which places a total probability of q on c_i. Any strategy that plays c_i for type θ_i^1 or a_i for some type θ_i^j with j > 1 can immediately be eliminated. We will show that, after that, we must eliminate the strategies σ_i(q) with high q first, slowly working down to those with lower q.

^8 Note that the strategies are still pure strategies; the probability placed on an action by a strategy here is simply the sum of the probabilities of the types for which the strategy chooses that action.
Claim 1: If σ_{i+1}(q′) and σ_{i+2}(q′) have not yet been eliminated, and q < q′, then σ_i(q) cannot yet be eliminated. Proof: First, we show that no strategy σ_i(p) can eliminate σ_i(q). Against σ_{i+1}(q′), σ_{i+2}(q′), the utility of playing σ_i(p) is −ε + p·δ − p·q′/2. Thus, when q′ = 0, it is best to set p as high as possible (and we note that σ_{i+1}(0) and σ_{i+2}(0) have not been eliminated), but when q′ > 0, it is best to set p as low as possible because δ < q′/2. Thus, whether p > q or p < q, σ_i(q) will always do strictly better than σ_i(p) against some remaining opponent strategies. Hence, no strategy σ_i(p) can eliminate σ_i(q). The only other pure strategies that could dominate σ_i(q) are strategies that play a_i for type θ_i^1, and b_i or c_i for all other types. Let us take such a strategy and suppose that it plays c_i with total probability p. Against σ_{i+1}(q′), σ_{i+2}(q′) (which have not yet been eliminated), the utility of playing this strategy is −(q′)²/2 − ε/2 + p·δ − p·q′/2. On the other hand, playing σ_i(q) gives −ε + q·δ − q·q′/2. Because q′ > q, we have −(q′)²/2 < −q·q′/2, and because δ and ε are small, it follows that σ_i(q) receives a higher utility. Therefore, no strategy dominates σ_i(q), proving the claim.
Claim 2: If for all q′ > q, σ_{i+1}(q′) and σ_{i+2}(q′) have been eliminated, then σ_i(q) can be eliminated. Proof: Consider the strategy for player i that plays a_i for type θ_i^1, and b_i for all other types (call this strategy σ_i′); we claim σ_i′ dominates σ_i(q). First, if either of the other players k plays a_k for θ_k^1, then σ_i′ performs better than σ_i(q) (which receives −∞ in some cases). Because the strategies for player k that play c_k for type θ_k^1, or a_k for some type θ_k^j with j > 1, have already been eliminated, all that remains to check is that σ_i′ performs better than σ_i(q) whenever both of the other two players play strategies of the following form: play b_k for type θ_k^1, and play one of b_k or c_k otherwise. We note that among these strategies, there are none left that place probability greater than q on c_k. Letting q_k denote the probability with which player k plays c_k, the expected utility of playing σ_i′ is −q_{i+1}·q_{i+2}/2 − ε/2. On the other hand, the utility of playing σ_i(q) is −ε + q·δ − q·q_{i+2}/2. Because q_{i+1} ≤ q, the difference between these two expressions is at least ε/2 − δ, which is positive. It follows that σ_i′ dominates σ_i(q).
From Claim 2, it follows that all strategies of the form σ_i(q) will eventually be eliminated. However, Claim 1 shows that we cannot go ahead and eliminate multiple such strategies for one player, unless at least one other player simultaneously keeps up in the eliminated strategies: every time a σ_i(q) is eliminated such that σ_{i+1}(q) and σ_{i+2}(q) have not yet been eliminated, we need to eliminate one of the latter two strategies before any σ_i(q′) with q′ < q can be eliminated; that is, we need to alternate between players. Because there are exponentially many strategies of the form σ_i(q), it follows that iterated elimination will require exponentially many iterations to complete.
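For intuition about how many strategies of the given form there are, the following sketch (our own, using exact rational arithmetic) enumerates the achievable values of q for small n and confirms they are pairwise distinct, so each player has 2^n such strategies in this construction:

from itertools import product
from fractions import Fraction

def q_values(n):
    # All total probabilities q on c_i achievable by strategies of the
    # given form: bits t_2, ..., t_{n+1} select which types play c_i.
    w = [Fraction(1, 4) + Fraction(1, 2 ** (n + 1))]       # weight of theta^2
    w += [Fraction(1, 2 ** j) for j in range(3, n + 2)]    # theta^3..theta^{n+1}
    return {sum(t * wj for t, wj in zip(bits, w))
            for bits in product((0, 1), repeat=len(w))}

for n in (2, 3, 4, 5):
    print(n, len(q_values(n)))  # 2**n distinct values: 4, 8, 16, 32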
It follows that an efficient algorithm for iterated
dominance (strict or weak) by pure strategies in Bayesian games,
if it exists, must somehow be able to perform (at least part
of) many iterations in a single step of the algorithm (because
if each step only performed a single iteration, we would need
exponentially many steps). Interestingly, Knuth et al. [11]
argue that iterated dominance appears to be an inherently
sequential problem (in light of their result that iterated very
weak dominance is P-complete, that is, apparently not
efficiently parallelizable), suggesting that aggregating many
iterations may be difficult.
7. CONCLUSIONS
While the Nash equilibrium solution concept is studied
more and more intensely in our community, the perhaps
more elementary concept of (iterated) dominance has
received much less attention. In this paper we studied various
computational aspects of this concept.
We first studied both strict and weak dominance (not
iterated), and showed that checking whether a given strategy
is dominated by some mixed strategy can be done in
polynomial time using a single linear program solve. We then
moved on to iterated dominance. We showed that
determining whether there is some path that eliminates a given
strategy is NP-complete with iterated weak dominance. This
allowed us to also show that determining whether there is a
path that leads to a unique solution is NP-complete. Both
of these results hold both with and without dominance by
mixed strategies. (A weaker version of the second result
(only without dominance by mixed strategies) was already
known [7].) Iterated strict dominance, on the other hand,
is path-independent (both with and without dominance by
mixed strategies) and can therefore be done in polynomial
time.
We then studied what happens when the dominating
strategy is allowed to place positive probability on only a few pure
strategies. First, we showed that finding the dominating
strategy with minimum support size is NP-complete (both
for strict and weak dominance). Then, we showed that
iterated strict dominance becomes path-dependent when there
is a limit on the support size of the dominating strategies,
and that deciding whether a given strategy can be
eliminated by iterated strict dominance under this restriction is
NP-complete (even when the limit on the support size is 3).
Finally, we studied dominance and iterated dominance in
Bayesian games, as an example of a concise representation
language for normal form games that is interesting in its own
right. We showed that, unlike in normal form games,
deciding whether a given pure strategy is dominated by another
pure strategy in a Bayesian game is NP-complete (both with
strict and weak dominance); however, deciding whether a
strategy is dominated by some mixed strategy can still be
done in polynomial time with a single linear program solve
(both with strict and weak dominance). Finally, we showed
that iterated dominance using pure strategies can require an
exponential number of iterations in a Bayesian game (both
with strict and weak dominance).
There are various avenues for future research. First, there
is the open question of whether it is possible to complete
iterated dominance in Bayesian games in polynomial time
(even though we showed that an exponential number of
alternations between the players in eliminating strategies is
sometimes required). Second, we can study computational
aspects of (iterated) dominance in concise representations
of normal form games other than Bayesian games-for
example, in graphical games [9] or local-effect/action graph
games [12, 2]. (How to efficiently perform iterated very weak
dominance has already been studied for partially observable
stochastic games [8].) Finally, we can ask whether some of
the algorithms we described (such as the one for iterated
strict dominance with mixed strategies) can be made faster.
8. REFERENCES
[1] Krzysztof R. Apt. Uniform proofs of order independence for
various strategy elimination procedures. Contributions to
Theoretical Economics, 4(1), 2004.
[2] Nivan A. R. Bhat and Kevin Leyton-Brown. Computing
Nash equilibria of action-graph games. In UAI, 2004.
[3] Ben Blum, Christian R. Shelton, and Daphne Koller. A
continuation method for Nash equilibria in structured
games. In IJCAI, 2003.
[4] Vincent Conitzer and Tuomas Sandholm. Complexity
results about Nash equilibria. In IJCAI, pages 765-771,
2003.
[5] Drew Fudenberg and Jean Tirole. Game Theory. MIT
Press, 1991.
[6] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. On the order
of eliminating dominated strategies. Operations Research
Letters, 9:85-89, 1990.
[7] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. The
complexity of eliminating dominated strategies.
Mathematics of Operation Research, 18:553-565, 1993.
[8] Eric A. Hansen, Daniel S. Bernstein, and Shlomo
Zilberstein. Dynamic programming for partially observable
stochastic games. In AAAI, pages 709-715, 2004.
[9] Michael Kearns, Michael Littman, and Satinder Singh.
Graphical models for game theory. In UAI, 2001.
[10] Leonid Khachiyan. A polynomial algorithm in linear
programming. Soviet Math. Doklady, 20:191-194, 1979.
[11] Donald E. Knuth, Christos H. Papadimitriou, and John N. Tsitsiklis. A note on strategy elimination in bimatrix games. Operations Research Letters, 7(3):103-107, 1988.
[12] Kevin Leyton-Brown and Moshe Tennenholtz. Local-effect
games. In IJCAI, 2003.
[13] Richard Lipton, Evangelos Markakis, and Aranyak Mehta.
Playing large games using simple strategies. In ACM-EC,
pages 36-41, 2003.
[14] Michael Littman and Peter Stone. A polynomial-time Nash
equilibrium algorithm for repeated games. In ACM-EC,
pages 48-54, 2003.
[15] Leslie M. Marx and Jeroen M. Swinkels. Order
independence for iterated weak dominance. Games and
Economic Behavior, 18:219-245, 1997.
[16] Leslie M. Marx and Jeroen M. Swinkels. Corrigendum,
order independence for iterated weak dominance. Games
and Economic Behavior, 31:324-329, 2000.
[17] Andreu Mas-Colell, Michael Whinston, and Jerry R. Green.
Microeconomic Theory. Oxford University Press, 1995.
[18] Roger Myerson. Game Theory: Analysis of Conflict.
Harvard University Press, Cambridge, 1991.
[19] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[20] Christos Papadimitriou. Algorithms, games and the
Internet. In STOC, pages 749-753, 2001.
[21] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple
search methods for finding a Nash equilibrium. In AAAI,
pages 664-669, 2004.
train_J-52 | Hidden-Action in Multi-Hop Routing | In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action. | 1. INTRODUCTION
Endpoints wishing to communicate over a multi-hop network
rely on intermediate nodes to forward packets from the sender to
the receiver. In settings where the intermediate nodes are
independent agents (such as individual nodes in ad-hoc and
peer-topeer networks or autonomous systems on the Internet), this poses
an incentive problem; the intermediate nodes may incur significant
communication and computation costs in the forwarding of packets
without deriving any direct benefit from doing so. Consequently, a
rational (i.e., utility maximizing) intermediate node may choose to
forward packets at a low priority or not forward the packets at all.
This rational behavior may lead to suboptimal system performance.
The endpoints can provide incentives, e.g., in the form of
payments, to encourage the intermediate nodes to forward their
packets. However, the actions of the intermediate nodes are often hidden
from the endpoints. In many cases, the endpoints can only observe
whether or not the packet has reached the destination, and cannot
attribute failure to a specific node on the path. Even if some form
of monitoring mechanism allows them to pinpoint the location of
the failure, they may still be unable to attribute the cause of failure
to either the deliberate action of the intermediate node, or to some
external factors beyond the control of the intermediate node, such
as network congestion, channel interference, or data corruption.
The problem of hidden action is hardly unique to networks. Also
known as moral hazard, this problem has long been of interest in
the economics literature concerning information asymmetry,
incentive and contract theory, and agency theory. We follow this
literature by formalizing the problem as a principal-agent model, where
multiple agents making sequential hidden actions [17, 27].
Our results are threefold. First, we show that it is possible to
design contracts to induce cooperation when intermediate nodes
can choose to forward or drop packets, as well as when the nodes
can choose to forward packets with different levels of quality of
service. If the path and transit costs are known prior to
transmission, the principal achieves first best solution, and can implement
the contracts either directly with each intermediate node or
recursively through the network (each node making a contract with the
following node) without any loss in utility. Second, we find that
introducing per-hop monitoring has no impact on the principal"s
expected utility in equilibrium. For a principal who wishes to induce
an equilibrium in which all intermediate nodes cooperate, its
expected total payment is the same with or without monitoring.
However, monitoring provides a dominant strategy equilibrium, which
is a stronger solution concept than the Nash equilibrium achievable
in the absence of monitoring. Third, we show that in the absence
of a priori information about transit costs on the packet forwarding
path, it is possible to generalize existing mechanisms to overcome
scenarios that involve both hidden-information and hidden-action.
In these scenarios, the principal pays a premium compared to
scenarios with known transit costs.
2. BASELINE MODEL
We consider a principal-agent model, where the principal is a
pair of communication endpoints who wish to communicate over
a multi-hop network, and the agents are the intermediate nodes
capable of forwarding packets between the endpoints. The
principal (who in practice can be either the sender, the receiver, or
117
both) makes individual take-it-or-leave-it offers (contracts) to the
agents. If the contracts are accepted, the agents choose their
actions sequentially to maximize their expected payoffs based on the
payment schedule of the contract. When necessary, agents can in
turn make subsequent take-it-or-leave-it offers to their downstream
agents.
We assume that all participants are risk neutral and that standard
assumptions about the global observability of the final outcome and
the enforceability of payments by guaranteeing parties hold.
For simplicity, we assume that each agent has only two possible
actions; one involving significant effort and one involving little
effort. We denote the action choice of agent i by ai ∈ {0, 1}, where
ai = 0 and ai = 1 stand for the low-effort and high-effort actions,
respectively. Each action is associated with a cost (to the agent)
C(ai), and we assume:
C(ai = 1) > C(ai = 0)
At this stage, we assume that all nodes have the same C(ai) for
presentation clarity, but we relax this assumption later. Without loss
of generality we normalize the C(ai = 0) to be zero, and denote
the high-effort cost by c, so C(ai = 0) = 0 and C(ai = 1) = c.
The utility of agent i, denoted by ui, is a function of the payment
it receives from the principal (si), the action it takes (ai), and the
cost it incurs (ci), as follows:
ui(si, ci, ai) = si − aici
The outcome is denoted by x ∈ {xG
, xB
}, where xG
stands
for the Good outcome in which the packet reaches the
destination, and xB
stands for the Bad outcome in which the packet
is dropped before it reaches the destination. The outcome is a
function of the vector of actions taken by the agents on the path,
a = (a1, ..., an) ∈ {0, 1}n
, and the loss rate on the channels, k.
The benefit of the sender from the outcome is denoted by w(x),
where:
w(xG
) = wG
; and w(xB
) = wB
= 0
The utility of the sender is consequently:
u(x, S) = w(x) − S
where: S =
Pn
i=1 si
A sender who wishes to induce an equilibrium in which all nodes
engage in the high-effort action needs to satisfy two constraints for
each agent i:
(IR) Individual rationality (participation constraint)1
: the
expected utility from participation should (weakly) exceed its
reservation utility (which we normalize to 0).
(IC) Incentive compatibility: the expected utility from exerting
high-effort should (weakly) exceed its expected utility from
exerting low-effort.
In some network scenarios, the topology and costs are common
knowledge. That is, the sender knows in advance the path that its
packet will take and the costs on that path. In other routing
scenarios, the sender does not have this a priori information. We show
that our model can be applied to both scenarios with known and
unknown topologies and costs, and highlight the implications of each
scenario in the context of contracts. We also distinguish between
direct contracts, where the principal signs an individual contract
1We use the notion of ex ante individual rationality, in which the agents
choose to participate before they know the state of the system.
S Dn1
Source Destination
n intermediate nodes
Figure 1: Multi-hop path from sender to destination.
Figure 2: Structure of the multihop routing game under known
topology and transit costs.
with each node, and recursive contracts, where each node enters a
contractual relationship with its downstream node.
The remainder of this paper is organized as follows. In Section 3
we consider agents who decide whether to drop or forward
packets with and without monitoring when the transit costs are common
knowledge. In Section 4, we extend the model to scenarios with
unknown transit costs. In Section 5, we distinguish between recursive
and direct contracts and discuss their relationship. In Section 6, we
show that the model applies to scenarios in which agents choose
between different levels of quality of service. We consider Internet
routing as a case study in Section 7. In Section 8 we present related
work, and Section 9 concludes the paper.
3. KNOWN TRANSIT COSTS
In this section we analyze scenarios in which the principal knows
in advance the nodes on the path to the destination and their costs,
as shown in figure 1. We consider agents who decide whether to
drop or forward packets, and distinguish between scenarios with
and without monitoring.
3.1 Drop versus Forward without Monitoring
In this scenario, the agents decide whether to drop (a = 0) or
forward (a = 1) packets. The principal uses no monitoring to
observe per-hop outcomes. Consequently, the principal makes the
payment schedule to each agent contingent on the final outcome, x,
as follows:
si(x) = (sB
i , sG
i )
where:
sB
i = si(x = xB
)
sG
i = si(x = xG
)
The timeline of this scenario is shown in figure 2. Given a
perhop loss rate of k, we can express the probability that a packet is
successfully delivered from node i to its successor i + 1 as:
Pr(xG
i→i+1|ai) = (1 − k)ai (1)
where xG
i→j denotes a successful transmission from node i to j.
PROPOSITION 3.1. Under the optimal contract that induces
high-effort behavior from all intermediate nodes in the Nash
Equi118
librium2
, the expected payment to each node is the same as its
expected cost, with the following payment schedule:
sB
i = si(x = xB
) = 0 (2)
sG
i = si(x = xG
) =
c
(1 − k)n−i+1
(3)
PROOF. The principal needs to satisfy the IC and IR
constraints for each agent i, which can be expressed as follows:
(IC)Pr(xG
|aj≥i = 1)sG
i + Pr(xB
|aj≥i = 1)sB
i − c ≥
Pr(xG
|ai = 0, aj>i = 1)sG
i + Pr(xB
|ai = 0, aj>i = 1)sB
i
(4)
This constraint says that the expected utility from forwarding is
greater than or equal to its expected utility from dropping, if all
subsequent nodes forward as well.
(IR)Pr(xG
S→i|aj<i = 1)(Pr(xG
|aj≥i = 1)sG
i +
Pr(xB
|aj≥i = 1)sB
i − c) + Pr(xB
S→i|aj<i = 1)sB
i ≥ 0
(5)
This constraint says that the expected utility from participating is
greater than or equal to zero (reservation utility), if all other nodes
forward.
The above constraints can be expressed as follows, based on
Eq. 1:
(IC) : (1 − k)n−i+1
sG
i + (1 − (1 − k)n−i+1
)sB
i − c ≥ sB
i
(IR) : (1−k)i
((1−k)n−i+1
sG
i +(1−(1−k)n−i+1
)sB
i −c)+
(1 − (1 − k)i
)sB
i ≥ 0
It is a standard result that both constraints bind at the optimal
contract (see [23]). Solving the two equations, we obtain the
solution that is presented in Eqs. 2 and 3.
We next prove that the expected payment to a node equals its
expected cost in equilibrium. The expected cost of node i is its
transit cost multiplied by the probability that it faces this cost (i.e.,
the probability that the packet reaches node i), which is: (1 − k)i
c.
The expected payment that node i receives is:
Pr(xG
)sG
i + Pr(xB
)sB
i = (1 − k)n+1 c
(1 − k)n−i+1
= (1 − k)i
c
Note that the expected payment to a node decreases as the node
gets closer to the destination due to the asymmetric distribution
of risk. The closer the node is to the destination, the lower the
probability that a packet will fail to reach the destination, resulting
in the low payment being made to the node.
The expected payment by the principal is:
E[S] = (1 − k)n+1
nX
i=1
sG
i + (1 − (1 − k)n+1
)
nX
i=1
sB
i
= (1 − k)n+1
nX
i=1
ci
(1 − k)n−i+1
(6)
The expected payment made by the principal depends not only
on the total cost, but also the number of nodes on the path.
PROPOSITION 3.2. Given two paths with respective lengths of
n1 and n2 hops, per-hop transit costs of c1 and c2, and per-hop
loss rates of k1 and k2, such that:
2Since transit nodes perform actions sequentially, this is really a
subgameperfect equilibrium (SPE), but we will refer to it as Nash equilibrium in the
remainder of the paper.
Figure 3: Two paths of equal total costs but different lengths and
individual costs.
• c1n1 = c2n2 (equal total cost)
• (1 − k1)n1+1
= (1 − k2)n2+1
(equal expected benefit)
• n1 < n2 (path 1 is shorter than path 2)
the expected total payment made by the principal is lower on the
shorter path.
PROOF. The expected payment in path j is
E[S]j =
nj
X
i=1
cj (1 − kj )i
= cj (1 − kj )
1 − (1 − kj)nj
kj
So, we have to show that:
c1(1 − k1)
1 − (1 − k1)n1
k1
> c2(1 − k2)
1 − (1 − k2)n2
k2
Let M = c1n1 = c2n2 and N = (1 − k1)n1+1
= (1 − k2)n2+1
.
We have to show that
MN
1
n1+1 (1 − N
n1
n1+1 )
n1(1 − N
1
n1+1 )
<
MN
1
n2+1 (1 − N
n2
n2+1 )
n2(1 − N
1
n2+1 )
(7)
Let
f =
N
1
n+1 (1 − N
n
n+1 )
n(1 − N
1
n+1 )
Then, it is enough to show that f is monotonically increasing in n
∂f
∂n
=
g(N, n)
h(N, n)
where:
g(N, n) = −((ln(N)n − (n + 1)2
)(N
1
n+1
− N
n+2
n+1 ) − (n + 1)2
(N + N
2
n+1 ))
and
h(N, n) = (n + 1)2
n2
(−1 + N
1
n+1 )2
but h(N, n) > 0 ∀N, n, therefore, it is enough to show that
g(N, n) > 0. Because N ∈ (0, 1): (i) ln(N) < 0, and (ii)
N
1
n+1 > N
n+2
n+1 . Therefore, g(N, n) > 0 ∀N, n.
This means that, ceteris paribus, shorter paths should always be
preferred over longer ones.
For example, consider the two topologies presented in Figure 3.
While the paths are of equal total cost, the total expected payment
by the principal is different. Based on Eqs. 2 and 3, the expected
total payment for the top path is:
E[S] = Pr(xG
)(sG
A + sG
B)
=
„
c1
(1 − k1)2
+
c1
1 − k1
«
(1 − k1)3
(8)
119
while the expect total payment for the bottom path is:
E[S] = Pr(xG
)(sG
A + sG
B + sG
C )
= (
c2
(1 − k2)3
+
c2
(1 − k2)2
+
c2
1 − k2
)(1 − k2)4
For n1 = 2, c1 = 1.5, k1 = 0.5, n2 = 3, c2 = 1, k2 = 0.405,
we have equal total cost and equal expected benefit, but E[S]1 =
0.948 and E[S]2 = 1.313.
3.2 Drop versus Forward with Monitoring
Suppose the principal obtains per-hop monitoring information.3
Per-hop information broadens the set of mechanisms the principal
can use. For example, the principal can make the payment schedule
contingent on arrival to the next hop instead of arrival to the final
destination. Can such information be of use to a principal wishing
to induce an equilibrium in which all intermediate nodes forward
the packet?
PROPOSITION 3.3. In the drop versus forward model, the
principal derives the same expected utility whether it obtains per-hop
monitoring information or not.
PROOF. The proof to this proposition is already implied in the
findings of the previous section. We found that in the absence of
per-hop information, the expected cost of each intermediate node
equals its expected payment. In order to satisfy the IR constraint, it
is essential to pay each intermediate node an expected amount of at
least its expected cost; otherwise, the node would be better-off not
participating. Therefore, no other payment scheme can reduce the
expected payment from the principal to the intermediate nodes. In
addition, if all nodes are incentivized to forward packets, the
probability that the packet reaches the destination is the same in both
scenarios, thus the expected benefit of the principal is the same.
Indeed, we have found that even in the absence of per-hop monitoring
information, the principal achieves first-best solution.
To convince the reader that this is indeed the case, we provide an
example of a mechanism that conditions payments on arrival to the
next hop. This is possible only if per-hop monitoring information
is provided. In the new mechanism, the principal makes the
payment schedule contingent on whether the packet has reached the
next hop or not. That is, the payment to node i is sG
i if the packet
has reached node i + 1, and sB
i otherwise. We assume costless
monitoring, giving us the best case scenario for the use of
monitoring. As before, we consider a principal who wishes to induce an
equilibrium in which all intermediate nodes forward the packet.
The expected utility of the principal is the difference between its
expected benefit and its expected payment. Because the expected
benefit when all nodes forward is the same under both scenarios,
we only need to show that the expected total payment is identical as
well. Under the monitoring mechanism, the principal has to satisfy
the following constraints:
(IC)Pr(xG
i→i+1|ai = 1)sG
+ Pr(xB
i→i+1|ai = 1)sB
− c ≥
Pr(xG
i→i+1|ai = 0)sG
+ Pr(xB
i→i+1|ai = 0)sB
(9)
(IR)Pr(xG
S→i|aj<i = 1)(Pr(xG
i→i+1|ai = 1)sG
+ Pr(xB
i→i+1|ai = 1)sB
− c) ≥ 0
(10)
3For a recent proposal of an accountability framework that provides such
monitoring information see [4].
These constraints can be expressed as follows:
(IC) : (1 − k)sG
+ ksB
− c ≥ s0
(IR) : (1 − k)i
((1 − k)sG
+ ksB
− c) ≥ 0
The two constraints bind at the optimal contract as before, and
we get the following payment schedule:
sB
= 0
sG
=
c
1 − k
The expected total payment under this scenario is:
E[S] =
nX
i=1
((1 − k)i
(sB
+ (i − 1)sG
)) + (1 − k)n+1
nsG
= (1 − k)n+1
nX
i=1
ci
(1 − k)n−i+1
as in the scenario without monitoring (see Equation 6.)
While the expected total payment is the same with or without
monitoring, there are some differences between the two scenarios.
First, the payment structure is different. If no per-hop monitoring
is used, the payment to each node depends on its location (i). In
contrast, monitoring provides us with n identical contracts.
Second, the solution concept used is different. If no monitoring
is used, the strategy profile of ai = 1 ∀i is a Nash equilibrium,
which means that no agent has an incentive to deviate unilaterally
from the strategy profile. In contrast, with the use of monitoring,
the action chosen by node i is independent of the other agents"
forwarding behavior. Therefore, monitoring provides us with
dominant strategy equilibrium, which is a stronger solution concept than
Nash equilibrium. [15], [16] discuss the appropriateness of
different solution concepts in the context of online environments.
4. UNKNOWN TRANSIT COSTS
In certain network settings, the transit costs of nodes along the
forwarding path may not be common knowledge, i.e., there exists
the problem of hidden information. In this section, we address the
following questions:
1. Is it possible to design contracts that induce cooperative
behavior in the presence of both hidden-action and
hiddeninformation?
2. What is the principal"s loss due to the lack of knowledge of
the transit costs?
In hidden-information problems, the principal employs
mechanisms to induce truthful revelation of private information from the
agents. In the routing game, the principal wishes to extract transit
cost information from the network routers in order to determine the
lowest cost path (LCP) for a given source-destination pair. The
network routers act strategically and declare transit costs to maximize
their profit. Mechanisms that have been proposed in the literature
for the routing game [24, 13] assume that once the transit costs
have been obtained, and the LCP has been determined, the nodes
on the LCP obediently forward all packets, and that there is no loss
in the network, i.e., k = 0. In this section, we consider both hidden
information and hidden action, and generalize these mechanisms
to induce both truth revelation and high-effort action in
equilibrium, where nodes transmit over a lossy communication channel,
i.e., k ≥ 0.
4.1 V CG Mechanism
In their seminal paper [24], Nisan and Ronen present a VCG
mechanism that induces truthful revelation of transit costs by edges
120
Figure 4: Game structure for F P SS, where only hidden-information
is considered.
Figure 5: Game structure for F P SS , where both
hiddeninformation and hidden-action are considered.
in a biconnected network, such that lowest cost paths can be
chosen. Like all VCG mechanisms, it is a strategyproof mechanism,
meaning that it induces truthful revelation in a dominant strategy
equilibrium. In [13] (FPSS), Feigenbaum et al. slightly modify
the model to have the routers as the selfish agents instead of the
edges, and present a distributed algorithm that computes the VCG
payments. The timeline of the FPSS game is presented in
figure 4. Under FPSS, transit nodes keep track of the amount of
traffic routed through them via counters, and payments are
periodically transferred from the principals to the transit nodes based on
the counter values. FPSS assumes that transit nodes are
obedient in packet forwarding behavior, and will not update the counters
without exerting high effort in packet forwarding.
In this section, we present FPSS , which generalizes FPSS to
operate in an environment with lossy communication channels (i.e.,
k ≥ 0) and strategic behavior in terms of packet forwarding. We
will show that FPSS induces an equilibrium in which all nodes
truthfully reveal their transit costs and forward packets if they are
on the LCP. Figure 5 presents the timeline of FPSS . In the first
stage, the sender declares two payment functions, (sG
i , sB
i ), that
will be paid upon success or failure of packet delivery. Given these
payments, nodes have incentive to reveal their costs truthfully, and
later to forward packets. Payments are transferred based on the
final outcome.
In FPSS , each node i submits a bid bi, which is its reported
transit cost. Node i is said to be truthful if bi = ci. We write b for
the vector (b1, . . . , bn) of bids submitted by all transit nodes. Let
Ii(b) be the indicator function for the LCP given the bid vector b
such that
Ii(b) =
1 if i is on the LCP;
0 otherwise.
Following FPSS [13], the payment received by node i at
equilibrium is:
pi = biIi(b) + [
X
r
Ir(b|i
∞)br −
X
r
Ir(b)br]
=
X
r
Ir(b|i
∞)br −
X
r=i
Ir(b)br
(11)
where the expression b|i
x means that (b|i
x)j = cj for all j = i,
and (b|i
x)i = x.
In FPSS , we compute sB
i and sG
i as a function of pi, k, and
n. First, we recognize that sB
i must be less than or equal to zero
in order for the true LCP to be chosen. Otherwise, strategic nodes
may have an incentive to report extremely small costs to mislead
the principal into believing that they are on the LCP. Then, these
nodes can drop any packets they receive, incur zero transit cost,
collect a payment of sB
i > 0, and earn positive profit.
PROPOSITION 4.1. Let the payments of FPSS be:
sB
i = 0
sG
i =
pi
(1 − k)n−i+1
Then, FPSS has a Nash equilibrium in which all nodes truthfully
reveal their transit costs and all nodes on the LCP forward packets.
PROOF. In order to prove the proposition above, we have to
show that nodes have no incentive to engage in the following
misbehaviors:
1. truthfully reveal cost but drop packet,
2. lie about cost and forward packet,
3. lie about cost and drop packet.
If all nodes truthfully reveal their costs and forward packets, the
expected utility of node i on the LCP is:
E[u]i = Pr(xG
S→i)(E[si] − ci) + Pr(xB
S→i)sB
i
= (1 − k)i
(1 − k)n−i+1
sG
i + (1 − (1 − k)n−i+1
)sB
i − ci
+ (1 − (1 − k)i
)sB
i
= (1 − k)i
(1 − k)n−i+1 pi
(1 − k)n−i+1
− (1 − k)i
ci
= (1 − k)i
(pi − ci)
≥ 0
(12)
The last inequality is derived from the fact that FPSS is a truthful
mechanism, thus pi ≥ ci. The expected utility of a node not on the
LCP is 0.
A node that drops a packet receives sB
i = 0, which is smaller
than or equal to E[u]i for i ∈ LCP and equals E[u]i for i /∈ LCP.
Therefore, nodes cannot gain utility from misbehaviors (1) or (3).
We next show that nodes cannot gain utility from misbehavior (2).
1. if i ∈ LCP, E[u]i > 0.
(a) if it reports bi > ci:
i. if bi <
P
r Ir(b|i
∞)br −
P
r=i Ir(b)br, it is still
on the LCP, and since the payment is independent
of bi, its utility does not change.
ii. if bi >
P
r Ir(b|i
∞)br −
P
r=i Ir(b)br, it will
not be on the LCP and obtain E[u]i = 0, which is
less than its expected utility if truthfully revealing
its cost.
121
(b) if it reports bi < ci, it is still on the LCP, and since
the payment is independent of bi, its utility does not
change.
2. if i /∈ LCP, E[u]i = 0.
(a) if it reports bi > ci, it remains out of the LCP, so its
utility does not change.
(b) if it reports bi < ci:
i. if bi <
P
r Ir(b|i
∞)br −
P
r=i Ir(b)br, it joins
the LCP, and gains an expected utility of
E[u]i = (1 − k)i
(pi − ct)
However, if i /∈ LCP, it means that
ci >
X
r
Ir(c|i
∞)cr −
X
r=i
Ir(c)cr
But if all nodes truthfully reveal their costs,
pi =
X
r
Ir(c|i
∞)cr −
X
r=i
Ir(c)cr < ci
therefore, E[u]i < 0
ii. if bi >
P
r Ir(b|i
∞)br −
P
r=i Ir(b)br, it
remains out of the LCP, so its utility does not change.
Therefore, there exists an equilibrium in which all nodes truthfully
reveal their transit costs and forward the received packets.
We note that in the hidden information only context, FPSS
induces truthful revelation as a dominant strategy equilibrium. In
the current setting with both hidden information and hidden action,
FPSS achieves a Nash equilibrium in the absence of per-hop
monitoring, and a dominant strategy equilibrium in the presence of
per-hop monitoring, consistent with the results in section 3 where
there is hidden action only. In particular, with per-hop monitoring,
the principal declares the payments sB
i and sG
i to each node upon
failure or success of delivery to the next node. Given the payments
sB
i = 0 and sG
i = pi/(1 − k), it is a dominant strategy for the
nodes to reveal costs truthfully and forward packets.
4.2 Discussion
More generally, for any mechanism M that induces a bid vector b
in equilibrium by making a payment of pi(b) to node i on the LCP,
there exists a mechanism M that induces an equilibrium with the
same bid vector and packet forwarding by making a payment of:
sB
i = 0
sG
i =
pi(b)
(1 − k)n−i+1
.
A sketch of the proof would be as follows:
1. IM
i (b) = IM
i (b)∀i, since M uses the same choice metric.
2. The expected utility of a LCP node is E[u]i = (1 −
k)i
(pi(b) − ci) ≥ 0 if it forwards and 0 if it drops, and
the expected utility of a non-LCP node is 0.
3. From 1 and 2, we get that if a node i can increase its expected
utility by deviating from bi under M , it can also increase its
utility by deviating from bi in M, but this is in contradiction
to bi being an equilibrium in M.
4. Nodes have no incentive to drop packets since they derive an
expected utility of 0 if they do.
In addition to the generalization of FPSS into FPSS , we
can also consider the generalization of the first-price auction (FPA)
mechanism, where the principal determines the LCP and pays each
node on the LCP its bid, pi(b) = bi. First-price auctions achieve
Nash equilibrium as opposed to dominant strategy equilibrium.
Therefore, we should expect the generalization of FPA to achieve
Nash equilibrium with or without monitoring.
We make two additional comments concerning this class of
mechanisms. First, we find that the expected total payment made
by the principal under the proposed mechanisms is
E[S] =
nX
i=1
(1 − k)i
pi(b)
and the expected benefit realized by the principal is
E[w] = (1 − k)n+1
wG
where
Pn
i=1 pi and wG
are the expected payment and expected
benefit, respectively, when only the hidden-information problem is
considered. When hidden action is also taken into consideration,
the generalized mechanism handles strategic forwarding behavior
by conditioning payments upon the final outcome, and accounts for
lossy communication channels by designing payments that reflect
the distribution of risk. The difference between expected payment
and benefit is not due to strategic forwarding behavior, but to lossy
communications. Therefore, in a lossless network, we should not
see any gap between expected benefits and payments, independent
of strategic or non-strategic forwarding behavior.
Second, the loss to the principal due to unknown transit costs is
also known as the price of frugality, and is an active field of
research [2, 12]. This price greatly depends on the network topology
and on the mechanism employed. While it is simple to characterize
the principal"s loss in some special cases, it is not a trivial problem
in general. For example, in topologies with parallel disjoint paths
from source to destination, we can prove that under first-price
auctions, the loss to the principal is the difference between the cost of
the shortest path and the second-shortest path, and the loss is higher
under the FPSS mechanism.
5. RECURSIVE CONTRACTS
In this section, we distinguish between direct and recursive
contracts. In direct contracts, the principal contracts directly with each
node on the path and pays it directly. In recursive payment, the
principal contracts with the first node on the path, which in turn
contracts with the second, and so on, such that each node contracts
with its downstream node and makes the payment based on the final
result, as demonstrated in figure 6.
With direct payments, the principal needs to know the identity
and cost of each node on the path and to have some communication
channel with the node. With recursive payments, every node needs
to communicate only with its downstream node. Several questions
arise in this context:
• What knowledge should the principal have in order to induce
cooperative behavior through recursive contracts?
• What should be the structure of recursive contracts that
induce cooperative behavior?
• What is the relation between the total expected payment
under direct and recursive contracts?
• Is it possible to design recursive contracts in scenarios of
unknown transit costs?
Figure 6: Structure of the multihop routing game under known
topology and recursive contracts.
In order to answer the questions outlined above, we look at the
IR and IC constraints that the principal needs to satisfy when
contracting with the first node on the path. When the principal designs
a contract with the first node, he should take into account the
incentives that the first node should provide to the second node, and
so on all the way to the destination.
For example, consider the topology given in Figure 3(a). When
the principal comes to design a contract with node A, he needs to
consider the subsequent contract that A should sign with B, which
should satisfy the following constraints:
$$(IR):\; \Pr(x^G_{A \to B} \mid a_A = 1)\,(E[s \mid a_B = 1] - c) + \Pr(x^B_{A \to B} \mid a_A = 1)\, s^B_{A \to B} \geq 0$$
$$(IC):\; E[s \mid a_B = 1] - c \geq E[s \mid a_B = 0]$$
where:
$$E[s \mid a_B = 1] = \Pr(x^G_{B \to D} \mid a_B = 1)\, s^G_{A \to B} + \Pr(x^B_{B \to D} \mid a_B = 1)\, s^B_{A \to B}$$
and
$$E[s \mid a_B = 0] = \Pr(x^G_{B \to D} \mid a_B = 0)\, s^G_{A \to B} + \Pr(x^B_{B \to D} \mid a_B = 0)\, s^B_{A \to B}$$
These (binding) constraints yield the values of $s^B_{A \to B}$ and $s^G_{A \to B}$:
$$s^B_{A \to B} = 0, \qquad s^G_{A \to B} = c/(1-k)$$
Based on these values, $S$ can express the constraints it should
satisfy in a contract with $A$:
$$(IR):\; \Pr(x^G_{S \to A} \mid a_S = 1)\,(E[s_{S \to A} - s_{A \to B} \mid a_i = 1\ \forall i] - c) + \Pr(x^B_{S \to A} \mid a_S = 1)\, s^B_{S \to A} \geq 0$$
$$(IC):\; E[s_{S \to A} - s_{A \to B} \mid a_i = 1\ \forall i] - c \geq E[s_{S \to A} - s_{A \to B} \mid a_A = 0, a_B = 1]$$
where:
$$E[s_{S \to A} - s_{A \to B} \mid a_i = 1\ \forall i] = \Pr(x^G_{A \to D} \mid a_i = 1\ \forall i)\,(s^G_{S \to A} - s^G_{A \to B}) + \Pr(x^B_{A \to D} \mid a_i = 1\ \forall i)\,(s^B_{S \to A} - s^B_{A \to B})$$
and
$$E[s_{S \to A} - s_{A \to B} \mid a_A = 0, a_B = 1] = \Pr(x^G_{A \to D} \mid a_A = 0, a_B = 1)\,(s^G_{S \to A} - s^G_{A \to B}) + \Pr(x^B_{A \to D} \mid a_A = 0, a_B = 1)\,(s^B_{S \to A} - s^B_{A \to B})$$
Solving for $s^B_{S \to A}$ and $s^G_{S \to A}$, we get:
$$s^B_{S \to A} = 0, \qquad s^G_{S \to A} = \frac{c(2-k)}{1 - 2k + k^2} = \frac{c(2-k)}{(1-k)^2}$$
The expected total payment is
$$E[S] = s^G_{S \to A}\, \Pr(x^G_{S \to D}) + s^B_{S \to A}\, \Pr(x^B_{S \to D}) = c(2-k)(1-k) \qquad (13)$$
which is equal to the expected total payment under direct contracts
(see Eq. 8).
PROPOSITION 5.1. The expected total payments by the
principal under direct and recursive contracts are equal.
PROOF. In order to calculate the expected total payment, we
have to find the payment to the first node on the path that will
induce appropriate behavior. Because $s^B_i = 0$ in the drop/forward
model, both constraints can be reduced to:
$$\Pr(x^G_{i \to R} \mid a_j = 1\ \forall j)\,(s^G_i - s^G_{i+1}) - c_i = 0 \;\Leftrightarrow\; (1-k)^{n-i+1}(s^G_i - s^G_{i+1}) - c_i = 0$$
which yields, for all $1 \leq i \leq n$:
$$s^G_i = \frac{c_i}{(1-k)^{n-i+1}} + s^G_{i+1}$$
Thus,
$$s^G_n = \frac{c_n}{1-k}, \qquad s^G_{n-1} = \frac{c_{n-1}}{(1-k)^2} + s^G_n = \frac{c_{n-1}}{(1-k)^2} + \frac{c_n}{1-k}, \qquad \cdots \qquad s^G_1 = \frac{c_1}{(1-k)^n} + s^G_2 = \ldots = \sum_{i=1}^{n} \frac{c_i}{(1-k)^{n-i+1}}$$
and the expected total payment is
$$E[S] = (1-k)^{n+1} s^G_1 = (1-k)^{n+1} \sum_{i=1}^{n} \frac{c_i}{(1-k)^{n-i+1}}$$
which equals the total expected payment in direct payments, as
expressed in Eq. 6.
Because the payment is contingent on the final outcome, and the
expected payment to a node equals its expected cost, nodes have
no incentive to offer their downstream nodes lower payment than
necessary, since if they do, their downstream nodes will not forward
the packet.
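To make the recursion concrete, the following Python sketch (our notation, not code from the paper) computes the payments back to front and checks the two-hop case against Eq. 13.

```python
# Sketch: recursive-contract payments under known transit costs.
# c[j] is the transit cost of node j+1 under 0-based indexing, k the loss rate.

def recursive_payments(c, k):
    """Back-to-front computation of s^G_i = c_i/(1-k)^(n-i+1) + s^G_{i+1}."""
    n = len(c)
    s = [0.0] * (n + 1)                       # sentinel: s^G_{n+1} = 0
    for j in range(n - 1, -1, -1):
        s[j] = c[j] / (1 - k) ** (n - j) + s[j + 1]
    return s[:n]

# Two-hop check against Eq. 13: E[S] = (1-k)^(n+1) * s^G_1 = c(2-k)(1-k).
c, k = 1.0, 0.2
s1 = recursive_payments([c, c], k)[0]
assert abs((1 - k) ** 3 * s1 - c * (2 - k) * (1 - k)) < 1e-12
```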
What information should the principal possess in order to
implement recursive contracts? As in direct payments, the expected
payment is affected not only by the total payment on the path, but
also by the topology. Therefore, while the principal only needs to
communicate with the first node on the forwarding path and does
not have to know the identities of the other nodes, it still needs to
know the number of nodes on the path and their individual transit
costs.
Finally, is it possible to design recursive contracts under
unknown transit costs, and, if so, what should be the structure of such
contracts? Suppose the principal has implemented the distributed
algorithm that calculates the necessary payments $p_i$ for truthful
revelation; would the following payment schedule to the first node
induce cooperative behavior?
$$s^B_1 = 0, \qquad s^G_1 = \sum_{i=1}^{n} \frac{p_i}{(1-k)^i}$$
The answer is not clear. Unlike contracts under known transit costs,
the expected payment to a node usually exceeds its expected cost.
Therefore, transit nodes may not have the appropriate incentive to
follow the principal's guarantee during the payment phase. For
example, in FPSS′, the principal guarantees to pay each node an
expected payment of $p_i > c_i$. We assume that payments are
enforceable if made by the same entity that pledges to pay. However, in the
case of recursive contracts, the entity that pledges to pay in the cost
discovery stage (the principal) is not the same as the entity that
defines and executes the payments in the forwarding stage (the transit
nodes). Transit nodes, who design the contracts in the second stage,
know that their downstream nodes will forward the packet as long
as the expected payment exceeds the expected cost, even if it is less
than the promised amount. Thus, every node has an incentive to offer
lower payments than promised and keep the profit. Transit nodes,
who know this is a plausible scenario, may no longer truthfully
reveal their costs. Therefore, while recursive contracts under known
transit costs are strategically equivalent to direct contracts, it is not
clear whether this is the case under unknown transit costs.
6. HIGH-QUALITY VERSUS
LOW-QUALITY FORWARDING
So far, we have considered the agents' strategy space to be
limited to the drop (a = 0) and forward (a = 1) actions. In this
section, we consider a variation of the model where the agents choose
between providing a low-quality service (a = 0) and a high-quality
service (a = 1).
This may correspond to a differentiated-services model
where packets are forwarded on a best-effort or a priority basis [6].
In contrast to drop versus forward, a packet may still reach the next
hop (albeit with a lower probability) even if the low-effort action is
taken.
As a second example, consider the practice of hot-potato routing
in inter-domain routing of today's Internet. Individual autonomous
systems (ASes) can adopt either hot-potato, or early-exit, routing
(a = 0), where a packet is handed off to the downstream
AS at the first possible exit, or late-exit routing (a = 1), where an
AS carries the packet longer than it needs to, handing off the packet
at an exit closer to the destination. In the absence of explicit
incentives, it is not surprising that ASes choose hot-potato routing to
minimize their costs, even though it leads to suboptimal routes [28,
29].
In both examples, in the absence of contracts, a rational node
would exert low effort, resulting in lower performance.
Nevertheless, this behavior can be avoided with an appropriate design of
contracts.
Formally, the probability that a packet successfully gets from
node i to node i + 1 is:
$$\Pr(x^G_{i \to i+1} \mid a_i) = 1 - (k - q a_i) \qquad (14)$$
where $q \in (0, 1]$ and $k \in (q, 1]$.
In the drop versus forward model, a low-effort action by any node
results in a delivery failure. In contrast, a node in the high/low
scenario may exert low effort and hope to free-ride on the
high-effort level exerted by the other agents.
PROPOSITION 6.1. In the high-quality versus low-quality
forwarding model, where transit costs are common knowledge, the
principal derives the same expected utility whether it obtains
per-hop monitoring information or not.
PROOF. The IC and IR constraints are the same as specified
in the proof of Proposition 3.1, but their values change, based on
Eq. 14, to reflect the different model:
$$(IC):\; (1-k+q)^{n-i+1} s^G_i + \left(1 - (1-k+q)^{n-i+1}\right) s^B_i - c \;\geq\; (1-k)(1-k+q)^{n-i} s^G_i + \left(1 - (1-k)(1-k+q)^{n-i}\right) s^B_i$$
$$(IR):\; (1-k+q)^i \left( (1-k+q)^{n-i+1} s^G_i + \left(1 - (1-k+q)^{n-i+1}\right) s^B_i - c \right) + \left(1 - (1-k+q)^i\right) s^B_i \;\geq\; 0$$
For this set of constraints, we obtain the following solution:
$$s^B_i = \frac{(1-k+q)^i \, c\,(k-1)}{q} \qquad (15)$$
$$s^G_i = \frac{(1-k+q)^i \, c\,(k - 1 + (1-k+q)^{-n})}{q} \qquad (16)$$
We observe that in this version, both the high and the low payments
depend on i. If monitoring is used, we obtain the following
constraints:
$$(IC):\; (1-k+q) s^G_i + (k-q) s^B_i - c \;\geq\; (1-k) s^G_i + k s^B_i$$
$$(IR):\; (1-k+q)^i \left( (1-k+q) s^G_i + (k-q) s^B_i - c \right) \;\geq\; 0$$
and we get the solution:
$$s^B_i = \frac{c(k-1)}{q}, \qquad s^G_i = \frac{ck}{q}$$
The expected payment by the principal with or without monitoring
is the same, and equals:
$$E[S] = \frac{c(1-k+q)\left(1 - (1-k+q)^n\right)}{k - q} \qquad (17)$$
and this concludes the proof.
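As a sanity check on Equations 15 and 16, the following Python sketch (our notation; the parameter values are arbitrary examples) verifies numerically that the binding IR constraint makes each node's expected payment equal its expected cost $c(1-k+q)^i$, assuming payments hinge on final delivery, which succeeds with probability $(1-k+q)^{n+1}$ under high effort everywhere.

```python
# Sketch: the optimal no-monitoring contract in the high/low-quality model.
# k: baseline loss probability, q: loss reduction under high effort,
# c: per-node transit cost, n: number of transit nodes.

def contract(i, n, c, k, q):
    """Equations 15-16: payments for node i without monitoring."""
    a = 1 - k + q
    s_bad = a ** i * c * (k - 1) / q               # negative low payment
    s_good = a ** i * c * (k - 1 + a ** (-n)) / q
    return s_bad, s_good

n, c, k, q = 4, 1.0, 0.3, 0.2
a = 1 - k + q
for i in range(1, n + 1):
    s_bad, s_good = contract(i, n, c, k, q)
    # payment hinges on final delivery: success probability a^(n+1)
    expected = a ** (n + 1) * s_good + (1 - a ** (n + 1)) * s_bad
    # binding IR: expected payment equals expected cost c * a^i
    assert abs(expected - c * a ** i) < 1e-12
```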
The payment structure in the high-quality versus low-quality
forwarding model is different from that in the drop versus forward
model. In particular, at the optimal contract, the low-outcome
payment $s^B_i$ is now less than zero. A negative payment means that
the agent must pay the principal in the event that the packet fails
to reach the destination. In some settings, it may be necessary to
impose a limited liability constraint, i.e., $s_i \geq 0$. This prevents the
first-best solution from being achieved.
PROPOSITION 6.2. In the high-quality versus low-quality
forwarding model, if negative payments are disallowed, the expected
payment to each node exceeds its expected cost under the optimal
contract.
PROOF. The proof is a direct outcome of the following
statements, which are proved above:
1. The optimal contract is the contract specified in Equations 15 and 16.
2. Under the optimal contract, $E[s_i]$ equals node i's expected cost.
3. Under the optimal contract, $s^B_i = (1-k+q)^i \, c(k-1)/q < 0$.
Therefore, under any other contract the sender will have to
compensate each node with an expected payment that is higher than its
expected transit cost.
There is an additional difference between the two models. In
drop versus forward, a principal either signs a contract with all n
nodes along the path or with none. This is because a single node
dropping the packet determines a failure. In contrast, in high versus
low-quality forwarding, a success may occur under the low effort
actions as well, and payments are used to increase the probability
of success. Therefore, it may be possible for the principal to
maximize its utility by contracting with only m of the n nodes along the
path. While the expected outcome depends on m, it is independent
of which specific m nodes are induced. At the same time, the
individual expected payments decrease in i (see Eq. 16). Therefore,
a principal who wishes to sign a contract with only m out of the n
nodes should do so with the nodes that are closest to the destination;
namely, nodes (n − m + 1, ..., n − 1, n).
Solving for the high-quality versus low-quality forwarding
model with unknown transit costs is left for future work.
7. CASE STUDY: INTERNET ROUTING
We can map different deployed and proposed Internet routing
schemes to the various models we have considered in this work.
Border Gateway Protocol (BGP), the current inter-domain
routing protocol in the Internet, computes routes based on path vectors.
Since the protocol reveals only the autonomous systems (ASes)
along a route but not the costs associated with them, current BGP
routing is best characterized by a lack of a priori information about
transit costs. In this case, the principal (e.g., a multi-homed site
or a tier-1 AS) can implement one of the mechanisms proposed in
Section 4 by contracting with individual nodes on the path. Such
contracts involve paying some premium over the real cost, and it
is not clear whether recursive contracts can be implemented in this
scenario. In addition, the current protocol does not have the
infrastructure to support implementation of direct contracts between
endpoints and the network.
Recently, several new architectures have been proposed in the
context of the Internet to provide the principal not only with a set
of paths from which it can choose (as BGP does) but also with
the performance along those paths and the network topology. One
approach to obtaining such information is through end-to-end
probing [1]. Another approach is to have the edge networks perform
measurements and discover the network topology [32]. Yet another
approach is to delegate the task of obtaining topology and
performance information to a third party, as in the routing-as-a-service
proposal [21]. These proposals are quite different in nature, but
they share an attempt to provide more visibility and
transparency into the network. If information about topology and
transit costs is obtained, the scenario maps to the known
transit costs model (Section 3). In this case, first-best contracts can be
achieved through individual contracts with nodes along the path.
However, as we have shown in Section 5, as long as each agent can
choose the next hop, the principal can gain full benefit by
contracting with only the first hop (through the implementation of recursive
contracts).
However, the various proposals for acquiring network topology
and performance information do not deal with strategic behavior
by the intermediate nodes. With the realization that the
information collected may be used by the principal in subsequent
contractual relationships, the intermediate nodes may behave strategically,
misrepresenting their true costs to the entities that collect and
aggregate such information. One recent approach that can alleviate
this problem is to provide packet obituaries by having each packet
confirm its delivery or report its last successful AS hop [4].
Another approach is to have third parties like Keynote independently
monitor the network performance.
8. RELATED WORK
The study of non-cooperative behavior in communication
networks, and the design of incentives, has received significant
attention in the context of wireless ad-hoc routing. [22] considers the
problem of malicious behavior, where nodes respond positively to
route requests but then fail to forward the actual packets. It
proposes to mitigate this with detection and reporting mechanisms that
essentially help to route around the malicious nodes. However,
rather than penalizing nodes that do not forward traffic, it bypasses
the misbehaving nodes, thereby relieving their burden. Therefore,
such a mechanism is not effective against selfish behavior.
In order to mitigate selfish behavior, some approaches [7, 8, 9]
require reputation exchange between nodes, or simply first-hand
observations [5]. Other approaches propose payment schemes [10,
20, 31] to encourage cooperation. [31] is the closest to our work in
that it designs payment schemes in which the sender pays the
intermediate nodes in order to prevent several types of selfish behavior.
In their approach, nodes are supposed to send receipts to a
third-party entity. We show that this type of per-hop monitoring may not
be needed.
In the context of Internet routing, [4] proposes an accountability
framework that provides end hosts and service providers
after-the-fact audits on the fate of their packets. This proposal is part of a
broader approach to provide end hosts with greater control over the
path of their packets [3, 30]. If senders have transit cost
information and can fully control the path of their packets, they can design
contracts that yield them first-best utility. The accountability
framework proposed in [4] can serve two main goals: informing
nodes of network conditions to help them make informed decisions,
and helping entities to establish whether individual ASes have
performed their duties adequately. While such a framework can be
used for the first task, we propose a different approach to the
second problem without the need for per-hop auditing information.
Research in distributed algorithmic mechanism design (DAMD)
has been applied to BGP routing [13, 14]. These works propose
mechanisms to tackle the hidden-information problem, but ignore
the problem of forwarding enforcement. Inducing desired behavior
is also the objective in [26], which attempts to respond to the
challenge of distributed AMD raised in [15]: if the same agents that
seek to manipulate the system also run the mechanism, what
prevents them from deviating from the mechanism's proposed rules to
maximize their own welfare? They start with the proposed
mechanism presented in [13] and use mostly auditing mechanisms to
prevent deviation from the algorithm.
The focus of this work is the design of a payment scheme that
provides the appropriate incentives within the context of multi-hop
routing. Like other works in this field, we assume that all the
accounting services are done using out-of-band mechanisms.
Security issues within this context, such as node authentication or
message encryption, are orthogonal to the problem presented in this
paper, and can be found, for example, in [18, 19, 25].
The problem of information asymmetry and hidden-action (also
known as moral hazard) is well studied in the economics
literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in
production teams, and shows that it is impossible to design a
sharing rule which is efficient and budget-balanced. [27] shows that this
task is made possible when production takes place sequentially.
9. CONCLUSIONS AND FUTURE
DIRECTIONS
In this paper we show that in a multi-hop routing setting, where
the actions of the intermediate nodes are hidden from the source
and/or destination, it is possible to design payment schemes to
induce cooperative behavior from the intermediate nodes. We
conclude that monitoring per-hop outcomes may not improve the
utility of the participants or the network performance. In addition, in
scenarios of unknown transit costs, it is also possible to design
mechanisms that induce cooperative behavior in equilibrium, but
the sender pays a premium for extracting information from the
transit nodes.
Our model and results suggest several natural and intriguing
research avenues:
• Consider manipulative or collusive behaviors which may
arise under the proposed payment schemes.
• Analyze the feasibility of recursive contracts under
hidden information of transit costs.
• While the proposed payment schemes sustain cooperation in
equilibrium, it is not a unique equilibrium. We plan to study
under what mechanisms this strategy profile may emerge as
a unique equilibrium (e.g., penalty by successor nodes).
• Consider the effect of congestion and capacity constraints on
the proposed mechanisms. Our preliminary results show that
when several senders compete for a single transit node's
capacity, the sender with the highest demand pays a premium
even if transit costs are common knowledge. The premium
can be expressed as a function of the second-highest demand.
In addition, if congestion affects the probability of
successful delivery, a sender with a lower-cost alternate path may
end up with a lower utility level than his rival with a
higher-cost alternate path.
• Fully characterize the full-information Nash equilibrium in
first-price auctions, and use this characterization to derive its
overcharging compared to truthful mechanisms.
10. ACKNOWLEDGEMENTS
We thank Hal Varian for his useful comments. This work is
supported in part by the National Science Foundation under ITR
awards ANI-0085879 and ANI-0331659, and Career award
ANI-0133811.
11. REFERENCES
[1] ANDERSEN, D. G., BALAKRISHNAN, H., KAASHOEK, M. F., AND
MORRIS, R. Resilient Overlay Networks. In 18th ACM SOSP (2001).
[2] ARCHER, A., AND TARDOS, E. Frugal path mechanisms.
[3] ARGYRAKI, K., AND CHERITON, D. Loose Source Routing as a
Mechanism for Traffic Policies. In Proceedings of SIGCOMM FDNA
(August 2004).
[4] ARGYRAKI, K., MANIATIS, P., CHERITON, D., AND SHENKER, S.
Providing Packet Obituaries. In Third Workshop on Hot Topics in
Networks (HotNets) (November 2004).
[5] BANSAL, S., AND BAKER, M. Observation-based cooperation
enforcement in ad-hoc networks. Technical report, Stanford
University (2003).
[6] BLAKE, S., BLACK, D., CARLSON, M., DAVIES, E., WANG, Z.,
AND WEISS, W. An Architecture for Differentiated Service.
RFC 2475, 1998.
[7] BUCHEGGER, S., AND BOUDEC, J.-Y. L. Performance Analysis of
the CONFIDANT Protocol: Cooperation of Nodes - Fairness in
Dynamic ad-hoc Networks. In IEEE/ACM Symposium on Mobile Ad
Hoc Networking and Computing (MobiHOC) (2002).
[8] BUCHEGGER, S., AND BOUDEC, J.-Y. L. Coping with False
Accusations in Misbehavior Reputation Systems For Mobile ad-hoc
Networks. In EPFL, Technical report (2003).
[9] BUCHEGGER, S., AND BOUDEC, J.-Y. L. The effect of rumor
spreading in reputation systems for mobile ad-hoc networks. In
WiOpt'03: Modeling and Optimization in Mobile ad-hoc and
Wireless Networks (2003).
[10] BUTTYAN, L., AND HUBAUX, J. Stimulating Cooperation in
Self-Organizing Mobile ad-hoc Networks. ACM/Kluwer Journal on
Mobile Networks and Applications (MONET) (2003).
[11] CAILLAUD, B., AND HERMALIN, B. Hidden Action and Incentives.
Teaching Notes. U.C. Berkeley.
[12] ELKIND, E., SAHAI, A., AND STEIGLITZ, K. Frugality in path
auctions, 2004.
[13] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER,
S. A BGP-based Mechanism for Lowest-Cost Routing. In
Proceedings of the ACM Symposium on Principles of Distributed
Computing (2002).
[14] FEIGENBAUM, J., SAMI, R., AND SHENKER, S. Mechanism Design
for Policy Routing. In Yale University, Technical Report (2003).
[15] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic
Mechanism Design: Recent Results and Future Directions. In
Proceedings of the International Workshop on Discrete Algorithms
and Methods for Mobile Computing and Communications (2002).
[16] FRIEDMAN, E., AND SHENKER, S. Learning and implementation on
the internet. In Manuscript. New Brunswick: Rutgers University,
Department of Economics (1997).
[17] HOLMSTROM, B. Moral Hazard in Teams. Bell Journal of
Economics 13 (1982), 324-340.
[18] HU, Y., PERRIG, A., AND JOHNSON, D. Ariadne: A Secure
On-Demand Routing Protocol for ad-hoc Networks. In Eighth
Annual International Conference on Mobile Computing and
Networking (Mobicom) (2002), pp. 12-23.
[19] HU, Y., PERRIG, A., AND JOHNSON, D. SEAD: Secure Efficient
Distance Vector Routing for Mobile ad-hoc Networks. In 4th IEEE
Workshop on Mobile Computing Systems and Applications (WMCSA)
(2002).
[20] JAKOBSSON, M., HUBAUX, J.-P., AND BUTTYAN, L. A
Micro-Payment Scheme Encouraging Collaboration in Multi-Hop
Cellular Networks. In Financial Cryptography (2003).
[21] LAKSHMINARAYANAN, K., STOICA, I., AND SHENKER, S.
Routing as a service. In UCB Technical Report No.
UCB/CSD-04-1327 (January 2004).
[22] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. Mitigating
Routing Misbehavior in Mobile ad-hoc Networks. In Proceedings of
MobiCom (2000), pp. 255-265.
[23] MAS-COLELL, A., WHINSTON, M., AND GREEN, J.
Microeconomic Theory. Oxford University Press, 1995.
[24] NISAN, N., AND RONEN, A. Algorithmic Mechanism Design. In
Proceedings of the 31st Symposium on Theory of Computing (1999).
[25] SANZGIRI, K., DAHILL, B., LEVINE, B., SHIELDS, C., AND
BELDING-ROYER, E. A Secure Routing Protocol for ad-hoc
Networks. In International Conference on Network Protocols (ICNP)
(2002).
[26] SHNEIDMAN, J., AND PARKES, D. C. Overcoming rational
manipulation in mechanism implementation, 2004.
[27] STRAUSZ, R. Moral Hazard in Sequential Teams. Departmental
Working Paper. Free University of Berlin (1996).
[28] TEIXEIRA, R., GRIFFIN, T., SHAIKH, A., AND VOELKER, G.
Network sensitivity to hot-potato disruptions. In Proceedings of ACM
SIGCOMM (September 2004).
[29] TEIXEIRA, R., SHAIKH, A., GRIFFIN, T., AND REXFORD,
J. Dynamics of hot-potato routing in IP networks. In Proceedings of
ACM SIGMETRICS (June 2004).
[30] YANG, X. NIRA: A New Internet Routing Architecture. In
Proceedings of SIGCOMM FDNA (August 2003).
[31] ZHONG, S., CHEN, J., AND YANG, Y. R. Sprite: A Simple,
Cheat-Proof, Credit-Based System for Mobile ad-hoc Networks. In
22nd Annual Joint Conference of the IEEE Computer and
Communications Societies (2003).
[32] ZHU, D., GRITTER, M., AND CHERITON, D. Feedback-based
Routing. In Proc Hotnets-I (2002).
| mechanism;moralhazard;route;multi-hop;endpoint;hidden action;contract;hidden-action;failure cause;cost;multi-hop network;principal-agent model;intermediate node;moral hazard;cause of failure;priority;incentive;mechanism design |
train_J-53 | A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters | In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium. | 1. INTRODUCTION
The primary advantage of distributed shared clusters like
the Grid [7] and PlanetLab [1] is their ability to pool
together shared computational resources. This allows increased
throughput because of statistical multiplexing and the bursty
utilization pattern of typical users. Sharing nodes that are
dispersed in the network allows lower delay because
applications can store data close to users. Finally, sharing allows
greater reliability because of redundancy in hosts and
network connections.
However, resource allocation in these systems remains the
major challenge. The problem is how to allocate a shared
resource both fairly and efficiently (where efficiency is the
ratio of the achieved social welfare to the social optimal)
with the presence of strategic users who act in their own
interests.
Several non-economic allocation algorithms have been
proposed, but these typically assume that task values (i.e., their
importance) are the same, or are inversely proportional to
the resources required, or are set by an omniscient
administrator. However, in many cases, task values vary
significantly, are not correlated to resource requirements, and are
difficult and time-consuming for an administrator to set.
Instead, we examine a market-based resource allocation
system (others are described in [2, 4, 6, 21, 26, 27]) that allows
users to express their preferences for resources through a
bidding mechanism.
In particular, we consider a price-anticipating [12] scheme
in which a user bids for a resource and receives the ratio of
his bid to the sum of bids for that resource. This
proportional scheme is simpler, more scalable, and more
responsive [15] than auction-based schemes [6, 21, 26]. Previous
work has analyzed price-anticipating schemes in the context
of allocating network capacity for flows for users with
unlimited budgets. In this work, we examine a price-anticipating
scheme in the context of allocating computational
capacity for users with private preferences and limited budgets,
resulting in a qualitatively different game (as discussed in
Section 6).
In this paper, we formulate the fixed budget resource
allocation game and study the existence and performance of
the Nash equilibria of this game. For evaluating the Nash
equilibria, we consider both their efficiency, measuring how
close the social welfare at equilibrium is to the social
optimum, and fairness, measuring how different the users'
utilities are. Although rarely considered in previous game
theoretical studies, we believe fairness is a critical metric for
resource allocation schemes because the perception of
unfairness will cause some users to reject a system with more
efficient, but less fair, resource allocation in favor of one with
less efficient, more fair resource allocation. We use both
utility uniformity and envy-freeness to measure fairness.
Utility uniformity, which is common in Computer Science work,
measures the closeness of utilities of different users.
Envy-freeness, which comes more from the Economics perspective,
measures the happiness of users with their own resources
Our contributions are as follows:
• We analyze the existence and performance of
Nash equilibria. Using analysis, we show that there is
always a Nash equilibrium in the fixed budget game if the
utility functions satisfy a fairly weak and natural condition
of strong competitiveness. We also show the worst-case
performance bounds: for m players the efficiency at equilibrium
is $\Omega(1/\sqrt{m})$, the utility uniformity is $\geq 1/m$, and the
envy-freeness is $\geq 2\sqrt{2} - 2 \approx 0.83$. Although these bounds are quite
low, the simulations described below indicate these bounds
are overly pessimistic.
• We describe algorithms that allow strategic users
to optimize their utility. As part of the fixed budget
game analysis, we show that strategic users with linear
utility functions can calculate their bids using a best-response
algorithm that quickly results in an allocation with high
efficiency with little computational and communication
overhead. We present variations of the best-response algorithm
for both finite and infinite parallelism tasks. In addition, we
present a local greedy adjustment algorithm that converges
more slowly than best response, but allows for utility functions
that are non-linear or cannot be formulated explicitly.
• We show that the price-anticipating resource
allocation mechanism achieves a high degree of
efficiency and fairness. Using simulation, we find that
although the socially optimal allocation results in perfect
efficiency, it also results in very poor fairness. Likewise,
allocating according to only users' preference weights results in
high fairness, but mediocre efficiency. Intuition would
suggest that efficiency and fairness are exclusive. Surprisingly,
the Nash equilibrium, reached by each user iteratively
applying the best-response algorithm to adapt his bids, achieves
nearly the efficiency of the social optimum and nearly the
fairness of the weight-proportional allocation: the efficiency
is $\geq 0.90$, the utility uniformity is $\geq 0.65$, and the
envy-freeness is $\geq 0.97$, independent of the number of users in
the system. In addition, the time to converge to the
equilibrium is $\leq 5$ iterations when all users use the best-response
strategy. The local adjustment algorithm performs similarly
when there is sufficient competitiveness, but takes 25 to 90
iterations to stabilize.
As a result, we believe that shared distributed systems
based on the fixed budget game can be highly decentralized,
yet achieve a high degree of efficiency and fairness.
The rest of the paper is organized as follows. We
describe the model in Section 2 and derive the performance
at the Nash equilibria for the infinite parallelism model in
Section 3. In Section 4, we describe algorithms for users
to optimize their own utility in the fixed budget game. In
Section 5, we describe our simulator and simulation results.
We describe related work in Section 6. We conclude by
discussing some limits of our model and future work in Section 7.
2. THE MODEL
Price-Anticipating Resource Allocation. We study
the problem of allocating a set of divisible resources (or
machines). Suppose that there are m users and n machines.
Each machine can be continuously divided for allocation
to multiple users. An allocation scheme $\omega = (r_1, \ldots, r_m)$,
where $r_i = (r_{i1}, \ldots, r_{in})$ with $r_{ij}$ representing the share of
machine j allocated to user i, satisfies that for any $1 \leq i \leq m$
and $1 \leq j \leq n$, $r_{ij} \geq 0$ and $\sum_{i=1}^{m} r_{ij} \leq 1$. Let $\Omega$ denote the
set of all the allocation schemes.
We consider the price-anticipating mechanism in which
each user places a bid on each machine, and the price of the
machine is determined by the total bids placed. Formally,
suppose that user i submits a non-negative bid $x_{ij}$ to
machine j. The price of machine j is then set to $Y_j = \sum_{i=1}^{m} x_{ij}$,
the total bids placed on machine j. Consequently, user i
receives a fraction $r_{ij} = x_{ij}/Y_j$ of j. When $Y_j = 0$, i.e. when
there is no bid on a machine, the machine is not allocated
to anyone. We call $x_i = (x_{i1}, \ldots, x_{in})$ the bidding vector of
user i.
The additional consideration we have is that each user i
has a budget constraint $X_i$. Therefore, user i's total bids
have to sum up to his budget, i.e. $\sum_{j=1}^{n} x_{ij} = X_i$. The
budget constraints come from the fact that the users do not
have infinite budgets.
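For concreteness, a minimal Python sketch of this allocation rule (our own code, not from the paper) is:

```python
import numpy as np

def allocate(bids):
    """Price-anticipating allocation: r_ij = x_ij / Y_j, where
    Y_j = sum_i x_ij is the price of machine j. `bids` is an m x n
    matrix whose rows sum to the users' budgets X_i."""
    bids = np.asarray(bids, dtype=float)
    prices = bids.sum(axis=0)                 # Y_j
    shares = np.zeros_like(bids)
    bid_on = prices > 0                       # unbid machines stay unallocated
    shares[:, bid_on] = bids[:, bid_on] / prices[bid_on]
    return shares

# Two users with unit budgets over two machines:
r = allocate([[0.7, 0.3], [0.4, 0.6]])
# user 0 receives 0.7/1.1 of machine 0 and 0.3/0.9 of machine 1
```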
Utility Functions. Each user i's utility is represented
by a function $U_i$ of the fractions $(r_{i1}, \ldots, r_{in})$ the user receives
from each machine. Given the problem domain we consider,
we assume that each user has different and relatively
independent preferences for different machines. Therefore, the
basic utility function we consider is the linear utility
function: $U_i(r_{i1}, \ldots, r_{in}) = w_{i1} r_{i1} + \cdots + w_{in} r_{in}$, where $w_{ij} \geq 0$
is user i's private preference, also called his weight, on
machine j. For example, suppose machine 1 has a faster CPU
but less memory than machine 2, and user 1 runs CPU-bound
applications, while user 2 runs memory-bound
applications. As a result, $w_{11} > w_{12}$ and $w_{21} < w_{22}$.
Our definition of utility functions corresponds to the user
having enough jobs or enough parallelism within jobs to
utilize all the machines. Consequently, the user's goal is to grab
as much of a resource as possible. We call this the infinite
parallelism model. In practice, a user's application may have
an inherent limit on parallelization (e.g., some computations
must be done sequentially) or there may be a system limit
(e.g., the application's data is being served from a file server
with limited capacity). To model this, we also consider the
more realistic finite parallelism model, where the user's
parallelism is bounded by $k_i$, and the user's utility $U_i$ is the
sum of the $k_i$ largest $w_{ij} r_{ij}$. In this model, the user only
submits bids to up to $k_i$ machines. Our abstraction is to
capture the essence of the problem and facilitate our
analysis. In Section 7, we discuss the limits of the above definition
of utility functions.
Best Response. As is typical, we assume the users are
selfish and strategic - they all act to maximize their own
utility, defined by their utility functions. From the
perspective of user i, if the total bid of the other users placed on
each machine j is $y_j$, then the best response of user i to the
system is the solution of the following optimization problem:
$$\text{maximize } U_i\!\left(\frac{x_{i1}}{x_{i1}+y_1}, \ldots, \frac{x_{in}}{x_{in}+y_n}\right) \text{ subject to } \sum_{j=1}^{n} x_{ij} = X_i \text{ and } x_{ij} \geq 0.$$
The difficulty of the above optimization problem depends
on the formulation of $U_i$. We will show later how to solve
it for the infinite parallelism model and provide a heuristic
for the finite parallelism model.
Nash Equilibrium. By the assumption that the users are
selfish, each user's bidding vector is the best response to the
system. The question we are most interested in is whether
there exists a collection of bidding vectors, one for each user,
such that each user's bidding vector is the best response to
those of the other users. Such a state is known as the Nash
equilibrium, a central concept in Game Theory. Formally,
the bidding vectors $x_1, \ldots, x_m$ are a Nash equilibrium if for
any $1 \leq i \leq m$, $x_i$ is the best response to the system, or, for
any other bidding vector $x'_i$,
$$U_i(x_1, \ldots, x_i, \ldots, x_m) \geq U_i(x_1, \ldots, x'_i, \ldots, x_m).$$
The Nash equilibrium is desirable because it is a stable
state at which no one has an incentive to change his strategy.
But a game may not have an equilibrium. Indeed, a Nash
equilibrium may not exist in the price-anticipating scheme
we define above. This can be shown by a simple example of
two players and two machines. For example, let $U_1(r_1, r_2) = r_1$
and $U_2(r_1, r_2) = r_1 + r_2$. Then player 1 should never bid
on machine 2 because it has no value to him. Now, player 2
has to put a positive bid on machine 2 to claim the machine,
but there is no lower limit, resulting in the non-existence of
the Nash equilibrium. We should note that even a mixed
strategy equilibrium does not exist in this example. Clearly,
this happens whenever there is a resource that is wanted
by only one player. To rule out this case, we consider
strongly competitive games.1
Under the infinite parallelism
model, a game is called strongly competitive if for any $1 \leq j \leq n$,
there exist $i \neq k$ such that $w_{ij}, w_{kj} > 0$. Under
such a condition, we have that (see [5] for a proof),
Theorem 1. There always exists a pure strategy Nash
equilibrium in a strongly competitive game.
Given the existence of the Nash equilibrium, the next
important question is the performance at the Nash equilibrium,
which is often measured by its efficiency and fairness.
Efficiency (Price of Anarchy). For an allocation scheme
$\omega \in \Omega$, denote by $U(\omega) = \sum_i U_i(r_i)$ the social welfare under
$\omega$. Let $U^* = \max_{\omega \in \Omega} U(\omega)$ denote the optimal social welfare
- the maximum possible aggregated user utilities. The
efficiency at an allocation scheme $\omega$ is defined as $\pi(\omega) = U(\omega)/U^*$.
Let $\Omega_0$ denote the set of allocations at Nash
equilibria. When there exists a Nash equilibrium, i.e. $\Omega_0 \neq \emptyset$,
define the efficiency of a game Q to be $\pi(Q) = \min_{\omega \in \Omega_0} \pi(\omega)$.
It is usually the case that $\pi < 1$, i.e. there is an efficiency
loss at a Nash equilibrium. This is the price of anarchy [18]
paid for not having central enforcement of the users' good
behavior. This price is interesting because central control
results in the best possible outcome, but is not possible in
most cases.
Fairness. While the definition of efficiency is standard,
there are multiple ways to define fairness. We consider two
metrics. One is by comparing the users' utilities. The utility
uniformity $\tau(\omega)$ of an allocation scheme $\omega$ is defined to be
$\min_i U_i(\omega) / \max_i U_i(\omega)$, the ratio of the minimum utility to the
maximum utility among the users. Such a definition (or the utility
discrepancy, defined similarly as $\max_i U_i(\omega) / \min_i U_i(\omega)$) is used
extensively in the Computer Science literature. Under this
definition, the utility uniformity $\tau(Q)$ of a game Q is defined to
be $\tau(Q) = \min_{\omega \in \Omega_0} \tau(\omega)$.
The other metric, extensively studied in Economics, is the
concept of envy-freeness [25]. Unlike the utility uniformity
metric, envy-freeness concerns how the user perceives the
value of the share assigned to him, compared to the shares
other users receive. Within such a framework, define the
envy-freeness of an allocation scheme $\omega$ by $\rho(\omega) = \min_{i,j} \frac{U_i(r_i)}{U_i(r_j)}$.
1Alternatives include adding a reservation price or limiting the
lowest allowable bid on each machine. These alternatives,
however, introduce the problem of coming up with the right price
or limit.
When $\rho(\omega) \geq 1$, the scheme is known as an envy-free
allocation scheme. Likewise, the envy-freeness $\rho(Q)$ of a game
Q is defined to be $\rho(Q) = \min_{\omega \in \Omega_0} \rho(\omega)$.
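These three metrics are straightforward to compute for a given allocation. The following Python sketch (our code; the efficiency shown assumes the infinite parallelism model, where $U^* = \sum_j \max_i w_{ij}$ as derived in Section 3, and the cross-utilities are assumed positive) illustrates them for linear utilities:

```python
import numpy as np

def utilities(w, r):
    """U_i = sum_j w_ij r_ij for linear utilities (m x n matrices)."""
    return (w * r).sum(axis=1)

def efficiency(w, r):
    """pi(omega) = U(omega) / U*, with the infinite-parallelism
    optimum U* = sum_j max_i w_ij."""
    return utilities(w, r).sum() / w.max(axis=0).sum()

def utility_uniformity(w, r):
    """tau(omega) = min_i U_i / max_i U_i."""
    u = utilities(w, r)
    return u.min() / u.max()

def envy_freeness(w, r):
    """rho(omega) = min_{i,j} U_i(r_i) / U_i(r_j)."""
    cross = w @ r.T                    # entry (i, j) is U_i(r_j)
    return (np.diag(cross)[:, None] / cross).min()
```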
3. NASH EQUILIBRIUM
In this section, we present some theoretical results
regarding the performance at Nash equilibria under the infinite
parallelism model. We assume that the game is strongly
competitive to guarantee the existence of equilibria. For a
meaningful discussion of efficiency and fairness, we assume
that the users are symmetric by requiring that $X_i = 1$ and
$\sum_{j=1}^{n} w_{ij} = 1$ for all $1 \leq i \leq m$. Or, informally, we
require that all the users have the same budget and that they have
the same utility when they own all the resources. This
precludes the case when a user has an extremely high budget,
resulting in very low efficiency or low fairness at equilibrium.
We first provide a characterization of the equilibria. By
definition, the bidding vectors $x_1, \ldots, x_m$ are a Nash
equilibrium if and only if each player's strategy is the best
response to the group's bids. Since $U_i$ is a linear function
and the domain of each user's bids $\{(x_{i1}, \ldots, x_{in}) \mid \sum_j x_{ij} = X_i \text{ and } x_{ij} \geq 0\}$
is a convex set, the optimality condition is
that there exists $\lambda_i > 0$ such that
$$\frac{\partial U_i}{\partial x_{ij}} = w_{ij} \frac{Y_j - x_{ij}}{Y_j^2} \;\begin{cases} = \lambda_i & \text{if } x_{ij} > 0, \\ < \lambda_i & \text{if } x_{ij} = 0. \end{cases} \qquad (1)$$
Or, intuitively, at an equilibrium, each user has the same
marginal value on machines where he places positive bids
and has lower marginal values on the machines where he
does not bid.
Under the infinite parallelism model, it is easy to compute
the social optimum $U^*$, as it is achieved when we allocate
each machine wholly to the person who has the maximum
weight on the machine, i.e. $U^* = \sum_{j=1}^{n} \max_{1 \leq i \leq m} w_{ij}$.
3.1 Two-player Games
We first show that even in the simplest nontrivial case
when there are two users and two machines, the game has
interesting properties. We start with two special cases to
provide some intuition about the game. The weight
matrices are shown in figure 1(a) and (b), which correspond
respectively to the equal-weight and opposite-weight games.
Let x and y denote the respective bids of users 1 and 2 on
machine 1. Denote by s = x + y and δ = (2 − s)/s.
Equal-weight game. In Figure 1, both users have equal
valuations for the two machines. By the optimality
condition, for the bid vectors to be in equilibrium, they need to
satisfy the following equations according to (1)
α
y
(x + y)2
= (1 − α)
1 − y
(2 − x − y)2
α
x
(x + y)2
= (1 − α)
1 − x
(2 − x − y)2
By simplifying the above equations, we obtain that δ =
1 − 1/α and x = y = α. Thus, there exists a unique Nash
equilibrium of the game where the two users have the same
bidding vector. At the equilibrium, the utility of each user
is 1/2, and the social welfare is 1. On the other hand, the
social optimum is clearly 1. Thus, the equal-weight game
is ideal as the efficiency, utility uniformity, and the
envyfreeness are all 1.
        m1       m2                        m1       m2
  u1    α        1 − α               u1    α        1 − α
  u2    α        1 − α               u2    1 − α    α
  (a) equal-weight game              (b) opposite-weight game
Figure 1: Two special cases of two-player games.
Opposite-weight game. The situation is different for
the opposite game, in which the two users put the exact
opposite weights on the two machines. Assume that $\alpha \geq 1/2$.
Similarly, for the bid vectors to be at the equilibrium,
they need to satisfy
$$\alpha \frac{y}{(x+y)^2} = (1-\alpha) \frac{1-y}{(2-x-y)^2}, \qquad (1-\alpha) \frac{x}{(x+y)^2} = \alpha \frac{1-x}{(2-x-y)^2}$$
By simplifying the above equations, we have that each
Nash equilibrium corresponds to a nonnegative root of the
cubic equation $f(\delta) = \delta^3 - c\delta^2 + c\delta - 1 = 0$, where $c = \frac{1}{2\alpha(1-\alpha)} - 1$.
Clearly, $\delta = 1$ is a root of $f(\delta)$. When $\delta = 1$, we have
that $x = \alpha$, $y = 1 - \alpha$, which is the symmetric equilibrium
that is consistent with our intuition - each user puts a
bid proportional to his preference for the machine. At this
equilibrium, $U = 2 - 4\alpha(1-\alpha)$, $U^* = 2\alpha$, and $U/U^* = (2\alpha + \frac{1}{\alpha}) - 2$,
which is minimized when $\alpha = \frac{\sqrt{2}}{2}$ with the
minimum value of $2\sqrt{2} - 2 \approx 0.828$. However, when $\alpha$ is
large enough, there exist two other roots, corresponding to
less intuitive asymmetric equilibria.
Intuitively, the asymmetric equilibrium arises when user 1
values machine 1 a lot, but by placing even a relatively small
bid on machine 1, he can get most of the machine because
user 2 values machine 1 very little, and thus places an even
smaller bid. In this case, user 1 gets most of machine 1 and
almost half of machine 2.
The threshold is where $f'(1) = 0$, i.e. where $c = 3$, or $\frac{1}{2\alpha(1-\alpha)} = 4$.
This solves to $\alpha_0 = \frac{2+\sqrt{2}}{4} \approx 0.854$. The asymmetric
equilibria at $\delta \neq 1$ are bad, as they yield lower efficiency
than the symmetric equilibrium. Let $\delta_0$ be the minimum
root. When $\alpha \to 1$, $c \to +\infty$, and $\delta_0 = 1/c + o(1/c) \to 0$.
Then, $x, y \to 1$. Thus, $U \to 3/2$, $U^* \to 2$, and $U/U^* \to 0.75$.
From the above simple game, we already observe that the
Nash equilibrium may not be unique, which is different from
many congestion games in which the Nash equilibrium is
unique.
For the general two-player game, we can show that 0.75
is actually the worst-case efficiency bound, with a proof in [5].
Further, at the asymmetric equilibrium, the utility
uniformity approaches 1/2 when $\alpha \to 1$. This is the worst possible
for two-player games because, as we show in Section 3.2, a
user's utility at any Nash equilibrium is at least 1/m in the
m-player game.
Another consequence is that the two-player game is
always envy-free. Suppose that the two users' shares are $r_1 = (r_{11}, \ldots, r_{1n})$
and $r_2 = (r_{21}, \ldots, r_{2n})$, respectively. Then
$U_1(r_1) + U_1(r_2) = U_1(r_1 + r_2) = U_1(1, \ldots, 1) = 1$ because
$r_{1j} + r_{2j} = 1$ for all $1 \leq j \leq n$. Again, since $U_1(r_1) \geq 1/2$,
we have that $U_1(r_1) \geq U_1(r_2)$, i.e. any equilibrium
allocation is envy-free.
Theorem 2. For a two-player game, $\pi(Q) \geq 3/4$, $\tau(Q) \geq 1/2$,
and $\rho(Q) = 1$. All the bounds are tight in the worst case.
3.2 Multi-player Game
For large numbers of players, the loss in social welfare
can unfortunately be large. The following example shows
the worst-case bound. Consider a system with $m = n^2 + n$
players and n machines. Of the players, there are $n^2$ who
have the same weights on all the machines, i.e. $1/n$ on each
machine. The other n players have weight 1, each on a
different machine, and 0 (or a sufficiently small $\epsilon$) on all the
other machines. Clearly, $U^* = n$. The following allocation
is an equilibrium: the first $n^2$ players evenly distribute their
money among all the machines, and the other n players invest all
of their money on their respective favorite machines. Hence,
the total money on each machine is $n + 1$. At this
equilibrium, each of the first $n^2$ players receives a utility of
$\frac{1}{n} \cdot \frac{1/n}{n+1} = \frac{1}{n^2(n+1)}$ from each machine, so these players
together realize a total utility of $n^3 \cdot \frac{1}{n^2(n+1)} = \frac{n}{n+1} < 1$.
The other n players each receive $\frac{1}{n+1}$ of their favorite
machine, so together they realize a total utility of $n \cdot \frac{1}{n+1} < 1$. Therefore,
the total utility at the equilibrium is $< 2$, while the social
optimum is $n = \Theta(\sqrt{m})$. This bound is the worst possible.
What about the utility uniformity of the multi-player
allocation game? We next show that the utility uniformity of
the m-player allocation game is at least $1/m$ (equivalently,
the utility discrepancy cannot exceed m).
Let $(S_1, \ldots, S_n)$ be the current total bids on the n
machines, excluding user i. User i can ensure a utility of $1/m$
by distributing his budget proportionally to the current bids.
That is, user i, by bidding $s_{ij} = X_i S_j / \sum_{l=1}^{n} S_l$ on machine j,
obtains a resource level of:
$$r_{ij} = \frac{s_{ij}}{s_{ij} + S_j} = \frac{S_j / \sum_{l=1}^{n} S_l}{S_j / \sum_{l=1}^{n} S_l + S_j} = \frac{1}{1 + \sum_{l=1}^{n} S_l},$$
where $\sum_{j=1}^{n} S_j = \sum_{j=1}^{m} X_j - X_i = m - 1$.
Therefore, $r_{ij} = \frac{1}{1+m-1} = \frac{1}{m}$. The total utility of user i
is
$$\sum_{j=1}^{n} r_{ij} w_{ij} = (1/m) \sum_{j=1}^{n} w_{ij} = 1/m.$$
Since each user's utility cannot exceed 1, the minimal
possible uniformity is $1/m$.
While the utility uniformity can be small, the envy-freeness,
on the other hand, is bounded by a constant of $2\sqrt{2} - 2 \approx 0.828$,
as shown in [29]. To summarize, we have that
Theorem 3. For the m-player game Q, $\pi(Q) = \Omega(1/\sqrt{m})$,
$\tau(Q) \geq 1/m$, and $\rho(Q) \geq 2\sqrt{2} - 2$. All of these bounds are
tight in the worst case.
4. ALGORITHMS
In the previous section, we present the performance bounds
of the game under the infinite parallelism model. However,
the more interesting questions in practice are how the
equilibrium can be reached and what is the performance at the
Nash equilibrium for the typical distribution of utility
functions. In particular, we would like to know if the intuitive
strategy of each player constantly re-adjusting his bids
according to the best response algorithm leads to the
equilibrium. To answer these questions, we resort to simulations.
In this section, we present the algorithms that we use to
compute or approximate the best response and the social
optimum in our experiments. We consider both the infinite
parallelism and finite parallelism model.
4.1 Infinite Parallelism Model
As we mentioned before, it is easy to compute the social
optimum under the infinite parallelism model - we simply
assign each machine to the user who likes it the most. We
now present the algorithm for computing the best response.
Recall that for weights $w_1, \ldots, w_n$, total bids $y_1, \ldots, y_n$, and
budget X, the best response is the solution of the following
optimization problem:
$$\text{maximize } U = \sum_{j=1}^{n} w_j \frac{x_j}{x_j + y_j} \text{ subject to } \sum_{j=1}^{n} x_j = X \text{ and } x_j \geq 0.$$
To compute the best response, we first sort the machines by
$w_j/y_j$ in decreasing order. Without loss of generality, suppose
that $\frac{w_1}{y_1} \geq \frac{w_2}{y_2} \geq \cdots \geq \frac{w_n}{y_n}$.
Suppose that $x^* = (x^*_1, \ldots, x^*_n)$ is the optimum solution.
We show that if $x^*_i = 0$, then for any $j > i$, $x^*_j = 0$ too.
Suppose this were not true. Then
$$\frac{\partial U}{\partial x_j}(x^*) = w_j \frac{y_j}{(x^*_j + y_j)^2} < w_j \frac{y_j}{y_j^2} = \frac{w_j}{y_j} \leq \frac{w_i}{y_i} = \frac{\partial U}{\partial x_i}(x^*),$$
which contradicts the optimality condition (1).
Suppose that $k = \max\{i \mid x^*_i > 0\}$. Again, by the optimality
condition, there exists $\lambda$ such that $\frac{w_i y_i}{(x^*_i + y_i)^2} = \lambda$ for $1 \leq i \leq k$,
and $x^*_i = 0$ for $i > k$. Equivalently, we have that:
$$x^*_i = \sqrt{\frac{w_i y_i}{\lambda}} - y_i \text{ for } 1 \leq i \leq k, \text{ and } x^*_i = 0 \text{ for } i > k.$$
Substituting into the equation $\sum_{i=1}^{n} x^*_i = X$, we can
solve for $\lambda = \frac{\left(\sum_{i=1}^{k} \sqrt{w_i y_i}\right)^2}{\left(X + \sum_{i=1}^{k} y_i\right)^2}$. Thus,
$$x^*_i = \frac{\sqrt{w_i y_i}}{\sum_{i=1}^{k} \sqrt{w_i y_i}} \left( X + \sum_{i=1}^{k} y_i \right) - y_i.$$
The remaining question is how to determine k. It is the
largest value such that $x^*_k > 0$. Thus, we obtain the
following algorithm to compute the best response of a user:
1. Sort the machines according to $w_i/y_i$ in decreasing order.
2. Compute the largest k such that
$$\frac{\sqrt{w_k y_k}}{\sum_{i=1}^{k} \sqrt{w_i y_i}} \left( X + \sum_{i=1}^{k} y_i \right) - y_k \geq 0.$$
3. Set $x_j = 0$ for $j > k$, and for $1 \leq j \leq k$, set:
$$x_j = \frac{\sqrt{w_j y_j}}{\sum_{i=1}^{k} \sqrt{w_i y_i}} \left( X + \sum_{i=1}^{k} y_i \right) - y_j.$$
The computational complexity of this algorithm is O(n log n),
dominated by the sorting. In practice, the best response can
be computed infrequently (e.g. once a minute), so for a
typically powerful modern host, this cost is negligible.
The best response algorithm must send and receive O(n)
messages because each user must obtain the total bids from
each host. In practice, this is more significant than the
computational cost. Note that hosts reveal to users only the sum
of the bids on them. As a result, hosts do not reveal the
private preferences, or even the individual bids, of one user to
another.
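A direct Python transcription of the three steps above might look as follows (a sketch in our own notation, assuming $y_j > 0$ for every machine, as holds in a strongly competitive game):

```python
import numpy as np

def best_response(w, y, X):
    """Best response for linear utilities (Section 4.1).

    w[j]: the user's weight on machine j; y[j]: total bids of the
    other users on machine j (assumed > 0); X: the user's budget.
    """
    w, y = np.asarray(w, float), np.asarray(y, float)
    order = np.argsort(-(w / y))           # step 1: sort by w_j / y_j
    ws, ys = w[order], y[order]
    root = np.sqrt(ws * ys)
    k = 1
    for t in range(1, len(w) + 1):         # step 2: largest feasible k
        coef = (X + ys[:t].sum()) / root[:t].sum()
        if coef * root[t - 1] - ys[t - 1] >= 0:
            k = t
    coef = (X + ys[:k].sum()) / root[:k].sum()
    x = np.zeros_like(w)                   # step 3: the bids
    x[order[:k]] = coef * root[:k] - ys[:k]
    return x
```

In the simulations of Section 5, each user repeatedly recomputes such a vector against the latest totals $y_j$.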
4.2 Finite Parallelism Model
Recall that in the finite parallelism model, each user i only
places bids on at most $k_i$ machines. Of course, the infinite
parallelism model is just a special case of the finite parallelism
model in which $k_i = n$ for all i. In the finite parallelism
model, computing the social optimum is no longer trivial
due to the bounded parallelism. It can instead be computed by
using a maximum matching algorithm.
Consider the weighted complete bipartite graph $G = U \times V$,
where $U = \{u_{i\ell} \mid 1 \leq i \leq m \text{ and } 1 \leq \ell \leq k_i\}$,
$V = \{1, 2, \ldots, n\}$, with edge weight $w_{ij}$ assigned to the edge $(u_{i\ell}, v_j)$.
A matching of G is a set of edges with disjoint nodes, and
the weight of a matching is the total weight of the edges in
the matching. As a result, the following lemma holds.
Lemma 1. The social optimum is the same as the
maximum weight matching of G.
Thus, we can use a maximum weight matching
algorithm to compute the social optimum. The maximum weight
matching is a classical network problem and can be solved
in polynomial time [8, 9, 14]. We choose to implement the
Hungarian algorithm [14, 19] because of its simplicity. There
may exist a more efficient algorithm for computing the
maximum matching by exploiting the special structure of G. This
remains an interesting open question.
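Lemma 1 can be exercised with any maximum-weight matching solver. The sketch below (our code) uses SciPy's assignment routine as a convenient stand-in for the Hungarian algorithm that the paper implements:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def social_optimum_finite(w, k):
    """Social optimum under finite parallelism via max-weight matching.

    w: m x n weight matrix; k[i]: parallelism bound of user i.
    User i is replicated min(k_i, n) times, one row per replica, so a
    matching assigns each user at most k_i whole machines (Lemma 1).
    """
    n = len(w[0])
    rows = [w[i] for i in range(len(w)) for _ in range(min(k[i], n))]
    W = np.array(rows, dtype=float)
    r, c = linear_sum_assignment(W, maximize=True)
    return W[r, c].sum()
```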
However, we do not know an efficient algorithm to
compute the best response under the finite parallelism model.
Instead, we provide the following local search heuristic.
Suppose we again have n machines with weights $w_1, \ldots, w_n$
and total bids $y_1, \ldots, y_n$. Let the user's budget be X and
the parallelism bound be k. Our goal is to compute an
allocation of X to up to k machines that maximizes the user's
utility.
For a subset of machines A, denote by $x(A)$ the best
response on A without the parallelism bound and by $U(A)$ the
utility obtained by the best response algorithm. The local
search works as follows:
1. Set A to be the k machines with the highest $w_i/y_i$.
2. Compute $U(A)$ by the infinite parallelism best response
algorithm (Sec 4.1) on A.
3. For each $i \in A$ and each $j \notin A$, repeat
4. Let $B = A - \{i\} + \{j\}$, compute $U(B)$.
5. If $U(B) > U(A)$, let $A \leftarrow B$, and go to 2.
6. Output $x(A)$.
Intuitively, in the local search heuristic, we test whether we can
swap a machine in A for one not in A to improve the best
response utility. If yes, we swap the machines and repeat the
process. Otherwise, we have reached a local maximum and
output that value. We suspect that the local maximum that
this algorithm finds is also the global maximum (with
respect to an individual user) and that this process stops after
a small number of iterations, but we are unable to establish
this. However, in our simulations, this algorithm quickly
converges to a high ($\geq 0.7$) efficiency.
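A Python sketch of this heuristic, reusing the best_response function from the earlier sketch (again our own code, with the same y > 0 assumption), is:

```python
import itertools
import numpy as np

def local_search(w, y, X, k):
    """Local search for the finite-parallelism best response.
    Returns bids over all n machines, nonzero on at most k of them."""
    w, y = np.asarray(w, float), np.asarray(y, float)
    n = len(w)

    def evaluate(S):                       # U(A) and x(A) on subset S
        S = sorted(S)
        x = best_response(w[S], y[S], X)
        return float(np.sum(w[S] * x / (x + y[S]))), x

    A = set(np.argsort(-(w / y))[:k])      # step 1: top k by w_i / y_i
    best, _ = evaluate(A)
    improved = True
    while improved:                        # steps 3-5: single swaps
        improved = False
        for i, j in itertools.product(list(A), set(range(n)) - A):
            u, _ = evaluate((A - {i}) | {j})
            if u > best:
                A, best, improved = (A - {i}) | {j}, u, True
                break
    _, x = evaluate(A)                     # step 6: output x(A)
    x_full = np.zeros(n)
    x_full[sorted(A)] = x
    return x_full
```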
4.3 Local Greedy Adjustment
The above best response algorithms only work for the
linear utility functions described earlier. In practice,
utility functions may have a more complicated form, or, even
worse, a user may not have a formulation of his utility
function. We do assume that the user still has a way to measure
his utility, which is the minimum assumption necessary for
any market-based resource allocation mechanism. In these
situations, users can use a more general strategy, the local
greedy adjustment method, which works as follows. A user
finds the two machines that provide him with the highest
and lowest marginal utility. He then moves a fixed small
amount of money from the machine with the lower marginal
utility to the machine with the higher one. This strategy aims
to adjust the bids so that the marginal values at each
machine being bid on are the same. This condition guarantees
that the allocation is optimal when the utility function is
concave. The tradeoff for local greedy adjustment is that it
takes longer to stabilize than best response.
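One adjustment step might be sketched in Python as follows (our code; the step size delta and the finite-difference probe eps are our own choices, and utility is treated as a black box the user can only evaluate):

```python
import numpy as np

def greedy_step(x, utility, delta=0.01, eps=1e-4):
    """One step of local greedy adjustment: move a fixed small amount
    of money from the bid with the lowest marginal utility to the bid
    with the highest marginal utility."""
    n = len(x)
    base = utility(x)
    margins = np.array([utility(x + eps * np.eye(n)[j]) - base
                        for j in range(n)]) / eps   # estimated marginals
    lo = np.argmin(np.where(x > 0, margins, np.inf))  # lowest, among funded
    hi = np.argmax(margins)                           # highest marginal
    move = min(delta, x[lo])
    x = x.copy()
    x[lo] -= move
    x[hi] += move
    return x
```

Iterating such steps equalizes the marginal values across the machines being bid on, which is the optimality condition for concave utilities noted above.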
5. SIMULATION RESULTS
While the analytic results provide us with worst-case
analysis for the infinite parallelism model, in this section we
employ simulations to study the properties of the Nash
equilibria in more realistic scenarios and for the finite parallelism
model. First, we determine whether the user bidding process
converges, and if so, what the rate of convergence is.
Second, in cases of convergence, we look at the performance at
equilibrium, using the efficiency and fairness metrics defined
above.
Iterative Method. In our simulations, each user starts
with an initial bid vector and then iteratively updates his
bids until a convergence criterion (described below) is met.
The initial bid is set proportional to the user's weights on
the machines. We experiment with two update methods: the
best-response method, as described in Sections 4.1 and 4.2,
and the local greedy adjustment method, as described in
Section 4.3.
Convergence Criteria. Convergence time measures how
quickly the system reaches equilibrium. It is particularly
important in the highly dynamic environment of distributed
shared clusters, in which the system's conditions may change
before reaching the equilibrium. Thus, a high convergence
rate may be more significant than the efficiency at the
equilibrium.
There are several different criteria for convergence. The
strongest criterion is to require that there is only negligible
change in the bids of each user. The problem with this
criterion is that it is too strict: users may see negligible change
in their utilities, but according to this definition the
system has not converged. The less strict utility gap criterion
requires there to be only negligible change in the users'
utility. Given users' concern for utility, this is a more natural
definition. Indeed, in practice, a user is probably not
willing to re-allocate his bids dramatically for a small utility
gain. Therefore, we use the utility gap criterion to measure
convergence time for the best-response update method, i.e.
we consider that the system has converged if the utility gap
of each user is smaller than $\epsilon$ (0.001 in our experiments).
However, this criterion does not work for the local greedy
adjustment method because users of that method will
experience constant fluctuations in utility as they move money
around. For this method, we use the marginal utility gap
criterion. We compare the highest and lowest utility
margins on the machines. If the difference is negligible, then we
consider the system to have converged.
In addition to convergence to the equilibrium, we also
consider a criterion from the system provider's view, the
social welfare stabilization criterion. Under this criterion,
a system has stabilized if the change in social welfare is
$\leq \epsilon$. Individual users' utilities may not have converged. This
criterion is useful for evaluating how quickly the system as a
whole reaches a particular efficiency level.
User preferences. We experiment with two models of
user preferences, random distribution and correlated
distribution. With random distribution, users' weights on the
different machines are independently and identically
distributed, according to the uniform distribution. In practice,
users' preferences are probably correlated based on factors
like the hosts' location and the types of applications that
users run. To capture these correlations, we associate with
each user and machine a resource profile vector where each
dimension of the vector represents one resource (e.g., CPU,
memory, and network bandwidth). For a user i with a
profile pi = (pi1, . . . , piℓ), pik represents user i's need for
resource k. For machine j with profile qj = (qj1, . . . , qjℓ),
qjk represents machine j's strength with respect to resource
k. Then, wij is the dot product of user i's and machine
j's resource profiles, i.e., wij = pi · qj = Σ_{k=1}^{ℓ} pik qjk. By
using these profiles, we compress the parameter space and
introduce correlations between users and machines.
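As an illustration, here is a minimal Python sketch of this preference
generator; the per-user normalization at the end mirrors the
normalization applied to uniform preferences below and is an
assumption here.

```python
import random

def correlated_weights(num_users, num_machines, dims=3):
    # Draw a resource profile in U(0,1)^dims for every user and machine,
    # then set w_ij to the dot product of user i's and machine j's profiles.
    users = [[random.random() for _ in range(dims)] for _ in range(num_users)]
    machines = [[random.random() for _ in range(dims)] for _ in range(num_machines)]
    w = [[sum(p * q for p, q in zip(pi, qj)) for qj in machines] for pi in users]
    # Normalize each user's weights to sum to 1 (assumed).
    return [[x / sum(row) for x in row] for row in w]
```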
In the following simulations, we fix the number of
machines to 100 and vary the number of users from 5 to 250
(but we only report the results for the range of 5 − 150
users since the results remain similar for a larger number of
users). Sections 5.1 and 5.2 present the simulation results
when we apply the infinite parallelism and finite parallelism
models, respectively. If the system converges, we report the
number of iterations until convergence. A convergence time
of 200 iterations indicates non-convergence, in which case
we report the efficiency and fairness values at the point we
terminate the simulation.
5.1 Infinite parallelism
In this section, we apply the infinite parallelism model,
which assumes that users can use an unlimited number of
machines. We present the efficiency and fairness at the
equilibrium, compared to two baseline allocation methods:
social optimum and weight-proportional, in which users
distribute their bids proportionally to their weights on the
machines (which may seem a reasonable distribution method
intuitively).
We present results for the two user preference models.
With uniform preferences, users' weights for the different
machines are independently and identically distributed
according to the uniform distribution U(0, 1) (and are
normalized thereafter). In correlated preferences, each user's
and each machine's resource profile vector has three
dimensions, and their values are also taken from the uniform
distribution U(0, 1).
Convergence Time. Figure 2 shows the convergence
time, efficiency and fairness of the infinite parallelism model
under uniform (left) and correlated (right) preferences. Plots
(a) and (b) show the convergence and stabilization time
of the best-response and local greedy adjustment methods.
[Figure 2: Efficiency, utility uniformity, envy-freeness, and convergence time as a function of the number of users under the infinite parallelism model, with uniform (left) and correlated (right) preferences; n = 100. Panels (a)-(b): convergence and stabilization time of the best-response and greedy methods; (c)-(d): efficiency; (e)-(f): utility uniformity; (g)-(h): envy-freeness, each comparing the Nash equilibrium, weight-proportional, and social optimum allocations.]
[Figure 3: Efficiency level over time under the infinite parallelism model (best-response vs. greedy); number of users = 40, n = 100.]
The best-response algorithm converges within a few
iterations for any number of users. In contrast, the local
greedy adjustment algorithm does not converge even within
500 iterations when the number of users is smaller than 60,
but does converge for a larger number of users. We believe
that for small numbers of users, there are dependency cycles
among the users that prevent the system from converging:
one user's decisions affect another user, whose decisions
affect another user, and so on. Regardless, the local greedy
adjustment method stabilizes within 100 iterations.
Figure 3 presents the efficiency over time for a system
with 40 users. It demonstrates that while both adjustment
methods reach the same social welfare, the best-response
algorithm is faster.
In the remainder of this paper, we will refer to the (Nash)
equilibrium, independent of the adjustment method used to
reach it.
Efficiency. Figure 2 (c) and (d) present the efficiency as
a function of the number of users. We present the efficiency
at equilibrium, and use the social optimum and the
weight-proportional static allocation methods for comparison.
The social optimum provides an efficient allocation by definition.
For both user preference models, the efficiency at the
equilibrium is approximately 0.9, independent of the number of
users, which is only slightly worse than the social optimum.
The efficiency at the equilibrium is a ≈ 50% improvement over
the weight-proportional allocation method for uniform
preferences, and a ≈ 30% improvement for correlated preferences.
Fairness. Figure 2(e) and (f) present the utility
uniformity as a function of the number of users, and figures (g)
and (h) present the envy-freeness. While the social optimum
yields perfect efficiency, it has poor fairness. The
weight-proportional method achieves the highest fairness among the
three allocation methods, but the fairness at the equilibrium
is close.
The utility uniformity is slightly better at the equilibrium
under uniform preferences (> 0.7) than under correlated
preferences (> 0.6): when users' preferences are more
aligned, one user's happiness is more likely to come at the
expense of another's. Although utility uniformity decreases
in the number of users, it remains reasonable even for a large
number of users, and flattens out at some point. At the
social optimum, utility uniformity can be infinitely poor, as
some users may be allocated no resources at all. The same is
true with respect to envy-freeness. The difference between
uniform and correlated preferences is best demonstrated in
the social optimum results. When the number of users is
small, it may be possible to satisfy all users to some extent
if their preferences are not aligned, but if they are aligned,
even with a very small number of users, some users get no
resources, thus both utility uniformity and envy-freeness go
to zero. As the number of users increases, it becomes almost
impossible to satisfy all users independent of the existence
of correlation.
These results demonstrate the tradeoff between the
different allocation methods. The efficiency at the equilibrium is
lower than the social optimum, but it performs much
better with respect to fairness. The equilibrium allocation is
completely envy-free under uniform preferences and almost
envy-free under correlated preferences.
5.2 Finite parallelism
[Figure 4: Convergence time under the finite parallelism model (5 machines/user and 20 machines/user); n = 100.]
[Figure 5: Efficiency level over time under the finite parallelism model with the local search algorithm (5 machines/user with 40 users, 20 machines/user with 10 users); n = 100.]
We also consider the finite parallelism model and use the
local search algorithm, as described in Section 4.2, to
adjust users' bids. We again experimented with both the
uniform and correlated preference distributions; since we did not
find significant differences in the results, we present the
simulation results for the uniform distribution only.
In our experiments, the local search algorithm stops quickly
- it usually discovers a local maximum within two
iterations. As mentioned before, we cannot prove that a local
maximum is the global maximum, but our experiments
indicate that the local search heuristic leads to high efficiency.
Convergence time. Let ∆ denote the parallelism bound
that limits the maximum number of machines each user can
bid on. We experiment with ∆ = 5 and ∆ = 20. In both
cases, we use 100 machines and vary the number of users.
Figure 4 shows that the system does not always converge,
but when it does, convergence happens quickly.
Non-convergence occurs when the number of users is between 20
and 40 for ∆ = 5, and between 5 and 10 for ∆ = 20. We
believe that the non-convergence is caused by moderate
competition. No competition allows the system to equilibrate
quickly because users do not have to change their bids in
reaction to changes in others' bids. High competition also
allows convergence because each user's decision has only a
small impact on other users, so the system is more stable
and can gradually reach convergence. However, when there
is moderate competition, one user's decisions may cause
dramatic changes in another's decisions and cause large
fluctuations in bids. In both cases of non-convergence, the ratio of
competitors per machine, δ = m∆/n for m users and n
machines, is in the interval [1, 2]. Although the system does
not converge in these bad ranges, it nonetheless
achieves and maintains a high level of overall efficiency after
a few iterations (as shown in Figure 5).
Performance. In Figure 6, we present the efficiency,
utility uniformity, and envy-freeness at the Nash
equilibrium for the finite parallelism model. When the system does
not converge, we measure performance by taking the
minimum value we observe after running for many iterations.
When ∆ = 5, there is a performance drop, in particular
with respect to the fairness metrics, in the range between
20 and 40 users (where it does not converge). For a larger
number of users, the system converges and achieves a lower
level of utility uniformity, but a high degree of efficiency and
envy-freeness, similar to those under the infinite parallelism
model. As described above, this is due the competition ratio
falling into the head-to-head range. When the parallelism
bound is large (∆ = 20), the performance is closer to the
infinite parallelism model, and we do not observe this drop
in performance.
6. RELATED WORK
There are two main groups of related work in resource
allocation: those that incorporate an economic mechanism,
and those that do not.
One non-economic approach is scheduling (surveyed by
Pinedo [20]). Examples of this approach are queuing in
first-come, first-served (FCFS) order, queueing using the
resource consumption of tasks (e.g., [28]), and scheduling
using combinatorial optimization [19]. These all assume that
the values and resource consumption of tasks are reported
accurately, which does not apply in the presence of strategic
users. We view scheduling and resource allocation as two
separate functions. Resource allocation divides a resource
among different users while scheduling takes a given
allocation and orders a user's jobs.
Examples of the economic approach are Spawn [26], work
by Stoica et al. [24], the Millennium resource allocator [4],
work by Wellman et al. [27], Bellagio [2], and Tycoon [15].
Spawn and the work by Wellman et al. use a reservation
abstraction similar to the way airline seats are allocated.
Unfortunately, reservations have a high latency to acquire
resources, unlike the price-anticipating scheme we consider.
The tradeoff of the price-anticipating schemes is that users
have uncertainty about exactly how much of the resources
they will receive.
Bellagio [3] uses the SHARE centralized allocator. SHARE
allocates resources using a centralized combinatorial auction
that allows users to express preferences with
complementarities. Solving the NP-complete combinatorial auction
problem provides an optimally efficient allocation. The
price-anticipating scheme that we consider does not explicitly
operate on complementarities, thereby possibly losing some
efficiency, but it also avoids the complexity and overhead of
combinatorial auctions.
There have been several analyses [10, 11, 12, 13, 23] of
variations of price-anticipating allocation schemes in the
context of allocation of network capacity for flows. Their
methodology follows the study of congestion (potential) games [17,
22] by relating the Nash equilibrium to the solution of a
(usually convex) global optimization problem. But those
techniques no longer apply to our game because we model
users as having fixed budgets and private preferences for
machines. For example, unlike those games, there may
exist multiple Nash equilibria in our game. Milchtaich [16]
studied congestion games with private preferences but the
technique in [16] is specific to the congestion game.
7. CONCLUSIONS
This work studies the performance of a market-based
mechanism for distributed shared clusters using both analytical
and simulation methods. We show that despite the worst
case bounds, the system can reach a high performance level
at the Nash equilibrium in terms of both efficiency and
fairness metrics. In addition, with a few exceptions under
the finite parallelism model, the system reaches equilibrium
quickly by using the best response algorithm and, when the
number of users is not too small, by the greedy local
adjustment method.
While our work indicates that the price-anticipating scheme
may work well for resource allocation for shared clusters,
there are many interesting directions for future work. One
direction is to consider more realistic utility functions. For
example, we assume that there is no parallelization cost, and
there is no performance degradation when multiple users
share the same machine. In practice, both assumptions may
not be correct. For example, the user must copy code and
data to a machine before running his application there, and
there is overhead for multiplexing resources on a single
machine. When the job size is large enough and the degree
of multiplexing is sufficiently low, we can probably ignore
those effects, but those costs should be taken into account
for a more realistic modeling. Another assumption is that
users have infinite work, so the more resources they can
acquire, the better. In practice, users have finite work. One
approach to address this is to model the user's utility
according to the time to finish a task rather than the amount
of resources he receives.
Another direction is to study the dynamic properties of
the system when the users' needs change over time,
according to some statistical model. In addition to the usual
questions concerning repeated games, it would also be important
to understand how users should allocate their budgets wisely
over time to accommodate future needs.
[Figure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model; n = 100. (a) Limit: 5 machines/user; (b) Limit: 20 machines/user.]
8. ACKNOWLEDGEMENTS
We thank Bernardo Huberman, Lars Rasmusson, Eytan
Adar and Moshe Babaioff for fruitful discussions. We also
thank the anonymous reviewers for their useful comments.
9. REFERENCES
[1] http://planet-lab.org.
[2] A. AuYoung, B. N. Chun, A. C. Snoeren, and A. Vahdat.
Resource Allocation in Federated Distributed Computing
Infrastructures. In Proceedings of the 1st Workshop on
Operating System and Architectural Support for the
On-demand IT InfraStructure, 2004.
[3] B. Chun, C. Ng, J. Albrecht, D. C. Parkes, and A. Vahdat.
Computational Resource Exchanges for Distributed
Resource Allocation. 2004.
[4] B. N. Chun and D. E. Culler. Market-based Proportional
Resource Sharing for Clusters. Technical Report CSD-1092,
University of California at Berkeley, Computer Science
Division, January 2000.
[5] M. Feldman, K. Lai, and L. Zhang. A Price-anticipating
Resource Allocation Mechanism for Distributed Shared
Clusters. Technical report, arXiv, 2005.
http://arxiv.org/abs/cs.DC/0502019.
[6] D. Ferguson, Y. Yemini, and C. Nikolaou. Microeconomic
Algorithms for Load Balancing in Distributed Computer
Systems. In International Conference on Distributed
Computer Systems, pages 491-499, 1988.
[7] I. Foster and C. Kesselman. Globus: A Metacomputing
Infrastructure Toolkit. The International Journal of
Supercomputer Applications and High Performance
Computing, 11(2):115-128, Summer 1997.
[8] M. L. Fredman and R. E. Tarjan. Fibonacci Heaps and
Their Uses in Improved Network Optimization Algorithms.
Journal of the ACM, 34(3):596-615, 1987.
[9] H. N. Gabow. Data Structures for Weighted Matching and
Nearest Common Ancestors with Linking. In Proceedings of
1st Annual ACM-SIAM Symposium on Discrete
algorithms, pages 434-443, 1990.
[10] B. Hajek and S. Yang. Strategic Buyers in a Sum Bid
Game for Flat Networks. Manuscript, http:
//tesla.csl.uiuc.edu/~hajek/Papers/HajekYang.pdf,
2004.
[11] R. Johari and J. N. Tsitsiklis. Efficiency Loss in a Network
Resource Allocation Game. Mathematics of Operations
Research, 2004.
[12] F. P. Kelly. Charging and Rate Control for Elastic Traffic.
European Transactions on Telecommunications, 8:33-37,
1997.
[13] F. P. Kelly and A. K. Maulloo. Rate Control in
Communication Networks: Shadow Prices, Proportional
Fairness and Stability. Operational Research Society,
49:237-252, 1998.
[14] H. W. Kuhn. The Hungarian Method for the Assignment
Problem. Naval Res. Logis. Quart., 2:83-97, 1955.
[15] K. Lai, L. Rasmusson, S. Sorkin, L. Zhang, and B. A.
Huberman. Tycoon: an Implemention of a Distributed
Market-Based Resource Allocation System. Manuscript,
http://www.hpl.hp.com/research/tycoon/papers_and_
presentations, 2004.
[16] I. Milchtaich. Congestion Games with Player-Specific
Payoff Functions. Games and Economic Behavior,
13:111-124, 1996.
[17] D. Monderer and L. S. Shapley. Potential Games. Games
and Economic Behavior, 14:124-143, 1996.
[18] C. Papadimitriou. Algorithms, Games, and the Internet. In
Proceedings of 33rd STOC, 2001.
[19] C. H. Papadimitriou and K. Steiglitz. Combinatorial
Optimization. Dover Publications, Inc., 1982.
[20] M. Pinedo. Scheduling. Prentice Hall, 2002.
[21] O. Regev and N. Nisan. The Popcorn Market: Online
Markets for Computational Resources. In Proceedings of
1st International Conference on Information and
Computation Economies, pages 148-157, 1998.
[22] R. W. Rosenthal. A Class of Games Possessing
Pure-Strategy Nash Equilibria. International Journal of
Game Theory, 2:65-67, 1973.
[23] S. Sanghavi and B. Hajek. Optimal Allocation of a
Divisible Good to Strategic Buyers. Manuscript, http:
//tesla.csl.uiuc.edu/~hajek/Papers/OptDivisible.pdf,
2004.
[24] I. Stoica, H. Abdel-Wahab, and A. Pothen. A
Microeconomic Scheduler for Parallel Computers. In
Proceedings of the Workshop on Job Scheduling Strategies
for Parallel Processing, pages 122-135, April 1995.
[25] H. R. Varian. Equity, Envy, and Efficiency. Journal of
Economic Theory, 9:63-91, 1974.
[26] C. A. Waldspurger, T. Hogg, B. A. Huberman, J. O.
Kephart, and S. Stornetta. Spawn: A Distributed
Computational Economy. IEEE Transactions on Software
Engineering, 18(2):103-117, February 1992.
[27] M. P. Wellman, W. E. Walsh, P. R. Wurman, and J. K.
MacKie-Mason. Auction Protocols for Decentralized
Scheduling. Games and Economic Behavior, 35:271-303,
2001.
[28] A. Wierman and M. Harchol-Balter. Classifying Scheduling
Policies with respect to Unfairness in an M/GI/1. In
Proceedings of the ACM SIGMETRICS 2003 Conference
on Measurement and Modeling of Computer Systems, 2003.
[29] L. Zhang. On the Efficiency and Fairness of a Fixed Budget
Resource Allocation Game. Manuscript, 2004.
| algorithm;nash equilibrium;distributed shared cluster;fairness;parallelism;efficiency;anarchy price;simulation;price-anticipate mechanism;utility;bidding mechanism;resource allocation;price-anticipating scheme;price of anarchy |
train_J-55 | From Optimal Limited To Unlimited Supply Auctions | We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of at least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values. | 1. INTRODUCTION
The research area of optimal mechanism design looks at
designing a mechanism to produce the most desirable outcome for the
entity running the mechanism. This problem is well studied for the
auction design problem where the optimal mechanism is the one
that brings the seller the most profit. Here, the classical approach
is to design such a mechanism given the prior distribution from
which the bidders' preferences are drawn (see, e.g., [12, 4]).
Recently Goldberg et al. [9] introduced the use of worst-case
competitive analysis (see, e.g., [3]) to analyze the performance of auctions
that have no knowledge of the prior distribution. The goal of such
work is to design an auction that achieves a large constant fraction
of the profit attainable by an optimal mechanism that knows the
prior distribution in advance. Positive results in this direction are
fueled by the observation that in auctions for a number of identical
units, much of the distribution from which the bidders are drawn
can be deduced on the fly by the auction as it is being run [9, 14,
2].
The performance of an auction in such a worst-case competitive
analysis is measured by its competitive ratio, the ratio between a
benchmark performance and the auction's performance on the input
distribution that maximizes this ratio. The holy grail of the
worst-case competitive analysis of auctions is the auction that achieves
the optimal competitive ratio (as small as possible). Since [9] this
search has led to improved understanding of the nature of the
optimal auction, the techniques for on-the-fly pricing in these
scenarios, and the competitive ratio of the optimal auction [5, 7, 8]. In this
paper we continue this line of research by improving in all of these
directions. Furthermore, we give evidence corroborating the
conjecture that the form of the optimal auction is independent of the
benchmark used in the auction"s competitive analysis. This result
further validates the use of competitive analysis in gauging auction
performance.
We consider the single item, multi-unit, unit-demand auction
problem. In such an auction there are many units of a single item
available for sale to bidders who each desire only one unit. Each bidder
has a valuation representing how much the item is worth to him.
The auction is performed by soliciting a sealed bid from each of
the bidders and deciding on the allocation of units to bidders and
the prices to be paid by the bidders. The bidders are assumed to bid
so as to maximize their personal utility, the difference between their
valuation and the price they pay. To handle the problem of
designing and analyzing auctions where bidders may falsely declare their
valuations to get a better deal, we will adopt the solution concept
of truthful mechanism design (see, e.g., [9, 15, 13]). In a truthful
auction, revealing one's true valuation as one's bid is an optimal
strategy for each bidder regardless of the bids of the other bidders.
In this paper, we will restrict our attention to truthful (a.k.a.,
incentive compatible or strategyproof) auctions.
A particularly interesting special case of the auction problem is
the unlimited supply case. In this case the number of units for sale is
at least the number of bidders in the auction. This is natural for the
sale of digital goods where there is negligible cost for duplicating
and distributing the good. Pay-per-view television and
downloadable audio files are examples of such goods.
The competitive framework introduced in [9] and further refined
in [5] uses the profit of the optimal omniscient single priced
mechanism that sells at least two units as the benchmark for competitive
analysis. The assumption that two or more units are sold is
necessary because in the worst case it is impossible to obtain a constant
fraction of the profit of the optimal mechanism when it sells only
one unit [9]. In this framework for competitive analysis, an auction
is said to be β-competitive if it achieves a profit that is within a
factor of β ≥ 1 of the benchmark profit on every input. The optimal
auction is the one which is β-competitive with the minimum value
of β.
Previous to this work, the best known auction for the unlimited
supply case had a competitive ratio of 3.39 [7] and the best lower
bound known was 2.42 [8]. For the limited supply case, auctions
can achieve substantially better competitive ratios. When there are
only two units for sale, the optimal auction gives a competitive ratio
of 2, which matches the lower bound for two units. When there
are three units for sale, the best previously known auction had a
competitive ratio of 2.3, compared with a lower bound of 13/6 ≈
2.17 [8].
The results of this paper are as follows:
• We give the auction for three units that is optimally
competitive against the profit of the omniscient single priced
mechanism that sells at least two units. This auction achieves a
competitive ratio of 13/6, matching the lower bound from
[8] (Section 3).
• We show that the form of the optimal auction is
independent of the benchmark used in competitive analysis. In doing
so, we give an optimal three bidder auction for generalized
benchmarks (Section 4).
• We give a general technique for converting a limited supply
auction into an unlimited supply auction where it is possible
to use the competitive ratio of the limited supply auction to
obtain a bound on the competitive ratio of the unlimited
supply auction. We refer to auctions derived from this
framework as aggregation auctions (Section 5).
• We improve on the best known competitive ratio by
proving that the aggregation auction constructed from our optimal
three-unit auction is 3.25-competitive (Section 5.1).
• Assuming the conjecture that the optimal ℓ-unit auction
has a competitive ratio matching the lower bound proved
in [8], we show that this optimal auction for ℓ ≥ 3 will, on some
inputs, offer prices that are higher than any bid in that input
(Section 6). For the three-unit case, where we have shown that
the lower bound of [8] is tight, this observation led to our
construction of the optimal three-unit auction.
2. DEFINITIONS AND BACKGROUND
We consider single-round, sealed-bid auctions for a set of
identical units of an item to bidders who each desire one unit. As
mentioned in the introduction, we adopt the game-theoretic solution
concept of truthful mechanism design. A useful simplification of
the problem of designing truthful auctions is obtained through the
following algorithmic characterization [9]. Related formulations to
this one have appeared in numerous places in recent literature (e.g.,
[1, 14, 5, 10]).
DEFINITION 1. Given a bid vector of n bids, b = (b1, . . . , bn),
let b-i denote the vector b with bi replaced with a '?', i.e.,
b-i = (b1, . . . , bi−1, ?, bi+1, . . . , bn).
DEFINITION 2. Let f be a function from bid vectors (with a
'?') to prices (non-negative real numbers). The deterministic
bid-independent auction defined by f, BIf, works as follows. For each
bidder i:
1. Set ti = f(b-i).
2. If ti < bi, bidder i wins at price ti.
3. If ti > bi, bidder i loses.
4. Otherwise, (ti = bi) the auction can either accept the bid at
price ti or reject it.
A randomized bid-independent auction is a distribution over
deterministic bid-independent auctions.
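For illustration, a minimal Python sketch of BIf; None stands in for
the '?', and on a tie ti = bi this sketch accepts the bid, one of the
two allowed choices.

```python
def bid_independent_auction(bids, f):
    # f maps a masked bid vector (one entry replaced by None) to a price.
    outcomes = []
    for i, b in enumerate(bids):
        masked = bids[:i] + [None] + bids[i + 1:]  # b with b_i replaced by '?'
        t = f(masked)
        wins = t <= b  # accept on ties
        outcomes.append((wins, t if wins else 0.0))
    return outcomes
```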
The proof of the following theorem can be found, for example,
in [5].
THEOREM 1. An auction is truthful if and only if it is equivalent
to a bid-independent auction.
Given this equivalence, we will use the terminology
bid-independent and truthful interchangeably.
For a randomized bid-independent auction, f(b-i) is a random
variable. We denote the probability density of f(b-i) at z by ρb-i (z).
We denote the profit of a truthful auction A on input b as A(b).
The expected profit of the auction, E[A(b)], is the sum of the
expected payments made by each bidder, which we denote by pi(b)
for bidder i. Clearly, the expected payment of each bid satisfies
pi(b) = ∫_0^{bi} x ρb-i(x) dx.
2.1 Competitive Framework
We now review the competitive framework from [5]. In order
to evaluate the performance of auctions with respect to the goal of
profit maximization, we introduce the optimal single price
omniscient auction F and the related omniscient auction F(2).
DEFINITION 3. Given a vector b = (b1, . . . , bn), let b(i)
represent the i-th largest value in b.
The optimal single price omniscient auction, F, is defined as
follows. Auction F on input b determines the value k such that
kb(k) is maximized. All bidders with bi ≥ b(k) win at price b(k); all
remaining bidders lose. The profit of F on input b is thus F(b) =
max1≤k≤n kb(k).
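For concreteness, a short Python sketch computing F from a bid vector;
the same routine with min_winners = 2 computes the benchmark F(2)
introduced next.

```python
def opt_single_price(bids, min_winners=1):
    # Best profit k * b_(k) of a single sale price selling to at least
    # min_winners bidders: F when min_winners = 1, F^(2) when it is 2.
    b = sorted(bids, reverse=True)
    return max(k * b[k - 1] for k in range(min_winners, len(b) + 1))

# Example: opt_single_price([4, 3, 1]) == 6, opt_single_price([4, 3, 1], 2) == 6.
```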
In the competitive framework of [5] and subsequent papers, the
performance of a truthful auction is gauged in comparison to F(2),
the optimal single priced auction that sells at least two units. The
profit of F(2) is max_{2≤k≤n} k b(k). There are a number of reasons
to choose this benchmark for comparison; interested readers should
see [5] or [6] for a more detailed discussion.
Let A be a truthful auction. We say that A is β-competitive
against F(2) (or just β-competitive) if for all bid vectors b, the
expected profit of A on b satisfies E[A(b)] ≥ F(2)(b)/β.
In Section 4 we generalize this framework to other profit
benchmarks.
2.2 Scale Invariant and Symmetric Auctions
A symmetric auction is one where the auction outcome is
unchanged when the input bids arrive in a different permutation.
Goldberg et al. [8] show that a symmetric auction achieves the optimal
competitive ratio. This is natural as the profit benchmark we
consider is symmetric, and it allows us to consider only symmetric
auctions when looking for the one with the optimal competitive
ratio.
An auction defined by bid-independent function f is scale
invariant if, for all i and all z, Pr[f(b-i) ≥ z] = Pr[f(cb-i) ≥ cz].
It is conjectured that the assumption of scale invariance is without
loss of generality. Thus, we are motivated to consider
symmetric scale-invariant auctions. When specifying a symmetric
scale-invariant auction, we can assume that f is only a function of the
relative magnitudes of the n − 1 bids in b-i and that one of the
bids is bj = 1. It will be convenient to specify such auctions via the
density function of f(b-i), ρb-i(z). It is enough to specify such a
density function of the form ρ1,z1,...,zn−1(z) with 1 ≤ zi ≤ zi+1.
2.3 Limited Supply Versus Unlimited Supply
Following [8], throughout the remainder of this paper we will
be making the assumption that n = ℓ, i.e., the number of bidders
is equal to the number of units for sale. This is without loss of
generality as (a) any lower bound that applies to the n = ℓ case
also extends to the case where n ≥ ℓ [8], and (b) there is a
reduction from the unlimited supply auction problem to the limited
supply auction problem that takes an unlimited supply auction that
is β-competitive with F(2) and constructs a limited supply auction,
parameterized by ℓ, that is β-competitive with F(2,ℓ), the optimal
omniscient auction that sells between 2 and ℓ units [6].
Henceforth, we will assume that we are in the unlimited supply
case, and we will examine lower bounds for limited supply
problems by placing a restriction on the number of bidders in the
auction.
2.4 Lower Bounds and Optimal Auctions
Frequently in this paper, we will refer to the best known lower
bound on the competitive ratio of truthful auctions:
THEOREM 2. [8] The competitive ratio of any auction on n
bidders is at least

1 − Σ_{i=2}^{n} (−1/n)^{i−1} (i/(i − 1)) C(n − 1, i − 1),

where C(n − 1, i − 1) denotes the binomial coefficient.
DEFINITION 4. Let Υn denote the n-bidder auction that achieves
the optimal competitive ratio.
This bound is derived by analyzing the performance of any
auction on the following distribution B. In each random bid
vector B, each bid Bi is drawn i.i.d. from the distribution
satisfying Pr[Bi ≥ s] = 1/s for all s ≥ 1.
In the two-bidder case, this lower bound is 2. This is achieved by
Υ2, which is the 1-unit Vickrey auction.¹ In the three-bidder case,
this lower bound is 13/6. In the next section, we define the
auction Υ3 which matches this lower bound. In the four-bidder case,
this lower bound is 215/96 ≈ 2.24. In the limit as the number of bidders
grows, this lower bound approaches a number which is
approximately 2.42.
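The bound of Theorem 2 is easy to check numerically; the following
Python snippet evaluates it in exact arithmetic and reproduces the
values just quoted.

```python
from fractions import Fraction
from math import comb

def lower_bound(n):
    # 1 - sum_{i=2}^{n} (-1/n)^(i-1) * (i/(i-1)) * C(n-1, i-1)
    return 1 - sum(Fraction(-1, n) ** (i - 1) * Fraction(i, i - 1) * comb(n - 1, i - 1)
                   for i in range(2, n + 1))

# lower_bound(2) == 2, lower_bound(3) == Fraction(13, 6),
# lower_bound(4) == Fraction(215, 96), and the values approach ~2.42.
```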
It is conjectured that this lower bound is tight for any number of
bidders and that the optimal auction, Υn, matches it.
¹ The 1-unit Vickrey auction sells to the highest bidder at the second
highest bid value.
2.5 Profit Extraction
In this section we review the truthful profit extraction mechanism
ProfitExtractR. This mechanism is a special case of a general
cost-sharing schema due to Moulin and Shenker [11].
The goal of profit extraction is, given bids b, to extract a target
value R of profit from some subset of the bidders.
ProfitExtractR: Given bids b, find the largest k such
that the highest k bidders can equally share the cost
R. Charge each of these bidders R/k. If no subset
of bidders can cover the cost, the mechanism has no
winners.
Important properties of this auction are as follows:
• ProfitExtractR is truthful.
• If R ≤ F(b), ProfitExtractR(b) = R; otherwise it has no
winners and no revenue.
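A minimal Python sketch of this mechanism:

```python
def profit_extract(bids, R):
    # Find the largest k such that the top k bidders can equally share R,
    # and charge each of them R/k; returns the list of prices charged.
    b = sorted(bids, reverse=True)
    for k in range(len(b), 0, -1):
        if b[k - 1] >= R / k:  # all of the top k bidders accept the price R/k
            return [R / k] * k
    return []  # R > F(bids): no winners and no revenue
```

The total extracted is R whenever R ≤ F(bids) and zero otherwise,
matching the properties above.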
We will use this profit extraction mechanism in Section 5 with
the following intuition. Such a profit extractor makes it possible to
treat a subset of bidders S as a single bid with value F(S). Note
that given a single bid, b, a truthful mechanism might offer it price
t; if t ≤ b then the bidder wins and pays t, and otherwise the
bidder pays nothing (and loses). Likewise, a mechanism can offer
the set of bidders S a target revenue R. If R ≤ F(S), then
ProfitExtractR raises R from S; otherwise, it raises no
revenue from S.
3. AN OPTIMAL AUCTION FOR THREE
BIDDERS
In this section we define the optimal auction for three bidders,
Υ3, and prove that it indeed matches the known lower bound of
13/6. We follow the definition and proof with a discussion of how
this auction was derived.
DEFINITION 5. Υ3 is scale-invariant and symmetric and is given
by the bid-independent function with density function

ρ1,x(z) =

for x ≤ 3/2:
  1 with probability 9/13,
  z with probability density g(z) for z > 3/2;

for x > 3/2:
  1 with probability 9/13 − ∫_{3/2}^{x} z g(z) dz,
  x with probability ∫_{3/2}^{x} (z + 1) g(z) dz,
  z with probability density g(z) for z > x;

where g(x) = (2/13)/(x − 1)³.
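For illustration, here is a Python sketch of a sampler for ρ1,x. The
closed forms it uses, namely that the mass of g beyond w is
(1/13)/(w − 1)² and that ∫_{3/2}^{x} z g(z) dz = 8/13 − 2/(13(x − 1)) − 1/(13(x − 1)²),
are our own evaluation of the integrals of g.

```python
import random

def upsilon3_price(x):
    # Sample a sale price from rho_{1,x} of Definition 5 (assumes x >= 1).
    def tail_sample(w):
        # Inverse-CDF sample of z > w under g: Pr[Z > z | Z > w] = ((w-1)/(z-1))^2.
        return 1.0 + (w - 1.0) / (1.0 - random.random()) ** 0.5
    u = random.random()
    if x <= 1.5:
        return 1.0 if u < 9 / 13 else tail_sample(1.5)
    v = x - 1.0
    p1 = 9 / 13 - (8 / 13 - 2 / (13 * v) - 1 / (13 * v * v))  # Pr[price = 1]
    px = 12 / 13 - 2 / (13 * v) - 2 / (13 * v * v)            # Pr[price = x]
    if u < p1:
        return 1.0
    if u < p1 + px:
        return x
    return tail_sample(x)
```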
THEOREM 3. The Υ3 auction has a competitive ratio of 13/6 ≈
2.17, which is optimal. Furthermore, the auction raises exactly
(6/13) F(2) on every input with non-identical bids.
PROOF. Consider the bids 1, x, y, with 1 < x < y. There are
three cases.
CASE 1 (x < y ≤ 3/2): F(2) = 3. The auction must raise
expected revenue of at least 18/13 on these bids. The bidder with
valuation x will pay 1 with probability 9/13, and the bidder with
valuation y will pay 1 with probability 9/13. Therefore Υ3 raises
18/13 on these bids.
CASE 2 (x ≤ 3/2 < y): F(2) = 3. The auction must raise
expected revenue of at least 18/13 on these bids. The bidder with
valuation x will pay 9/13 − ∫_{3/2}^{y} z g(z) dz in expectation. The
bidder with valuation y will pay 9/13 + ∫_{3/2}^{y} z g(z) dz in expectation.
Therefore Υ3 raises 18/13 on these bids.
CASE 3 (3/2 < x ≤ y): F(2) = 2x. The auction must raise
expected revenue of at least 12x/13 on these bids. Consider the
revenue raised from all three bidders:

E[Υ3(b)] = p(1, x, y) + p(x, 1, y) + p(y, 1, x)
         = 0 + 9/13 − ∫_{3/2}^{y} z g(z) dz + 9/13 − ∫_{3/2}^{x} z g(z) dz
           + x ∫_{3/2}^{x} (z + 1) g(z) dz + ∫_{x}^{y} z g(z) dz
         = 18/13 + (x − 2) ∫_{3/2}^{x} z g(z) dz + x ∫_{3/2}^{x} g(z) dz
         = 12x/13.

The final equation comes from substituting in g(x) = (2/13)/(x − 1)³ and
expanding the integrals. Note that the fraction of F(2) raised on
every input is identical. If any of the inequalities 1 ≤ x ≤ y
are not strict, the same proof applies, giving a lower bound on the
auction's profit; however, this bound may no longer be tight.
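The final substitution in Case 3 can be verified numerically; a Python
sketch using our closed-form evaluations of the two integrals over
[3/2, x]:

```python
def case3_revenue(x):
    # Expected revenue of Upsilon_3 on bids (1, x, y) with 3/2 < x <= y,
    # via the closed forms of the two integrals appearing in the proof.
    v = x - 1.0
    int_zg = 8 / 13 - 2 / (13 * v) - 1 / (13 * v * v)  # integral of z*g(z) over [3/2, x]
    int_g = 4 / 13 - 1 / (13 * v * v)                  # integral of g(z) over [3/2, x]
    return 18 / 13 + (x - 2) * int_zg + x * int_g

# case3_revenue(x) equals 12*x/13 for every x > 3/2 (up to float error).
```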
Motivation for Υ3
In this section, we will conjecture that a particular input distribution
is worst-case, and show, as a consequence, that all inputs are
worstcase in the optimal auction. By applying this consequence, we will
derive an optimal auction for three bidders.
A truthful, randomized auction on n bidders can be represented
by a randomized function f : R^{n−1} → R that maps masked
bid vectors to prices in R. By normalization, we can assume that
the lowest possible bid is 1. Recall that ρb-i (z) = Pr[f(b-i) = z].
The optimal auction for the finite auction problem can be found
by the following optimization problem, in which the variables are
ρb-i(z):

maximize r
subject to   Σ_{i=1}^{n} ∫_{1}^{bi} z ρb-i(z) dz ≥ r F(2)(b) for all b,
             ∫_{1}^{∞} ρb-i(z) dz = 1,
             ρb-i(z) ≥ 0.
This set of integral inequalities is difficult to maximize over.
However, by guessing which constraints are tight and which are
slack at the optimum, we will be able to derive a set of differential
equations for which any feasible solution is an optimal auction.
As we discuss in Section 2.4, in [8], the authors define a
distribution and use it to find a lower bound on the competitive ratio of
the optimal auction. For two bidders, this bid distribution is the
worst-case input distribution. We guess (and later verify) that this
distribution is the worst-case input distribution for three bidders as
well. Since this distribution has full support over the set of all bid
vectors and a worst-case distribution puts positive probability only
on worst-case inputs, we can therefore assume that all but a
measure zero set of inputs is worst-case for the optimal auction. In the
optimal two-bidder auction, all inputs with non-identical bids are
worst-case, so we will assume the same for three bidders.
The guess that these constraints are tight allows us to transform
the optimization problem into a feasibility problem constrained by
differential equations. If the solution to these equations has value
matching the lower bound obtained from the worst-case
distribution, then this solution is the optimal auction and our
conjectured choice of worst-case distribution is correct.
In Section 6 we show that the optimal auction must sometimes
place probability mass on sale prices above the highest bid. This
motivates considering symmetric scale-invariant auctions for three
bidders with probability density function, ρ1,x(z), of the following
form:
ρ1,x(z) =
  1 with discrete probability a(x),
  x with discrete probability b(x),
  z with probability density g(z) for z > x.
In this auction, the sale price for the first bidder is either one
of the latter two bids, or higher than either bid with a probability
density which is independent of the input.
The feasibility problem which arises from the linear optimization
problem by assuming the constraints are tight is as follows:

a(y) + a(x) + x b(x) + ∫_{x}^{y} z g(z) dz = r max(3, 2x) for all x < y,
a(x) + b(x) + ∫_{x}^{∞} g(z) dz = 1,
a(x) ≥ 0,  b(x) ≥ 0,  g(z) ≥ 0.
Solving this feasibility problem gives the auction Υ3 proposed
above. The proof of its optimality validates its proposed form.
Finding a simple restriction on the form of n-bidder auctions for
n > 3 under which the optimal auction can be found analytically
as above remains an open problem.
4. GENERALIZED PROFIT BENCHMARKS
In this section, we widen our focus beyond auctions that compete
with F(2)
to consider other benchmarks for an auction"s profit. We
will show that, for three bidders, the form of the optimal auction
is essentially independent of the benchmark profit used. This
results strongly corroborates the worst-case competitive analysis of
auctions by showing that our techniques allow us to derive auctions
which are competitive against a broad variety of reasonable
benchmarks rather than simply against F(2)
.
Previous work in competitive analysis of auctions has focused on
the question of designing the auction with the best competitive
ratio against F(2), the profit of the optimal omniscient single-priced
mechanism that sells at least two items. However, it is reasonable to
consider other benchmarks. For instance, one might wish to
compete against V*, the profit of the k-Vickrey auction with the
optimal-in-hindsight choice of k.²
Alternatively, if an auction is being used as
a subroutine in a larger mechanism, one might wish to choose the
auction which is optimally competitive with a benchmark specific
to that purpose.
Recall that F(2)(b) = max_{2≤k≤n} k b(k). We can generalize this
definition to Gs, parameterized by s = (s2, . . . , sn) and defined as

Gs(b) = max_{2≤k≤n} s_k b(k).
When considering Gs we assume without loss of generality that
si < si+1 as otherwise the constraint imposed by si+1 is irrelevant.
Note that F(2)
is the special case of Gs with si = i, and that V∗
=
Gs with si = i − 1.
² Recall that the k-Vickrey auction sells a unit to each of the
highest k bidders at a price equal to the (k+1)-st highest bid, b(k+1),
achieving a profit of k b(k+1).
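Computing Gs is a one-line change to the sketch of F given earlier:

```python
def g_benchmark(bids, s):
    # G_s(b) = max_{2 <= k <= n} s_k * b_(k), where s = (s_2, ..., s_n).
    b = sorted(bids, reverse=True)
    return max(sk * b[k - 1] for k, sk in zip(range(2, len(b) + 1), s))

# F^(2) is g_benchmark(b, range(2, len(b) + 1)); V* uses s_k = k - 1.
```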
Competing with Gs,t
We will now design a three-bidder auction Υ3^{s,t} that achieves the
optimal competitive ratio against Gs,t. As before, we will first find
a lower bound on the competitive ratio and then design an auction
to meet that bound.
We can lower bound the competitive ratio of Υ3^{s,t} using the same
worst-case distribution from [8] that we used against F(2).
Evaluating the performance of any auction competing against Gs,t on
this distribution yields the following theorem. We denote the
optimal auction for three bidders against Gs,t as Υ3^{s,t}.
THEOREM 4. The optimal three-bidder auction, Υ3^{s,t},
competing against Gs,t(b) = max(s b(2), t b(3)), has a competitive ratio of
at least (s² + t²)/(2t).
The proof can be found in the appendix.
Similarly, we can find the optimal auction against Gs,t using the
same technique we used to solve for the three-bidder auction with
the best competitive ratio against F(2).
DEFINITION 6. Υ3^{s,t} is scale-invariant and symmetric and is given
by the bid-independent function with density function

ρ1,x(z) =

for x ≤ t/s:
  1 with probability t²/(s² + t²),
  z with probability density g(z) for z > t/s;

for x > t/s:
  1 with probability t²/(s² + t²) − ∫_{t/s}^{x} z g(z) dz,
  x with probability ∫_{t/s}^{x} (z + 1) g(z) dz,
  z with probability density g(z) for z > x;

where g(x) = (2(t − s)²/(s² + t²))/(x − 1)³.
THEOREM 5. Υ3^{s,t} is ((s² + t²)/(2t))-competitive with Gs,t.

This auction, like Υ3, can be derived by reducing the optimization
problem to a feasibility problem, guessing that the optimal solution
has the same form as Υ3, and solving. The auction is optimal
because it matches the lower bound found above. Note that the form
of Υ3^{s,t} is essentially the same as that of Υ3, but the probability of
each price is scaled depending on the values of s and t.
That our auction for three bidders matches the lower bound
computed by the input distribution used in [8] is strong evidence that
this input distribution is the worst-case input distribution for any
number of bidders and any generalized profit benchmark.
Furthermore, we strongly suspect that for any number of bidders, the form
of the optimal auction will be independent of the benchmark used.
5. AGGREGATION AUCTIONS
We have seen that optimal auctions for small cases of the
limitedsupply model can be found analytically. In this section, we will
construct a schema for turning limited supply auctions into
unlimited supply auctions with a good competitive ratio.
As discussed in Section 2.5, the existence of a profit
extractor, ProfitExtractR, allows an auction to treat a set of bids S
as a single bid with value F(S). Given n bidders and an
auction, Am, for m < n bidders, we can convert the m-bidder
auction into an n-bidder auction by randomly partitioning the bidders
into m subsets and then treating each subset as a single bidder (via
ProfitExtractR) and running the m-bidder auction.
DEFINITION 7. Given a truthful m-bidder auction, Am, the
m-aggregation auction for Am, AggAm, works as follows:
1. Cast each bid uniformly at random into one of m bins,
resulting in bid vectors b(1), . . . , b(m).
2. For each bin j, compute the aggregate bid Bj = F(b(j)).
Let B be the vector of aggregate bids, and B−j be the
aggregate bids for all bins other than j.
3. Compute the aggregate price Tj = f(B−j), where f is the
bid-independent function for Am.
4. For each bin j, run ProfitExtractTj on b(j).
Since Am and ProfitExtractR are truthful, Tj is computed
independently of any bid in bin j, and thus the price offered to any
bidder in b(j) is independent of his bid; therefore,
THEOREM 6. If Am is truthful, the m-aggregation auction for
Am, AggAm, is truthful.
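A minimal Python sketch of the four steps of Definition 7, reusing the
opt_single_price and profit_extract sketches from above; f is the
bid-independent function of Am, applied to a masked vector of aggregate
bids.

```python
import random

def aggregation_auction(bids, m, f):
    bins = [[] for _ in range(m)]
    for b in bids:                          # step 1: random partition into m bins
        bins[random.randrange(m)].append(b)
    B = [opt_single_price(s) if s else 0.0 for s in bins]  # step 2: B_j = F(b^(j))
    revenue = 0.0
    for j, s in enumerate(bins):
        T_j = f(B[:j] + [None] + B[j + 1:])     # step 3: aggregate price for bin j
        revenue += sum(profit_extract(s, T_j))  # step 4: raises T_j iff T_j <= B_j
    return revenue
```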
Note that this schema yields a new way of understanding the
Random Sampling Profit Extraction (RSPE) auction [5] as the
simplest case of an aggregation auction. It is the 2-aggregation auction
for Υ2, the 1-unit Vickrey auction.
To analyze AggAm, consider throwing k balls into m labeled
bins. Let k represent a configuration of balls in bins, so that ki is
equal to the number of balls in bin i, and k(i) is equal to the number
of balls in the i-th largest bin. Let Km,k represent the set of all
possible configurations of k balls in m bins. We write the multinomial
coefficient of k as C(k; k1, . . . , km) = k!/(k1! · · · km!). The
probability that a particular configuration k arises by throwing balls into
bins uniformly at random is C(k; k1, . . . , km) m^{−k}.
THEOREM 7. Let Am be an auction with competitive ratio β.
Then the m-aggregation auction for Am, AggAm, raises the
following fraction of the optimal revenue F(2)(b):

E[AggAm(b)] / F(2)(b) ≥ min_{k≥2} Σ_{k∈Km,k} F(2)(k) C(k; k1, . . . , km) / (β k m^k).
PROOF. By definition, F(2) sells to k ≥ 2 bidders at a single
price p. Let kj be the number of such bidders in b(j). Clearly,
F(b(j)) ≥ p kj. Therefore,

F(2)(F(b(1)), . . . , F(b(m))) / F(2)(b) ≥ F(2)(p k1, . . . , p km) / (p k) = F(2)(k1, . . . , km) / k.

The inequality follows from the monotonicity of F(2), and the
equality from the homogeneity of F(2).
ProfitExtractTj will raise Tj if Tj ≤ Bj, and no profit
otherwise. Thus, E[AggAm(b)] ≥ E[F(2)(B)]/β. The theorem
follows by rewriting this expectation as a sum over all k in Km,k.
5.1 A 3.25 Competitive Auction
We apply the aggregation auction schema to Υ3, our optimal
auction for three bidders, to achieve an auction with competitive
ratio 3.25. This improves on the previously best known auction
which is 3.39-competitive [7].
THEOREM 8. The aggregation auction for Υ3 has competitive
ratio 3.25.
PROOF. By Theorem 7,

E[AggΥ3(b)] / F(2)(b) ≥ min_{k≥2} Σ_{i=0}^{k} Σ_{j=0}^{k−i} F(2)(i, j, k−i−j) C(k; i, j, k−i−j) / (β k 3^k).

For k = 2 and k = 3, E[AggΥ3(b)] = (2/3) k/β (normalizing the
optimal sale price to 1). As k increases,
E[AggΥ3(b)]/F(2) increases as well. Since we do not expect
to find a closed-form formula for the revenue, we lower bound
F(2)(B) by 3B(3). Using large deviation bounds, one can show
that this lower bound is greater than (2/3) k/β for large enough k, and
the remainder can be shown by explicit calculation.
Plugging in β = 13/6, the competitive ratio is 13/4. As k
increases, the competitive ratio approaches 13/6.
Note that the above bound on the competitive ratio of AggΥ3
is tight. To see this, consider the bid vector with two very large
and non-identical bids of h and h + ε, with the remaining bids 1.
Given that the competitive ratio of Υ3 is tight on this example,
the ratio of F(2) to the expected revenue of this auction on this
input will be exactly 13/4.
5.2 A Gs,t-based Aggregation Auction
In this section we show that Υ3 is not the optimal auction to
use in an aggregation auction. One can do better by choosing the
auction that is optimally competitive against a specially tailored
benchmark.
To see why this might be the case, notice (Table 1) that the
fraction of F(2)(b) raised when there are k = 2 or k = 3
winning bidders in F(2)(b) is substantially smaller than the fraction of
F(2)(b) raised when there are more winners. This occurs because
the expected ratio between F(2)(B) and F(2)(b) is lower in this
case while the competitive ratio of Υ3 is constant. If we chose a
three-bidder auction that performed better when F(2) has smaller
numbers of winners, our aggregation auction would perform better
in the worst case.
One approach is to compete against a different benchmark that
puts more weight than F(2)
on solutions with a small number of
winners. Recall that F(2)
is the instance of Gs,t with s = 2 and
t = 3. By using the auction that competes optimally against Gs,t
with s > 2, while holding t = 3, we will raise a higher
fraction of revenue on smaller numbers of winning bidders and a lower
fraction of revenue on large numbers of winning bidders. We can
numerically optimize the values of s and t in Gs,t(b) in order to
achieve the best competitive ratio for the aggregation auction. In
fact, this will allow us to improve our competitive ratio slightly.
THEOREM 9. For an optimal choice of s and t, the aggregation
auction for Υ3^{s,t} is 3.243-competitive.

The proof follows the outline of Theorems 7 and 8 with the
optimal choice of s = 2.162 (while t is held constant at 3).
5.3 Further Reducing the Competitive Ratio
There are a number of ways we might attempt to use this
aggregation auction schema to continue to push the competitive ratio
down. In this section, we give a brief discussion of several attempts.
5.3.1 AggΥm
for m > 3
If the aggregation auction for Υ2 has a competitive ratio of 4
and the aggregation auction for Υ3 has a competitive ratio of 3.25,
can we improve the competitive ratio by aggregating Υ4 or Υm
for larger m? We conjecture in the negative: for m > 3, the
aggregation auction for Υm has a larger competitive ratio than the
aggregation auction for Υ3. The primary difficulty in proving this
k m = 2 m = 3 m = 4 m = 5 m = 6 m = 7
2 0.25 0.3077 0.3349 0.3508 0.3612 0.3686
3 0.25 0.3077 0.3349 0.3508 0.3612 0.3686
4 0.3125 0.3248 0.3349 0.3438 0.3512 0.3573
5 0.3125 0.3191 0.3244 0.3311 0.3378 0.3439
6 0.3438 0.321 0.3057 0.3056 0.311 0.318
7 0.3438 0.333 0.3081 0.3009 0.3025 0.3074
8 0.3633 0.3229 0.3109 0.3022 0.3002 0.3024
9 0.3633 0.3233 0.3057 0.2977 0.2927 0.292
10 0.377 0.3328 0.308 0.2952 0.2866 0.2837
11 0.377 0.3319 0.3128 0.298 0.2865 0.2813
12 0.3872 0.3358 0.3105 0.3001 0.2894 0.2827
13 0.3872 0.3395 0.3092 0.2976 0.2905 0.2841
14 0.3953 0.3391 0.312 0.2961 0.2888 0.2835
15 0.3953 0.3427 0.3135 0.2973 0.2882 0.2825
16 0.4018 0.3433 0.3128 0.298 0.2884 0.2823
17 0.4018 0.3428 0.3129 0.2967 0.2878 0.282
18 0.4073 0.3461 0.3133 0.2959 0.2859 0.2808
19 0.4073 0.3477 0.3137 0.2962 0.2844 0.2789
20 0.4119 0.3486 0.3148 0.2973 0.2843 0.2777
21 0.4119 0.3506 0.3171 0.298 0.2851 0.2775
22 0.4159 0.3519 0.3189 0.2986 0.2863 0.2781
23 0.4159 0.3531 0.3202 0.2995 0.2872 0.2791
24 0.4194 0.3539 0.3209 0.3003 0.2878 0.2797
25 0.4194 0.3548 0.3218 0.3012 0.2886 0.2801
Table 1: E[A(b)]/F(2)(b) for AggΥm as a function of k, the
optimal number of winners in F(2)(b). The lowest value in
each column is printed in bold.
conjecture lies in the difficulty of finding a closed-form solution
for the formula of Theorem 7. We can, however, evaluate this
formula numerically for different values of m and k, assuming that the
competitive ratio for Υm matches the lower bound for m given by
Theorem 2. Table 1 shows, for each value of m and k, the fraction
of F(2) raised by AggΥm when there
are k winning bidders, assuming the lower bound of Theorem 2 is
tight.
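A Python sketch of this numerical evaluation; it reproduces the entries
of Table 1 under the assumption that the competitive ratio β of Υm
matches Theorem 2 (e.g., β = 13/6 for m = 3).

```python
from itertools import product
from math import factorial

def table_entry(m, k, beta):
    # The bound of Theorem 7 for one (m, k) cell, enumerating every
    # configuration of k balls in m bins (feasible for small m and k).
    def f2(cfg):
        c = sorted(cfg, reverse=True)
        return max(j * c[j - 1] for j in range(2, len(c) + 1))
    def multinomial(cfg):
        r = factorial(sum(cfg))
        for c in cfg:
            r //= factorial(c)
        return r
    total = sum(f2(cfg) * multinomial(cfg)
                for cfg in product(range(k + 1), repeat=m) if sum(cfg) == k)
    return total / (beta * k * m ** k)

# table_entry(3, 2, 13/6) ~ 0.3077, matching Table 1.
```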
5.3.2 Convex combinations of AggΥm
As can be seen in Table 1, when m > 3, the worst-case value
of k is no longer 2 and 3, but instead an increasing function of
m. An aggregation auction for Υm outperforms the aggregation
auction for Υ3 when there are two or three winning bidders, while
the aggregation auction for Υ3 outperforms the other when there
are at least six winning bidders. Thus, for instance, an auction
which randomizes between aggregation auctions for Υ3 and Υ4
will have a worst-case which is better than that of either auction
alone. Larger combinations of auctions will allow more room to
optimize the worst-case. However, we suspect that no convex
combination of aggregation auctions will have a competitive ratio lower
than 3. Furthermore, note that we cannot yet claim the existence of
a good auction via this technique as the optimal auction Υn for
n > 3 is not known and it is only conjectured that the bound given
by Theorem 2 and represented in Table 1 is correct for Υn.
6. A LOWER BOUND FOR CONSERVATIVE
AUCTIONS
In this section, we define a class of auctions that never offer a
sale price which is higher than any bid in the input and prove a
lower bound on the competitive ratio of these auctions. As this
lower bound is stronger than the lower bound of Theorem 2 for
n ≥ 3, it shows that the optimal auction must occasionally charge
a sale price higher than any bid in the input. Specifically, this result
partially explains the form of the optimal three bidder auction.
DEFINITION 8. We say an auction BIf is conservative if its
bid-independent function f satisfies f(b-i) ≤ max(b-i).
We can now state our lower bound for conservative auctions.
THEOREM 10. Let A be a conservative auction for n bidders.
Then the competitive ratio of A is at least (3n − 2)/n.
COROLLARY 1. The competitive ratio of any conservative
auction for an arbitrary number of bidders is at least three.
For a two-bidder auction, this restriction does not prevent
optimality. Υ2, the 1-unit Vickrey auction, is conservative. For larger
numbers of bidders, however, the restriction to conservative
auctions does affect the competitive ratio. For the three-bidder case,
Υ3 has competitive ratio 2.17, while the best conservative auction
is no better than 2.33-competitive.
The k-Vickrey auction and the Random Sampling Optimal Price
auction [9] are conservative auctions. The Random Sampling Profit
Extraction auction [5] and the CORE auction [7], on the other hand,
use the ProfitExtractR mechanism as a subroutine and thus
sometimes offer a sale price which is higher than the highest input bid
value.
In [8], the authors define a restricted auction as one in which,
for any input, the sale prices are drawn from the set of input bid
values. The class of conservative auctions can be viewed as a
generalization of the class of restricted auctions, and therefore our result
gives lower bounds on the performance of a broader class of
auctions.
We will prove Theorem 10 with the aid of the following lemma:
LEMMA 1. Let A be a conservative auction with competitive
ratio 1/r for n bidders. Let h ≫ n, let h0 = 1, and let hk = kh for
k ≥ 1. Then, for all k and H ≥ kh,

Pr[f(1, 1, . . . , 1, H) ≤ hk] ≥ (nr − 1)/(n − 1) + k (3nr − 2r − n)/(n − 1).
PROOF. The lemma is proved by strong induction on k. First,
some notation that will be convenient. For any particular k and
H we will be considering the bid vector b = (1, . . . , 1, hk, H)
and placing bounds on ρb-i(z). Since we can assume without loss
of generality that the auction is symmetric, we will notate b-1 as
b with any one of the 1-valued bids masked. Similarly we notate
b-hk (resp. b-H) as b with the hk-valued bid (resp. H-valued bid)
masked. We will also let p1(b), phk(b), and pH(b) represent the
expected payment of a 1-valued, hk-valued, and H-valued bidder
in A on b, respectively (note that by symmetry the expected payment
for all 1-valued bidders is the same).

Base case (k = 0, h0 = 1): A must raise revenue of at least rn
on b = (1, . . . , 1, 1, H):

rn ≤ pH(b) + (n − 1) p1(b) ≤ 1 + (n − 1) ∫_0^1 x ρb-1(x) dx ≤ 1 + (n − 1) ∫_0^1 ρb-1(x) dx.

The second inequality follows from the conservatism of the
underlying auction. The base case follows trivially from the final
inequality.
Inductive case (k > 0, hk = kh): Let b = (1, . . . , 1, hk, H). First, we will find an upper bound on pH(b):

pH(b) = ∫_0^1 x ρb-H(x) dx + Σ_{i=1}^{k} ∫_{h_{i−1}}^{h_i} x ρb-H(x) dx    (1)
      ≤ 1 + Σ_{i=1}^{k} h_i ∫_{h_{i−1}}^{h_i} ρb-H(x) dx
      ≤ 1 + ((3nr − 2r − n)/(n − 1)) Σ_{i=1}^{k−1} ih
          + kh (1 − (nr − 1)/(n − 1) − (k − 1)(3nr − 2r − n)/(n − 1))       (2)
      = kh ( n(1 − r)/(n − 1) − ((k − 1)/2)·(3nr − 2r − n)/(n − 1) ) + 1.   (3)
Equation (1) follows from the conservatism of A, and (2) follows from invoking the strong inductive hypothesis with H = kh, together with the observation that the maximum possible revenue is found by placing exactly enough probability at each multiple of h to satisfy the constraints of the inductive hypothesis and placing the remaining probability at kh. Next, we will find a lower bound on phk(b) by considering the revenue raised by the bids b. Recall that A must obtain a profit of at least rF(2)(b) = 2rkh. Given the upper bounds on the profit from the H-valued bid, equation (3), and from the 1-valued bids, the profit from the hk-valued bid must be at least:
phk(b) ≥ 2rkh − (n − 2) p1(b) − pH(b)
       ≥ kh ( 2r − n(1 − r)/(n − 1) + ((k − 1)/2)·(3nr − 2r − n)/(n − 1) ) − O(n).   (4)
In order to lower bound Pr[f(b-hk) ≤ kh], consider the auction that minimizes it while remaining consistent with the lower bounds obtained by the strong inductive hypothesis on Pr[f(b-hk) ≤ ih]. To minimize the constraints implied by the strong inductive hypothesis, we place the minimal amount of probability mass required at each price level. This gives ρhk(b) with probability (nr − 1)/(n − 1) at 1 and exactly (3nr − 2r − n)/(n − 1) at each hi for 1 ≤ i < k. Thus, the profit from offering prices of at most hk−1 is (nr − 1)/(n − 1) + (kh(k − 1)/2)·(3nr − 2r − n)/(n − 1). In order to satisfy our lower bound (4) on phk(b), the auction must place probability of at least (3nr − 2r − n)/(n − 1) on hk.
Therefore, the probability that the sale price will be no more than kh on the masked bid vector of b = (1, . . . , 1, kh, H) must be at least (nr − 1)/(n − 1) + k(3nr − 2r − n)/(n − 1), completing the induction.
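The simplification from (2) to (3) uses Σ_{i=1}^{k−1} ih = hk(k − 1)/2; the minus sign in the second term of (3), as reconstructed here, is what makes (3) consistent with the lower bound (4). A minimal symbolic check of this step, using sympy (symbol names are ours):

import sympy as sp

n, r, k, h, i = sp.symbols('n r k h i', positive=True)
X = (3*n*r - 2*r - n) / (n - 1)   # the recurring probability increment of Lemma 1

# Right-hand side of (2).
eq2 = (1 + X * sp.summation(i*h, (i, 1, k - 1))
         + k*h*(1 - (n*r - 1)/(n - 1) - (k - 1)*X))
# Right-hand side of (3), with the minus sign as reconstructed above.
eq3 = k*h*(n*(1 - r)/(n - 1) - sp.Rational(1, 2)*(k - 1)*X) + 1

print(sp.simplify(eq2 - eq3))     # prints 0: (2) and (3) agree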
Given Lemma 1, Theorem 10 is simple to prove.
PROOF. Let A be a conservative auction. Suppose ε = (3nr − 2r − n)/(n − 1) > 0. Let k = 1/ε, H ≥ kh, and h > n. By Lemma 1, Pr[f(1, . . . , 1, kh, H) ≤ hk] ≥ (nr − 1)/(n − 1) + kε > 1. But this is a contradiction, so (3nr − 2r − n)/(n − 1) ≤ 0. Thus, r ≤ n/(3n − 2). The theorem follows.
7. CONCLUSIONS AND FUTURE WORK
We have found the optimal auction for the three-unit limited-supply case, and shown that its structure is essentially independent
of the benchmark used in its competitive analysis. We have then
used this auction to derive the best known auction for the unlimited
supply case.
Our work leaves many interesting open questions. We found that
the lower bound of [8] is matched by an auction for three bidders,
even when competing against generalized benchmarks. The most
interesting open question from our work is whether the lower bound
from Theorem 2 can be matched by an auction for more than three
bidders. We conjecture that it can.
Second, we consider whether our techniques can be extended
to find optimal auctions for greater numbers of bidders. The use
of our analytic solution method requires knowledge of a restricted
class of auctions which is large enough to contain an optimal
auction but small enough that the optimal auction in this class can be
found explicitly through analytic methods. No class of auctions
which meets these criteria is known even for the four bidder case.
Also, when the number of bidders is greater than three, it might
be the case that the optimal auction is not expressible in terms of
elementary functions.
Another interesting set of open questions concerns aggregation
auctions. As we show, the aggregation auction for Υ3 outperforms the aggregation auction for Υ2, and it appears that the aggregation auction for Υ3 is better than Υm for m > 3. We leave verification of this conjecture for future work. We also show that Υ3 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it reduces the competitive ratio of the overall auction only slightly. It would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of AggΥm.
Finally, we remark that very little is known about the structure
of the optimal competitive auction. In our auction Υ3, the sales
price for a given bidder is restricted either to be one of the other bid
values or to be higher than all other bid values. The optimal
auction for two bidders, the 1-unit Vickrey auction, also falls within
this class of auctions, as its sales prices are restricted to bid values.
We conjecture that an optimal auction for any number of bidders
lies within this class. Our paper provides partial evidence for this
conjecture: the lower bound of Section 6 on conservative auctions
shows that the optimal auction must offer sales prices higher than
any bid value if the lower bound of Theorem 2 is tight, as is
conjectured. It remains to show that optimal auctions otherwise only
offer sales prices at bid values.
8. ACKNOWLEDGEMENTS
The authors wish to thank Yoav Shoham and Noga Alon for
helpful discussions.
9. REFERENCES
[1] A. Archer and E. Tardos. Truthful mechanisms for
one-parameter agents. In Proc. of the 42nd IEEE Symposium
on Foundations of Computer Science, 2001.
[2] S. Baliga and R. Vohra. Market research and market design.
Advances in Theoretical Economics, 3, 2003.
[3] A. Borodin and R. El-Yaniv. Online Computation and
Competitive Analysis. Cambridge University Press, 1998.
[4] J. Bulow and J. Roberts. The Simple Economics of Optimal
Auctions. The Journal of Political Economy, 97:1060-90,
1989.
[5] A. Fiat, A. V. Goldberg, J. D. Hartline, and A. R. Karlin.
Competitive generalized auctions. In Proc. 34th ACM
Symposium on the Theory of Computing, pages 72-81.
ACM, 2002.
[6] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. Wright.
Competitive auctions and digital goods. Games and
Economic Behavior, 2002. Submitted for publication. An
earlier version available as InterTrust Technical Report at
URL http://www.star-lab.com/tr/tr-99-01.html.
[7] A. V. Goldberg and J. D. Hartline. Competitiveness via
consensus. In Proc. 14th Symposium on Discrete Algorithms,
pages 215-222. ACM/SIAM, 2003.
[8] A. V. Goldberg, J. D. Hartline, A. R. Karlin, and M. E. Saks.
A lower bound on the competitive ratio of truthful auctions.
In Proc. 21st Symposium on Theoretical Aspects of
Computer Science, pages 644-655. Springer, 2004.
[9] A. V. Goldberg, J. D. Hartline, and A. Wright. Competitive
auctions and digital goods. In Proc. 12th Symposium on
Discrete Algorithms, pages 735-744. ACM/SIAM, 2001.
[10] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth
Revelation in Approximately Efficient Combinatorial
Auctions. In Proc. of 1st ACM Conf. on E-Commerce, pages
96-102. ACM Press, New York, 1999.
[11] H. Moulin and S. Shenker. Strategyproof Sharing of
Submodular Costs: Budget Balance Versus Efficiency.
Economic Theory, 18:511-533, 2001.
[12] R. Myerson. Optimal Auction Design. Mathematics of
Operations Research, 6:58-73, 1981.
[13] N. Nisan and A. Ronen. Algorithmic Mechanism Design. In
Proc. of 31st Symp. on Theory of Computing, pages
129-140. ACM Press, New York, 1999.
[14] I. Segal. Optimal pricing mechanisms with unknown
demand. American Economic Review, 93:509-29, 2003.
[15] W. Vickrey. Counterspeculation, Auctions, and Competitive
Sealed Tenders. J. of Finance, 16:8-37, 1961.
APPENDIX
A. PROOF OF THEOREM 4
We wish to prove that Υ3^{s,t}, the optimal auction for three bidders against Gs,t, has competitive ratio at least (s² + t²)/(2t). Our proof follows the outline of the proof of Lemma 5 and Theorem 1 from [8]; however, our case is simpler because we are only looking for a bound when n = 3. Define the random bid vector B = (B1, B2, B3) with Pr[Bi > z] = 1/z. We compute EB[Gs,t(B)] by integrating Pr[Gs,t(B) > z]. Then we use the fact that no auction can have expected profit greater than 3 on B to find a lower bound on the competitive ratio against Gs,t for any auction.
For the input distribution B defined above, let B(i) be the ith
largest bid. Define the disjoint events H2 = {B(2) ≥ z/s ∧ B(3) < z/t} and H3 = {B(3) ≥ z/t}. Intuitively, H3 corresponds to the event that all three bidders win in Gs,t, while H2 corresponds to the event that only the top two bidders win. Gs,t(B) will be greater than z if either event occurs:

Pr[Gs,t(B) > z] = Pr[H2] + Pr[H3]                 (5)
                = 3 (s/z)² (1 − t/z) + (t/z)³      (6)
Using the identity, defined for non-negative continuous random variables, that E[X] = ∫_0^∞ Pr[X > x] dx, we have

EB[Gs,t(B)] = t + ∫_t^∞ ( 3 (s/z)² (1 − t/z) + (t/z)³ ) dz    (7)
            = 3 (s² + t²)/(2t)                                 (8)
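The evaluation of (7) is routine calculus; a minimal symbolic check using sympy (symbol names are ours):

import sympy as sp

s, t, z = sp.symbols('s t z', positive=True)
tail = 3*(s/z)**2*(1 - t/z) + (t/z)**3     # Pr[G_{s,t}(B) > z] for z > t, from (6)

expectation = t + sp.integrate(tail, (z, t, sp.oo))        # equation (7)
print(sp.simplify(expectation - 3*(s**2 + t**2)/(2*t)))    # prints 0: matches (8)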
Given that, for any auction A, EB[EA[A(B)]] ≤ 3 [8], it is clear that EB[Gs,t(B)] / EB[EA[A(B)]] ≥ (s² + t²)/(2t). Therefore, for each auction A there exists some input b such that Gs,t(b) / EA[A(b)] ≥ (s² + t²)/(2t).
| unlimited supply;ratio;benchmark;bound;distribution;competitive analysis;auction;preference;aggregation auction;mechanism design
train_J-56 | Robust Solutions for Combinatorial Auctions | Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the Bid-taker's Exposure Problem. When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bid-taker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner. | 1. INTRODUCTION
A combinatorial auction (CA) [5] provides an efficient means of
allocating multiple distinguishable items amongst bidders whose
perceived valuations for combinations of items differ. Such
auctions are gaining in popularity and there is a proliferation in their
usage across various industries such as telecoms, B2B procurement
and transportation [11, 19].
Revenue is the most obvious optimization criterion for such
auctions, but another desirable attribute is solution robustness. In terms
of combinatorial auctions, a robust solution is one that can
withstand bid withdrawal (a break) by making changes easily to form
a repair solution of adequate revenue. A brittle solution to a CA
is one in which an unacceptable loss in revenue is unavoidable if
a winning bid is withdrawn. In such situations the bid-taker may
be left with a set of items deemed to be of low value by all other
bidders. These bidders may associate a higher value for these items
if they were combined with items already awarded to others, hence
the bid-taker is left in an undesirable local optimum in which a form
of backtracking is required to reallocate the items in a manner that
results in sufficient revenue. We have called this the Bid-taker's Exposure Problem; it bears similarities to the Exposure
Problem faced by bidders seeking multiple items in separate single-unit
auctions but holding little or no value for a subset of those items.
However, reallocating items may be regarded as disruptive to a
solution in many real-life scenarios. Consider a scenario where
procurement for a business is conducted using a CA. It would be
highly undesirable to retract contracts from a group of suppliers
because of the failure of a third party. A robust solution that is
tolerant of such breaks is preferable. Robustness may be regarded as a
preventative measure protecting against future uncertainty by
sacrificing revenue in place of solution stability and reparability. We
assume a probabilistic approach whereby the bid-taker has
knowledge of the reliability of bidders from which the likelihood of an
incomplete transaction may be inferred.
Repair solutions are required for bids that are seen as brittle
(i.e. likely to break). Repairs may also be required for sets of
bids deemed brittle. We propose the use of the Weighted Super
Solutions (WSS) framework [13] for constraint programming, that
is ideal for establishing such robust solutions. As we shall see,
this framework can enforce constraints on solutions so that
possible breakages are reparable.
This paper is organized as follows. Section 2 presents the
Winner Determination Problem (WDP) for combinatorial auctions,
outlines some possible reasons for bid withdrawal and shows how
simply maximizing expected revenue can lead to intolerable revenue
losses for risk-averse bid-takers. This motivates the use of robust
solutions and Section 3 introduces a constraint programming (CP)
framework, Weighted Super Solutions [13], that finds such
solutions. We then propose an auction model in Section 4 that enhances
reparability by introducing mandatory mutual bid bonds, that may
be seen as a form of leveled commitment contract [26, 27].
Section 5 presents an extensive empirical evaluation of the approach
presented in this paper, in the context of a number of well-known
combinatorial auction distributions, with very encouraging results.
Section 6 discusses possible extensions and questions raised by our
research that deserve future work. Finally, in Section 7 a number
of concluding remarks are made.
2. COMBINATORIAL AUCTIONS
Before presenting the technical details of our solution to the
Bid-taker's Exposure Problem, we shall present a brief survey
of combinatorial auctions and existing techniques for handling bid
withdrawal.
Combinatorial auctions involve a single bid-taker allocating
multiple distinguishable items amongst a group of bidders. The
bidtaker has a set of m items for sale, M = {1, 2, . . . , m}, and
bidders submit a set of bids, B = {B1, B2, . . . , Bn}. A bid is a tuple
Bj = ⟨Sj, pj⟩, where Sj ⊆ M is a subset of the items for sale and
pj ≥ 0 is a price. The WDP for a CA is to label all bids as either
winning or losing so as to maximize the revenue from winning bids
without allocating any item to more than one bid. The following is
the integer programming formulation for the WDP:
max Σ_{j=1}^{n} pj xj   s.t.   Σ_{j|i∈Sj} xj ≤ 1, ∀i ∈ {1 . . . m},   xj ∈ {0, 1}.
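To make the formulation concrete, the following is a minimal illustrative sketch (ours, not one of the cited solvers) that solves small WDP instances exactly by depth-first search over the bids:

def solve_wdp(bids):
    """Exact winner determination by depth-first search over bids.

    bids: list of (frozenset_of_items, price) pairs. Free disposal is
    assumed: unallocated items are simply left unsold. Exponential time,
    so only suitable for small illustrative instances.
    """
    best = [0, []]

    def search(j, used, revenue, winners):
        if revenue > best[0]:
            best[0], best[1] = revenue, list(winners)
        if j == len(bids):
            return
        items, price = bids[j]
        if not (items & used):                      # branch: bid j wins
            winners.append(j)
            search(j + 1, used | items, revenue + price, winners)
            winners.pop()
        search(j + 1, used, revenue, winners)       # branch: bid j loses

    search(0, frozenset(), 0, [])
    return tuple(best)

# Example: two single-item bids and a bid on the pair (cf. Table 1 below).
bids = [(frozenset({'A'}), 100), (frozenset({'B'}), 100),
        (frozenset({'A', 'B'}), 190)]
print(solve_wdp(bids))    # (200, [0, 1])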
This problem is NP-complete [23] and inapproximable [25],
and is otherwise known as the Set Packing Problem. The above
problem formulation assumes the notion of free disposal. This
means that the optimal solution need not necessarily sell all of the
items. If the auction rules stipulate that all items must be sold, the
problem becomes a Set Partition Problem [5]. The WDP has been
extensively studied in recent years. The fastest search algorithms
that find optimal solutions (e.g. CABOB [25]) can, in practice,
solve very large problems involving thousands of bids very quickly.
2.1 The Problem of Bid Withdrawal
We assume an auction protocol with a three stage process
involving the submission of bids, winner determination, and finally
a transaction phase. We are interested in bid withdrawals that
occur between the announcement of winning bids and the end of the
transaction phase. All bids are valid until the transaction is
complete, so we anticipate an expedient transaction process.¹
¹ In some instances the transaction period may be so lengthy that consideration of non-winning bids as still being valid may not be fair. Breaks that occur during a lengthy transaction phase are more difficult to remedy and may require a subsequent auction. For example, if the item is a service contract for a given period of time and the break occurs after partial fulfilment of this contract, the other bidders' valuations for the item may have decreased in a non-linear fashion.
An example of a winning bid withdrawal occurred in an FCC
spectrum auction [32]. Withdrawals, or breaks, may occur for
various reasons. Bid withdrawal may be instigated by the bid-taker
when Quality of Service agreements are broken or payment
deadlines are not met. We refer to bid withdrawal by the bid-taker as
item withdrawal in this paper to distinguish between the actions
of a bidder and the bid-taker. Harstad and Rothkopf [8] outlined
several possibilities for breaks in single item auctions that include:
1. an erroneous initial valuation/bid;
2. unexpected events outside the winning bidder's control;
3. a desire to have the second-best bid honored;
4. information obtained or events that occurred after the auction
but before the transaction that reduces the value of an item;
5. the revelation of competing bidders' valuations implies reduced profitability, a problem known as the Winner's Curse.
Kastner et al. [15] examined how to handle perturbations given
a solution whilst minimizing necessary changes to that solution.
These perturbations may include bid withdrawals, change of
valuation/items of a bid or the submission of a new bid. They looked at
the problem of finding incremental solutions to restructure a
supply chain whose formation is determined using combinatorial
auctions [30]. Following a perturbation in the optimal solution they
proceed to impose involuntary item withdrawals from winning
bidders. They formulated an incremental integer linear program (ILP)
that sought to maximize the valuation of the repair solution whilst
preserving the previous solution as much as possible.
2.2 Being Proactive against Bid Withdrawal
When a bid is withdrawn there may be constraints on how the
solution can be repaired. If the bid-taker was freely able to revoke
the awarding of items to other bidders then the solution could be
repaired easily by reassigning all the items to the optimal solution
without the withdrawn bid. Alternatively, the bidder who reneged
upon a bid may have all his other bids disqualified and the items
could be reassigned based on the optimum solution without that
bidder present. However, the bid-taker is often unable to freely
reassign the items already awarded to other bidders. When items
cannot be withdrawn from winning bidders, following the failure of
another bidder to honor his bid, repair solutions are restricted to the
set of bids whose items only include those in the bid(s) that were
reneged upon. We are free to award items to any of the previously
unsuccessful bids when finding a repair solution.
When faced with uncertainty over the reliability of bidders a
possible approach is to maximize expected revenue. This approach
does not make allowances for risk-averse bid-takers who may view
a small possibility of very low revenue as unacceptable.
Consider the example in Table 1, and the optimal expected
revenue in the situation where a single bid may be withdrawn. There
are three submitted bids for items A and B, the third being a
combination bid for the pair of items at a value of 190. The optimal
solution has a value of 200, with the first and second bids as
winners. When we consider the probabilities of failure, in the fourth
column, the problem of which solution to choose becomes more
difficult.
Computing the expected revenue for the solution with the first
and second bids winning the items, denoted ⟨1, 1, 0⟩, gives:
(200 × 0.9 × 0.9) + (2 × 100 × 0.9 × 0.1) + (190 × 0.1 × 0.1) = 181.90.
Table 1: Example Combinatorial Auction.
            Items
Bid     A      B      AB     Withdrawal prob.
x1      100    0      0      0.1
x2      0      100    0      0.1
x3      0      0      190    0.1
If a single bid is withdrawn there is a probability of 0.18 of a revenue of 100, given the fact that we cannot withdraw an item from the other winning bidder. The expected revenue for ⟨0, 0, 1⟩ is:
(190 × 0.9) + (200 × 0.1) = 191.00.
We can therefore surmise that the second solution is preferable to
the first based on expected revenue.
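The two expectations above can be reproduced directly; a sketch of the calculation in the text (assuming, as there, at most one withdrawal and no item withdrawal from the remaining winner):

p = 0.1   # withdrawal probability of each bid in Table 1

# Solution <1,1,0>: bids x1 and x2 win. If exactly one is withdrawn the
# other keeps its item (revenue 100); if both are withdrawn, x3 repairs.
ev_12 = 200*(1 - p)**2 + 2*100*(1 - p)*p + 190*p**2

# Solution <0,0,1>: bid x3 wins; if withdrawn, x1 and x2 form the repair.
ev_3 = 190*(1 - p) + 200*p

print(round(ev_12, 2), round(ev_3, 2))   # 181.9 191.0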
Determining the maximum expected revenue in the presence of
such uncertainty becomes computationally infeasible however, as
the number of brittle bids grows. A WDP needs to be solved for all
possible combinations of bids that may fail. The possible loss in
revenue for breaks is also not tightly bounded using this approach,
therefore a large loss may be possible for a small number of breaks.
Consider the previous example where the bid amount for x3
becomes 175. The expected revenue of ⟨1, 1, 0⟩ (181.75) becomes greater than that of ⟨0, 0, 1⟩ (177.50). There are some bid-takers
who may prefer the latter solution because the revenue is never less
than 175, but the former solution returns revenue of only 100 with
probability 0.18. A risk-averse bid-taker may not tolerate such a
possibility, preferring to sacrifice revenue for reduced risk.
If we modify our repair search so that a solution of at least a
given revenue is guaranteed, the search for a repair solution
becomes a satisfiability test rather than an optimization problem. The
approaches described above are in contrast to that which we
propose in the next section. Our approach can be seen as preventative
in that we find an initial allocation of items to bidders which is
robust to bid withdrawal. Possible losses in revenue are bounded by
a fixed percentage of the true optimal allocation. Perturbations to
the original solution are also limited so as to minimize disruption.
We regard this as the ideal approach for real-world combinatorial
auctions.
DEFINITION 1 (ROBUST SOLUTION FOR A CA). A robust
solution for a combinatorial auction is one where any subset of
successful bids whose probability of withdrawal is greater than or
equal to α can be repaired by reassigning items at a cost of at most
β to other previously losing bids to form a repair solution.
Constraints on acceptable revenue, e.g. being a minimum
percentage of the optimum, are defined in the problem model and are
thus satisfied by all solutions. The maximum cost of repair, β, may
be a fixed value that may be thought of as a fund for
compensating winning bidders whose items are withdrawn from them when
creating a repair solution. Alternatively, β may be a function of
the bids that were withdrawn. Section 4 will give an example of
such a mechanism. In the following section we describe an ideal
constraint-based framework for the establishment of such robust
solutions.
3. FINDING ROBUST SOLUTIONS
In constraint programming [4] (CP), a constraint satisfaction
problem (CSP) is modeled as a set of n variables X = {x1, . . . , xn},
a set of domains D = {D(x1), . . . , D(xn)}, where D(xi) is
the set of finite possible values for variable xi and a set C =
{C1, . . . , Cm} of constraints, each restricting the assignments of
some subset of the variables in X. Constraint satisfaction involves
finding values for each of the problem variables such that all
constraints are satisfied. Its main advantages are its declarative nature
and flexibility in tackling problems with arbitrary side constraints.
Constraint optimization seeks to find a solution to a CSP that
optimizes some objective function. A common technique for solving
constraint optimization problems is to use branch-and-bound
techniques that avoid exploring sub-trees that are known not to contain
a better solution than the best found so far. An initial bound can be
determined by finding a solution that satisfies all constraints in C
or by using some heuristic methods.
A classical super solution (SS) is a solution to a CSP in which,
if a small number of variables lose their values, repair solutions
are guaranteed with only a few changes, thus providing solution
robustness [9, 10]. It is a generalization of both fault tolerance in
CP [31] and supermodels in propositional satisfiability (SAT) [7].
An (a,b)-super solution is one in which if at most a variables lose
their values, a repair solution can be found by changing at most b
other variables [10].
Super solutions for combinatorial auctions minimize the number
of bids whose status needs to be changed when forming a repair
solution [12]. Only a particular set of variables in the solution may
be subject to change and these are said to be members of the
break-set. For each combination of brittle assignments in the break-set, a
repair-set is required that comprises the set of variables whose
values must change to provide another solution. The cardinality of the
repair set is used to measure the cost of repair. In reality,
changing some variable assignments in a repair solution incurs a lower
cost than others thereby motivating the use of a different metric for
determining the legality of repair sets.
The Weighted Super Solution (WSS) framework [13] considers
the cost of repair required, rather than simply the number of
assignments modified, to form an alternative solution. For CAs this
may be a measure of the compensation penalties paid to winning
bidders to break existing agreements. Robust solutions are
particularly desirable for applications where unreliability is a problem
and potential breakages may incur severe penalties. Weighted
super solutions offer a means of expressing which variables are
easily re-assigned and those that incur a heavy cost [13]. Hebrard et
al. [9] describe how some variables may fail (such as machines in a
job-shop problem) and others may not. A WSS generalizes this
approach so that there is a probability of failure associated with each
assignment and sets of variables whose assignments have
probabilities of failure greater than or equal to a threshold value, α, require
repair solutions.
A WSS measures the cost of repairing, or reassigning, other
variables using inertia as a metric. Inertia is a measure of a variable's
aversion to change and depends on its current assignment, future
assignment and the breakage variable(s).
It may be desirable to reassign items to different bidders in order
to find a repair solution of satisfactory revenue. Compensation may
have to be paid to bidders who lose items during the formation of a
repair solution. The inertia of a bid reflects the cost of changing its
state. For winning bids this may reflect the necessary compensation
penalty for the bid-taker to break the agreement (if such breaches
are permitted), whereas for previously losing bids this is a free
operation. The total amount of compensation payable to bidders may
depend upon other factors, such as the cause of the break. There is
a limit to how much these overall repair costs should be, and this is
given by the value β. This value may not be known in advance and
185
Algorithm 1: WSS(int level, double α, double β): Boolean
begin
    if level > number of variables then return true
    choose unassigned variable x
    foreach value v in the domain of x do
        assign x : v
        if problem is consistent then
            foreach combination of brittle assignments, A do
                if ¬reparable(A, β) then return false
            if WSS(level + 1) then return true
        unassign x
    return false
end
may depend upon the break. Therefore, β may be viewed as the
fund used to compensate winning bidders for the unilateral
withdrawal of their bids by the bid-taker. In summary, an (α,β)-WSS
allows any set of variables whose probability of breaking is greater
than or equal to α be repaired with changes to the original robust
solution with a cost of at most β.
The depth-first search for a WSS (see pseudo-code description
in Algorithm 1) maintains arc-consistency [24] at each node of the
tree. As search progresses, the reparability of each previous
assignment is verified at each node by extending a partial repair
solution to the same depth as the current partial solution. This may
be thought of as maintaining concurrent search trees for repairs.
A repair solution is provided for every possible set of break
variables, A. The WSS algorithm attempts to extend the current partial
assignment by choosing a variable and assigning it a value.
Backtracking may then occur for one of two reasons: we cannot extend
the assignment to satisfy the given constraints, or the current
partial assignment cannot be associated with a repair solution whose
cost of repair is less than β should a break occur. The procedure
reparable searches for partial repair solutions using
backtracking and attempts to extend the last repair found, just as in
(1,b)-super solutions [9]; the differences being that a repair is provided
for a set of breakage variables rather than a single variable and the
cost of repair is considered. A summation operator is used to
determine the overall cost of repair. If a fixed bound upon the size of
any potential break-set can be formed, the WSS algorithm is
NP-complete. For a more detailed description of the WSS search
algorithm, the reader is referred to [13], since a complete description of
the algorithm is beyond the scope of this paper.
EXAMPLE 1. We shall step through the example given in
Table 1 when searching for a WSS. Each bid is represented by a
single variable with domain values of 0 and 1, the former representing
bid-failure and the latter bid-success. The probability of failure of
the variables are 0.1 when they are assigned to 1 and 0.0
otherwise. The problem is initially solved using an ILP solver such as
lp_solve [3] or CPLEX, and the optimal revenue is found to be
200. A fixed percentage of this revenue can be used as a threshold
value for a robust solution and its repairs. The bid-taker wishes to
have a robust solution so that if a single winning bid is withdrawn,
a repair solution can be formed without withdrawing items from
any other winning bidder. This example may be seen as searching
for a (0.1,0)-weighted super solution; β is 0 because no funds are
available to compensate the withdrawal of items from winning
bidders. The bid-taker is willing to compromise on revenue, but only
by 5%, say, of the optimal value.
Bids 1 and 3 cannot both succeed, since they both require item
A, so a constraint is added precluding the assignment in which both
variables take the value 1. Similarly, bids 2 and 3 cannot both win
so another constraint is added between these two variables.
Therefore, in this example the set of CSP variables is V = {x1, x2, x3},
whose domains are all {0, 1}. The constraints are x1 + x3 ≤ 1,
x2 + x3 ≤ 1 and Σ_{xi∈V} ai xi ≥ 190, where ai reflects the
relevant bid-amounts for the respective bid variables. In order to find a
robust solution of optimal revenue we seek to maximize the sum of
these amounts, max Σ_{xi∈V} ai xi.
When all variables are set to 0 (see Figure 1(a) branch 3), this is
not a solution because the minimum revenue of 190 has not been
met, so we try assigning bid3 to 1 (branch 4). This is a valid
solution but this variable is brittle because there is a 10% chance that
this bid may be withdrawn (see Table 1). Therefore we need to
determine if a repair can be formed should it break. The search for a
repair begins at the first node, see Figure 1(b). Notice that value 1
has been removed from bid3 because this search tree is simulating
the withdrawal of this bid. When bid1 is set to 0 (branch 4.1), the
maximum revenue solution in the remaining subtree has revenue of
only 100, therefore search is discontinued at that node of the tree.
Bid1 and bid2 are both assigned to 1 (branches 4.2 and 4.4) and
the total cost of both these changes is still 0 because no
compensation needs to be paid for bids that change from losing to winning.
With bid3 now losing (branch 4.5), this gives a repair solution of
200. Hence ⟨0, 0, 1⟩ is reparable and therefore a WSS. We continue
our search in Figure 1(a) however, because we are seeking a robust
solution of optimal revenue.
When bid1 is assigned to 1 (branch 6) we seek a partial repair
for this variable breaking (branch 5 is not considered since it offers
insufficient revenue). The repair search sets bid1 to 0 in a separate
search tree, (not shown), and control is returned to the search for
a WSS. Bid2 is set to 0 (branch 7), but this solution would not
produce sufficient revenue so bid2 is then set to 1 (branch 8). We
then attempt to extend the repair for bid1 (not shown). This fails
because the repair for bid1 cannot assign bid2 to 0 because the
cost of repairing such an assignment would be ∞, given that the
auction rules do not permit the withdrawal of items from winning
bids. A repair for bid1 breaking is therefore not possible because
items have already been awarded to bid2. A repair solution with
bid2 assigned to 1 does not produce sufficient revenue when bid1
is assigned to 0. The inability to withdraw items from winning bids
implies that ⟨1, 1, 0⟩ is an irreparable solution when the minimum
tolerable revenue is greater than 100. The italicized comments and
dashed line in Figure 1(a) illustrate the search path for a WSS if
both of these bids were deemed reparable.
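The reasoning in Example 1 can be reproduced by brute force over the eight assignments; a minimal sketch (ours) that classifies each acceptable solution as robust or brittle under the same rules (every winning bid is brittle, β = 0, repairs may not disturb other winners):

from itertools import product

bids = [({'A'}, 100), ({'B'}, 100), ({'A', 'B'}, 190)]
MIN_REVENUE = 190    # within 5% of the optimal revenue of 200

def feasible(x):
    won = [items for (items, _), xi in zip(bids, x) if xi]
    return sum(len(s) for s in won) == len(set().union(set(), *won))

def revenue(x):
    return sum(p for (_, p), xi in zip(bids, x) if xi)

def is_robust(x):
    """Every winner must admit a repair that keeps all other winners
    winning (beta = 0: no item withdrawal permitted)."""
    for b, xb in enumerate(x):
        if not xb:
            continue
        repairs = [r for r in product((0, 1), repeat=len(bids))
                   if r[b] == 0 and feasible(r) and revenue(r) >= MIN_REVENUE
                   and all(r[j] for j, xj in enumerate(x) if xj and j != b)]
        if not repairs:
            return False
    return True

for x in product((0, 1), repeat=len(bids)):
    if feasible(x) and revenue(x) >= MIN_REVENUE:
        print(x, revenue(x), 'robust' if is_robust(x) else 'brittle')
# (0, 0, 1) 190 robust
# (1, 1, 0) 200 brittle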
Section 4 introduces an alternative auction model that will allow
the bid-taker to receive compensation for breakages and in turn use
this payment to compensate other bidders for withdrawal of items
from winning bids. This will enable the reallocation of items and
permit the establishment of ⟨1, 1, 0⟩ as a second WSS for this
example.
4. MUTUAL BID BONDS: A BACKTRACKING MECHANISM
Some auction solutions are inherently brittle and it may be
impossible to find a robust solution. If we can alter the rules of an
auction so that the bid-taker can retract items from winning
bidders, then the reparability of solutions to such auctions may be
improved. In this section we propose an auction model that permits
bid and item withdrawal by the bidders and bid-taker, respectively.
We propose a model that incorporates mutual bid bonds to enable
solution reparability for the bid-taker, a form of insurance against
[Figure 1: Search Tree for a WSS without item withdrawal. (a) Search for WSS; (b) search for a repair for bid 3 breakage. The trees show bid variables assigned 0 or 1 at each branch, subtree revenues, and annotations such as "insufficient revenue" and the repair searches launched for each breakage.]
the winner's curse for the bidder whilst also compensating bidders in the case of item withdrawal from winning bids. We propose that such Winner's Curse & Bid-taker's Exposure insurance comprise a fixed percentage, κ, of the bid amount for all bids. Such mutual bid bonds are mandatory for each bid in our model.² The conditions
attached to the bid bonds are that the bid-taker be allowed to annul
winning bids (item withdrawal) when repairing breaks elsewhere
in the solution. In the interests of fairness, compensation is paid
to bidders from whom items are withdrawn and is equivalent to the
penalty that would have been imposed on the bidder should he have
withdrawn the bid.
Combinatorial auctions impose a heavy computational burden
on the bidder, so it is important that the hedging of risk be
a simple and transparent operation for the bidder so as not to
further increase this burden unnecessarily. We also contend that it
is imperative that the bidder knows the potential penalty for
withdrawal in advance of bid submission. This information is essential
for bidders when determining how aggressive they should be in
their bidding strategy. Bid bonds are commonplace in procurement
for construction projects. Usually they are mandatory for all bids,
are a fixed percentage, κ, of the bid amount and are unidirectional
in that item withdrawal by the bid-taker is not permitted. Mutual
bid bonds may be seen as a form of leveled commitment contract
in which both parties may break the contract for the same fixed
penalty. Such contracts permit unilateral decommitment for
prespecified penalties. Sandholm et al. showed that this can increase
the expected payoffs of all parties and enables deals that would be
impossible under full commitment [26, 28, 29].
In practice a bid bond typically ranges between 5 and 20% of the
² Making the insurance optional may be beneficial in some instances. If a bidder does not agree to the insurance, it may be inferred that he has accurately determined the valuation for the items and is therefore less likely to fall victim to the winner's curse.
The probability of such a bid being withdrawn may be less, so a
repair solution may be deemed unnecessary for this bid. On the other
hand it decreases the reparability of solutions.
bid amount [14, 18]. If the decommitment penalties are the same
for both parties in all bids, κ does not influence the reparability of
a given set of bids. It merely influences the levels of penalties and
compensation transacted by agents. Low values of κ incur low bid
withdrawal penalties and simulate a dictatorial bid-taker who does
not adequately compensate bidders for item withdrawal. Andersson
and Sandholm [1] found that myopic agents reach a higher social
welfare quicker if they act selfishly rather than cooperatively when
penalties in leveled commitment contracts are low. Increased levels
of bid withdrawal are likely when the penalties are low also.
High values of κ tend towards full-commitment and reduce the
advantages of such Winner's Curse & Bid-taker's Exposure
insurance. The penalties paid are used to fund a reassignment of
items to form a repair solution of sufficient revenue by
compensating previously successful bidders for withdrawal of the items from
them.
EXAMPLE 2. Consider the example given in Table 1 once more,
where the bids also comprise a mutual bid bond of 5% of the bid
amount. If a bid is withdrawn, the bidder forfeits this amount and
the bid-taker can then compensate winning bidders whose items are
withdrawn when trying to form a repair solution later. The search
for repair solutions for breaks to bid1 and bid2 appear in Figures
2(a) and 2(b), respectively.³
When bid1 breaks, there is a compensation penalty paid to the
bid-taker equal to 5 that can be used to fund a reassignment of the
items. We therefore set β to 5 and this becomes the maximum
expenditure allowed to withdraw items from winning bidders. β
may also be viewed as the size of the fund available to facilitate
backtracking by the bid-taker. When we extend the partial repair
for bid1 so that bid2 loses an item (branch 8.1), the overall cost of
repair increases to 5, due to this item withdrawal by the bid-taker,
³ The actual implementation of WSS search checks previous solutions to see if they can repair breaks before searching for a new repair solution. ⟨0, 0, 1⟩ is a solution that has already been found,
so the search for a repair in this example is not strictly necessary
but is described for pedagogical reasons.
[Figure 2: Repair Search Tree for breaks 1 and 2, κ = 0.05. (a) Search for a repair for bid 1 breakage; (b) search for a repair for bid 2 breakage. The trees record the inertia and accumulated β cost at each assignment, with branches pruned for insufficient revenue.]
and is just within the limit given by β. In Figure 1(a) the search path
follows the dashed line and sets bid3 to be 0 (branch 9). The repair
solutions for bids 1 and 2 can be extended further by assigning bid3
to 1 (branches 9.2 and 9.4). Therefore, ⟨1, 1, 0⟩ may be considered a robust solution. Recall that previously this was not the case.
Using mutual bid bonds thus increases reparability and allows a
robust solution of revenue 200 as opposed to 190, as was previously
the case.
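The penalty and compensation flows of Example 2 amount to simple bookkeeping; a sketch with κ = 0.05 (names ours):

kappa = 0.05
amount = {'x1': 100, 'x2': 100, 'x3': 190}

# x1 is withdrawn: its forfeited bond becomes the repair fund beta.
beta = kappa * amount['x1']               # 5.0

# Repairing with x3 requires the bid-taker to withdraw item B from x2,
# which costs x2's own bond amount in compensation.
compensation = kappa * amount['x2']       # 5.0
assert compensation <= beta               # the repair is within budget
print(beta, compensation)                 # 5.0 5.0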
5. EXPERIMENTS
We have used the Combinatorial Auction Test Suite (CATS) [16]
to generate sample auction data. We generated 100 instances of
problems in which there are 20 items for sale and 100-2000 bids
that may be dominated in some instances.⁴ Such dominated bids
can participate in repair solutions although they do not feature in
optimal solutions. CATS uses economically motivated bidding
patterns to generate auction data in various scenarios. To motivate the
research presented in this paper we use sensitivity analysis to
examine the brittleness of optimal solutions and hence determine the
types of auctions most likely to benefit from a robust solution. We
then establish robust solutions for CAs using the WSS framework.
5.1 Sensitivity Analysis for the WDP
We have performed sensitivity analysis of the following four
distributions: airport take-off/landing slots (matching), electronic
components (arbitrary), property/spectrum-rights (regions)
and transportation (paths). These distributions were chosen
because they describe a broad array of bidding patterns in different
application domains.
The method used is as follows. We first of all determined the
optimal solution using lp_solve, a mixed integer linear program
solver [3]. We then simulated a single bid withdrawal and re-solved
the problem with the other winning bids remaining fixed, i.e. there
were no involuntary dropouts. The optimal repair solution was
then determined. This process is repeated for all winning bids in
the overall optimal solution, thus assuming that all bids are brittle.
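Restated in code, one round of this analysis looks as follows (a sketch reusing the illustrative solve_wdp from Section 2; for brevity it omits the rule, described below, excluding the reneging bidder's other bids):

def repair_revenue(bids, winners, withdrawn):
    """Best repair revenue when `withdrawn` reneges and every other
    winner keeps its items (no involuntary dropouts)."""
    kept = [w for w in winners if w != withdrawn]
    locked = set().union(set(), *(bids[w][0] for w in kept))
    base = sum(bids[w][1] for w in kept)
    # Only previously losing bids over still-free items may join the repair.
    losers = [(items, p) for j, (items, p) in enumerate(bids)
              if j != withdrawn and j not in winners and not (items & locked)]
    return base + solve_wdp(losers)[0]

optimum, winners = solve_wdp(bids)
for w in winners:
    print(w, repair_revenue(bids, winners, w) / optimum)
# For the Table 1 instance: withdrawing x1 or x2 repairs to only 100/200 = 0.5.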
Figure 3 shows the average revenue of such repair solutions as a
percentage of the optimum. Also shown is the average worst-case
scenario over 100 auctions. We also implemented an auction rule
that disallows bids from the reneging bidder from participating in a repair.⁵
Figure 3(a) illustrates how the paths distribution is inherently
the most robust distribution since when any winning bid is
withdrawn the solution can be repaired to achieve over 98.5% of the
⁴ The CATS flags included int prices with the bid alpha parameter set to 1000.
⁵ We assumed that all bids in a given XOR bid with the same dummy item were from the same bidder.
optimal revenue on average for auctions with more than 250 bids.
There are some cases however when such withdrawals result in
solutions whose revenue is significantly lower than optimum. Even
in auctions with as many as 2000 bids there are occasions when a
single bid withdrawal can result in a drop in revenue of over 5%,
although the average worst-case drop in revenue is only 1%.
Figure 3(b) shows how the matching distribution is more brittle on
average than paths and also has an inferior worst-case revenue on
average. This trend continues as the regions-npv (Figure 3(c))
and arbitrary-npv (Figure 3(d)) distributions are more
brittle still. These distributions are clearly sensitive to bid withdrawal
when no other winning bids in the solution may be involuntarily
withdrawn by the bid-taker.
5.2 Robust Solutions using WSS
In this section we focus upon both the arbitrary-npv and
regions-npv distributions because the sensitivity analysis
indicated that these types of auctions produce optimal solutions that
tend to be most brittle, and therefore stand to benefit most from
solution robustness. We ignore the auctions with 2000 bids because
the sensitivity analysis has indicated that these auctions are
inherently robust with a very low average drop in revenue following a bid
withdrawal. They would also be very computationally expensive,
given the extra complexity of finding robust solutions.
A pure CP approach needs to be augmented with global
constraints that incorporate operations research techniques to increase
pruning sufficiently so that thousands of bids may be examined.
Global constraints exploit special-purpose filtering algorithms to
improve performance [21]. There are a number of ways to speed
up the search for a weighted super solution in a CA, although this
is not the main focus of our current work. Polynomial matching
algorithms may be used in auctions whose bid length is short, such
as those for airport landing/take-off slots for example. The integer
programming formulation of the WDP stipulates that a bid either
loses or wins. If we relax this constraint so that bids can partially
win, this corresponds to the linear relaxation of the problem and is
solvable in polynomial time. At each node of the search tree we
can quickly solve the linear relaxation of the remaining problem in
the subtree below the current node to establish an upper bound on
remaining revenue. If this upper bound plus revenue in the parent
tree is less than the current lower bound on revenue, search at that
node can cease. The (continuous) LP relaxation thus provides a
vital speed-up in the search for weighted super solutions, which we
have exploited in our implementation. The LP formulation is as
follows:
max Σ_{xi∈V} ai xi   s.t.   Σ_{j|i∈Sj} xj ≤ 1, ∀i ∈ {1 . . . m},   xj ≥ 0, xj ∈ R.
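A sketch of computing this upper bound with an off-the-shelf LP solver (scipy's linprog minimizes, so the objective is negated; the matrix construction and names are ours):

import numpy as np
from scipy.optimize import linprog

def lp_upper_bound(bids, items):
    """Optimal value of the LP relaxation: an upper bound on WDP revenue."""
    prices = np.array([p for _, p in bids], dtype=float)
    # One "at most one winner per item" row per item.
    A = np.array([[1.0 if item in s else 0.0 for s, _ in bids]
                  for item in items])
    res = linprog(-prices, A_ub=A, b_ub=np.ones(len(items)),
                  bounds=[(0, 1)] * len(bids), method='highs')
    return -res.fun

print(lp_upper_bound([({'A'}, 100), ({'B'}, 100), ({'A', 'B'}, 190)],
                     ['A', 'B']))    # 200.0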
[Figure 3: Sensitivity of bid distributions to single bid withdrawal. Panels: (a) paths; (b) matching; (c) regions-npv; (d) arbitrary-npv. Each panel plots revenue (% of optimum) against the number of bids (250-2000), showing the average and worst-case repair solution revenue.]
Additional techniques, outlined in [25], can aid the
scalability of a CP approach but our main aim in these experiments is
to examine the robustness of various auction distributions and
consider the tradeoff between robustness and revenue. The WSS solver
we have developed is an extension of the super solution solver
presented in [9, 10]. This solver is, in turn, based upon the EFC
constraint solver [2].
Combinatorial auctions are easily modeled as constraint
optimization problems. We have chosen the branch-on-bids
formulation because in tests it worked faster than a branch-on-items
formulation for the arbitrary-npv and regions-npv
distributions. All variables are binary and our search mechanism uses a
reverse lexicographic value ordering heuristic. This complements our
dynamic variable ordering heuristic that selects the most promising
unassigned variable as the next one in the search tree. We use the
product of the solution of the LP relaxation and the degree of a
variable to determine the likelihood of its participation in a robust
solution. High values in the LP solution are a strong indication of
variables most likely to form a high revenue solution, whilst a variable's degree reflects the number of other bids that overlap in
terms of desired items. Bids for large numbers of items tend to be
more robust, which is why we weight our robust solution search in
this manner. We found this heuristic to be slightly more effective
than the LP solution alone. As the number of bids in the auction
increases however, there is an increase in the inherent robustness
of solutions so the degree of a variable loses significance as the
auction size increases.
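In code, this heuristic reduces to one line; a sketch assuming lp_val holds the LP-relaxation solution and degree the number of overlapping bids for each bid variable (names ours):

def next_variable(unassigned, lp_val, degree):
    # Most promising bid: highest product of LP value and constraint degree.
    return max(unassigned, key=lambda j: lp_val[j] * degree[j])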
5.3 Results
Our experiments simulate three different constraints on repair
solutions. The first is that no winning bids are withdrawn by the
bid-taker and a repair solution must return a revenue of at least 90%
of the optimal overall solution. Secondly, we relaxed the revenue
constraint to 85% of optimum. Thirdly, we allowed backtracking
by the bid-taker on winning bids using mutual bid bonds but
maintaining the revenue constraint at 90% of optimum.
Prior to finding a robust solution we solved the WDP optimally
using lp_solve [3]. We then set the minimum tolerable revenue
for a solution to be 90% (then 85%) of the revenue of this
optimal solution. We assumed that all bids were brittle, thus a repair
solution is required for every bid in the solution. Initially we
assumed that no backtracking was permitted on assignments of items
to other winning bids given a bid withdrawal elsewhere in the
solution. Table 2 shows the percentage of optimal solutions that are
robust for minimum revenue constraints for repair solutions of 90%
and 85% of optimal revenue. Relaxing the revenue constraint on
repair solutions to 85% of the optimum revenue greatly increases the
number of optimal solutions that are robust. We also conducted
experiments on the same auctions in which backtracking by the
bid-taker is permitted using mutual bid bonds. This significantly
improves the reparability of optimal solutions whilst still
maintaining repair solutions of 90% of optimum. An interesting feature
of the arbitrary-npv distribution is that optimal solutions can
become more brittle as the number of bids increases. The reason
for this is that optimal solutions for larger auctions have more
winning bids. Some of the optimal solutions for the smallest auctions
with 100 bids have only one winning bidder. If this bid is
withdrawn it is usually easy to find a new repair solution within 90% of
the previous optimal revenue. Also, repair solutions for bids that
contain a small number of items may be made difficult by the fact
that a reduced number of bids cover only a subset of those items. A
mitigating factor is that such bids form a smaller percentage of the
revenue of the optimal solution on average.
We also implemented a rule stipulating that any losing bids from
Table 2: Optimal Solutions that are Inherently Robust (%).
#Bids
Min Revenue 100 250 500 1000 2000
arbitrary-npv
repair ≥ 90% 21 5 3 37 93
repair ≥ 85% 26 15 40 87 100
MBB & repair ≥ 90% 41 35 60 94 ≥ 93
regions-npv
repair ≥ 90% 30 33 61 91 98
repair ≥ 85% 50 71 95 100 100
MBB & repair ≥ 90% 60 78 96 99 ≥ 98
Table 3: Occurrence of Robust Solutions (%).
#Bids
Min Revenue 100 250 500 1000
arbitrary-npv
repair ≥ 90% 58 39 51 98
repair ≥ 85% 86 88 94 99
MBB & repair ≥ 90% 78 86 98 100
regions-npv
repair ≥ 90% 61 70 97 100
repair ≥ 85% 89 99 99 100
MBB & repair ≥ 90% 83 96 100 100
a withdrawing bidder cannot participate in a repair solution. This
acts as a disincentive for strategic withdrawal and was also used
previously in the sensitivity analysis. In some auctions, a robust
solution may not exist. Table 3 shows the percentage of auctions that
support robust solutions for the arbitrary-npv and regions
-npv distributions. It is clear that finding robust solutions for the
former distribution is particularly difficult for auctions with 250 and
500 bids when revenue constraints are 90% of optimum. This
difficulty was previously alluded to by the low percentage of optimal
solutions that were robust for these auctions. Relaxing the revenue
constraint helps increase the percentage of auctions in which robust
solutions are achievable to 88% and 94%, respectively. This
improves the reparability of all solutions thereby increasing the
average revenue of the optimal robust solution. It is somewhat
counterintuitive to expect a reduction in reparability of auction solutions as
the number of bids increases because there tends to be an increased
number of solutions above a revenue threshold in larger auctions.
The MBB auction model performs very well however, and ensures
that robust solutions are achievable for such inherently brittle
auctions without sacrificing over 10% of optimal revenue to achieve
repair solutions.
Figure 4 shows the average revenue of the optimal robust
solution as a percentage of the overall optimum. Repair solutions found
for a WSS provide a lower bound on possible revenue following a
bid withdrawal. Note that in some instances it is possible for a
repair solution to have higher revenue than the original solution.
When backtracking on winning bids by the bid-taker is disallowed,
this can only happen when the repair solution includes two or more
bids that were not in the original. Otherwise the repair bids would
participate in the optimal robust solution in place of the bid that
was withdrawn. A WSS guarantees minimum levels of revenue for
repair solutions but this is not to say that repair solutions cannot be
improved upon. It is possible to use an incremental algorithm to
100
98
96
94
92
250 500 750 1000 1250 1500 1750 2000
Revenue(%ofoptimum)
Bids
Repair Revenue: Min 90% Optimal
Repair Revenue: Min 85% Optimal
MBB: Repair Revenue: Min 90% Optimal
(a) regions-npv
100
98
96
94
92
250 500 750 1000 1250 1500 1750 2000
Revenue(%ofoptimum)
Bids
Repair Revenue: Min 90% Optimal
Repair Revenue: Min 85% Optimal
MBB: Repair Revenue: Min 90% Optimal
(b) arbitrary-npv
Figure 4: Revenue of optimal robust solutions.
determine an optimal repair solution following a break, whilst safe
in the knowledge that in advance of any possible bid withdrawal
we can establish a lower bound on the revenue of a repair. Kastner
et al. have provided such an incremental ILP formulation [15].
Mutual bid bonds facilitate backtracking by the bid-taker on
already assigned items. This improves the reparability of all
possible solutions thus increasing the revenue of the optimal robust
solution on average. Figure 4 shows the increase in revenue of
robust solutions in such instances. The revenues of repair
solutions are bounded by at least 90% of the optimum in our
experiments thereby allowing a direct comparison with robust solutions
already found using the same revenue constraint but not providing
for backtracking. It is immediately obvious that such a mechanism
can significantly increase revenue whilst still maintaining solution
robustness.
Table 4 shows the number of winning bids participating in
optimal and optimal robust solutions given the three different
constraints on repairing solutions listed at the beginning of this
section. As the number of bids increases, more of the optimal overall
solutions are robust. This leads to a convergence in the number of
winning bids. The numbers in brackets are derived from the
sensitivity analysis of optimal solutions that reveals the fact that almost
all optimal solutions for auctions of 2000 bids are robust. We can
therefore infer that the average number of winning bids in
revenue-maximizing robust solutions converges towards that of the optimal
overall solutions.
A notable side-effect of robust solutions is that fewer bids
participate in the solutions. It can be clearly seen from Table 4 that when
revenue constraints on repair solutions are tight, there are fewer
winning bids in the optimal robust solution on average. This is
particularly pronounced for smaller auctions in both distributions.
This can yield benefits for the bid-taker, such as reduced overheads in dealing with fewer suppliers.
Table 4: Number of winning bids.
#Bids
Solution 100 250 500 1000 2000
arbitrary-npv
Optimal 3.31 5.60 7.17 9.31 10.63
Repair ≥ 90% 1.40 2.18 6.10 9.03 (≈ 10.63)
Repair ≥ 85% 1.65 3.81 6.78 9.31 (10.63)
MBB (≥ 90%) 2.33 5.49 7.33 9.34 (≈ 10.63)
regions-npv
Optimal 4.34 7.05 9.10 10.67 12.76
Repair ≥ 90% 3.03 5.76 8.67 10.63 (≈ 12.76)
Repair ≥ 85% 3.45 6.75 9.07 (10.67) (12.76)
MBB (≥ 90%) 3.90 6.86 9.10 10.68 (≈ 12.76)
Although MBBs aid solution reparability, the number of bids in the solutions increases on average.
This is to be expected because a greater fraction of these solutions
are in fact optimal, as we saw in Table 2.
6. DISCUSSION AND FUTURE WORK
Bidding strategies can become complex in
non-incentive-compatible mechanisms where winner determination is no longer
necessarily optimal. The perceived reparability of a bid may influence the
bid amount, with reparable bids reaching a lower equilibrium point
and perceived irreparable bids being more aggressive.
Penalty payments for bid withdrawal also create an incentive for
more aggressive bidding by providing a form of insurance against
the winner's curse [8]. If a winning bidder's revised valuation for
a set of items drops by more than the penalty for withdrawal of
the bid, then it is in his best interests to forfeit the item(s) and pay
the penalty. Should the auction rules state that the bid-taker will
refuse to sell the items to any of the remaining bidders in the event
of a withdrawal, then insurance against potential losses will
stimulate more aggressive bidding. However, in our case we are seeking
to repair the solution with the given bids. A side-effect of such a
policy is to offset the increased aggressiveness by incentivizing
reduced valuations in expectation that another bidder"s successful bid
is withdrawn. Harstad and Rothkopf [8] examined the conditions
required to ensure an equilibrium position in which bidding was at
least as aggressive as if no bid withdrawal was permitted, given this
countervailing incentive to under-estimate a valuation. Three major results arose from their study of bid withdrawal in a single-item auction:
1. Equilibrium bidding is more aggressive with withdrawal for
sufficiently small probabilities of an award to the second
highest bidder in the event of a bid withdrawal;
2. Equilibrium bidding is more aggressive with withdrawal if
the number of bidders is large enough;
3. For many distributions of costs and estimates, equilibrium
bidding is more aggressive with withdrawal if the variability
of the estimating distribution is sufficiently large.
It is important that mutual bid bonds do not result in depressed
bidding in equilibrium. An analysis of the resultant behavior of
bidders must incorporate the possibility of a bidder winning an item
and having it withdrawn in order for the bid-taker to formulate a
repair solution after a break elsewhere. Harstad and Rothkopf have
analyzed bidder aggressiveness [8] using a strictly game-theoretic
model in which the only reason for bid withdrawal is the winner's curse. They assumed all bidders were risk-neutral, but surmised
that it is entirely possible for the bid-taker to collect a risk premium
from risk-averse bidders with the offer of such insurance.
Combinatorial auctions with mutual bid bonds add an extra incentive to
bid aggressively because of the possibility of being compensated
for having a winning bid withdrawn by a bid-taker. This is militated
against by the increased probability of not having items withdrawn
in a repair solution. We leave an in-depth analysis of the sufficient
conditions for more aggressive bidding for future work.
Whilst the WSS framework provides ample flexibility and
expressiveness, scalability becomes a problem for larger auctions.
Although solutions to larger auctions tend to be naturally more robust, some bid-takers in such auctions may still require guaranteed robustness. A
possible extension of our work in this paper may be to examine the
feasibility of reformulating integer linear programs so that the
solutions are robust. Hebrard et al. [10] examined reformulation of
CSPs for finding super solutions. Alternatively, it may be
possible to use a top-down approach by looking at the k-best solutions
sequentially, in terms of revenue, and performing sensitivity
analysis upon each solution until a robust one is found. In procurement
settings the principle of free disposal is often discounted and all
items must be sold. This reduces the number of potential solutions
and thereby reduces the reparability of each solution. The impact
of such a constraint on revenue of robust solutions is also left for
future work.
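As an illustration, the top-down approach could be sketched as follows (our own Python sketch; next_best_solution and is_robust are hypothetical stand-ins for a k-best winner-determination solver and the sensitivity analysis described above):

# Hypothetical sketch: scan solutions in decreasing order of revenue and
# return the first one that passes the robustness (sensitivity) test.
def first_robust_solution(next_best_solution, is_robust, k_max):
    for k in range(1, k_max + 1):
        solution = next_best_solution(k)   # k-th best solution by revenue
        if solution is None:               # fewer than k feasible solutions exist
            return None
        if is_robust(solution):
            return solution
    return None                            # no robust solution among the k_max best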
There is another interesting direction this work may take, namely
robust mechanism design. Porter et al. introduced the notion of
fault tolerant mechanism design in which agents have private
information regarding costs for task completion, but also their
probabilities of failure [20]. When the bid-taker has combinatorial
valuations for task completions it may be desirable to assign the same
task to multiple agents to ensure solution robustness. It is desirable
to minimize such potentially redundant task assignments but not to
the detriment of completed task valuations. This problem could be
modeled using the WSS framework in a similar manner to that of
combinatorial auctions.
In the case where no robust solutions are found, it is possible
to optimize robustness, instead of revenue, by finding a solution
of at least a given revenue that minimizes the probability of an
irreparable break. In this manner the least brittle solution of adequate
revenue may be chosen.
7. CONCLUSION
Fairness is often cited as a reason for choosing the optimal
solution in terms of revenue only [22]. Robust solutions militate against bids deemed brittle; bidders must therefore earn a reputation for being reliable in order to relax the reparability constraint attached to their
bids. This may be seen as being fair to long-standing business
partners whose reliability is unquestioned. Internet-based auctions
are often seen as unwelcome price-gouging exercises by suppliers
in many sectors [6, 17]. Traditional business partnerships are
being severed by increased competition amongst suppliers. Quality
of Service can suffer because of the increased focus on short-term
profitability to the detriment of the bid-taker in the long-term.
Robust solutions can provide a means of selectively discriminating
against distrusted bidders in a measured manner. As
combinatorial auction deployment moves from large value auctions with a
small pool of trusted bidders (e.g. spectrum-rights sales) towards
lower value auctions with potentially unknown bidders (e.g.
Supply Chain Management [30]), solution robustness becomes more
relevant. As well as being used to ensure that the bid-taker is not
left vulnerable to bid withdrawal, it may also be used to cement
relationships with preferred, possibly incumbent, suppliers.
We have shown that it is possible to attain robust solutions for
CAs with only a small loss in revenue. We have also illustrated
how such solutions tend to have fewer winning bids than overall
optimal solutions, thereby reducing any overheads associated with
dealing with more bidders. We have also demonstrated that
introducing mutual bid bonds, a form of leveled commitment contract,
can significantly increase the revenue of optimal robust solutions
by improving reparability. We contend that robust solutions
using such a mechanism can allow a bid-taker to offer the possibility
of bid withdrawal to bidders whilst remaining confident about
post-repair revenue and also facilitating increased bidder aggressiveness.
8. REFERENCES
[1] Martin Andersson and Tuomas Sandholm. Leveled
commitment contracts with myopic and strategic agents.
Journal of Economic Dynamics and Control, 25:615-640,
2001. Special issue on Agent-Based Computational
Economics.
[2] Fahiem Bacchus and George Katsirelos. EFC solver.
www.cs.toronto.edu/˜gkatsi/efc/efc.html.
[3] Michael Berkelaar, Kjell Eikland, and Peter Notebaert.
lp solve version 5.0.10.0.
http://groups.yahoo.com/group/lp_solve/.
[4] Rina Dechter. Constraint Processing. Morgan Kaufmann,
2003.
[5] Sven DeVries and Rakesh Vohra. Combinatorial auctions: A
survey. INFORMS Journal on Computing, pages 284-309,
2003.
[6] Jim Ericson. Reverse auctions: Bad idea. Line 56, Sept 2001.
[7] Matthew L. Ginsberg, Andrew J. Parkes, and Amitabha Roy.
Supermodels and Robustness. In Proceedings of AAAI-98,
pages 334-339, Madison, WI, 1998.
[8] Ronald M. Harstad and Michael H. Rothkopf. Withdrawable
bids as winner"s curse insurance. Operations Research,
43(6):982-994, November-December 1995.
[9] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Robust
solutions for constraint satisfaction and optimization. In
Proceedings of the European Conference on Artificial
Intelligence, pages 186-190, 2004.
[10] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Super
solutions in constraint programming. In Proceedings of
CP-AI-OR 2004, pages 157-172, 2004.
[11] Gail Hohner, John Rich, Ed Ng, Grant Reid, Andrew J.
Davenport, Jayant R. Kalagnanam, Ho Soo Lee, and Chae
An. Combinatorial and quantity-discount procurement
auctions benefit Mars Incorporated and its suppliers.
Interfaces, 33(1):23-35, 2003.
[12] Alan Holland and Barry O"Sullivan. Super solutions for
combinatorial auctions. In Ercim-Colognet Constraints
Workshop (CSCLP 04). Springer LNAI, Lausanne,
Switzerland, 2004.
[13] Alan Holland and Barry O"Sullivan. Weighted super
solutions for constraint programs, December 2004. Technical
Report: No. UCC-CS-2004-12-02.
[14] Selective Insurance. Business insurance.
http://www.selectiveinsurance.com/psApps
/Business/Ins/bonds.asp?bc=13.16.127.
[15] Ryan Kastner, Christina Hsieh, Miodrag Potkonjak, and
Majid Sarrafzadeh. On the sensitivity of incremental
algorithms for combinatorial auctions. In WECWIS, pages
81-88, June 2002.
[16] Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham.
Towards a universal test suite for combinatorial auction
algorithms. In ACM Conference on Electronic Commerce,
pages 66-76, 2000.
[17] Associated General Contractors of America. Associated
general contractors of America white paper on reverse
auctions for procurement of construction.
http://www.agc.org/content/public/pdf
/Member_Resources/
ReverseAuctionWhitePaper.pdf, 2003.
[18] National Society of Professional Engineers. A basic guide to
surety bonds. http://www.nspe.org/pracdiv
/76-02surebond.asp.
[19] Martin Pesendorfer and Estelle Cantillon. Combination
bidding in multi-unit auctions. Harvard Business School
Working Draft, 2003.
[20] Ryan Porter, Amir Ronen, Yoav Shoham, and Moshe
Tennenholtz. Mechanism design with execution uncertainty.
In Proceedings of UAI-02, pages 414-421, 2002.
[21] Jean-Charles Régin. Global constraints and filtering
algorithms. In Constraint and Integer Programming: Towards
a Unified Methodology, chapter 4, pages 89-129.
Kluwer Academic Publishers, 2004.
[22] Michael H. Rothkopf and Aleksandar Pekeč. Combinatorial
auction design. Management Science, 49(11):1485-1503,
November 2003.
[23] Michael H. Rothkopf, Aleksandar Pekeč, and Ronald M.
Harstad. Computationally manageable combinatorial
auctions. Management Science, 44(8):1131-1147, 1998.
[24] Daniel Sabin and Eugene C. Freuder. Contradicting
conventional wisdom in constraint satisfaction. In A. Cohn,
editor, Proceedings of ECAI-94, pages 125-129, 1994.
[25] Tuomas Sandholm. Algorithm for optimal winner
determination in combinatorial auctions. Artificial
Intelligence, 135(1-2):1-54, 2002.
[26] Tuomas Sandholm and Victor Lesser. Leveled Commitment
Contracts and Strategic Breach. Games and Economic
Behavior, 35:212-270, January 2001.
[27] Tuomas Sandholm and Victor Lesser. Leveled commitment
contracting: A backtracking instrument for multiagent
systems. AI Magazine, 23(3):89-100, 2002.
[28] Tuomas Sandholm, Sandeep Sikka, and Samphel Norden.
Algorithms for optimizing leveled commitment contracts. In
Proceedings of the IJCAI-99, pages 535-541. Morgan
Kaufmann Publishers Inc., 1999.
[29] Tuomas Sandholm and Yunhong Zhou. Surplus equivalence
of leveled commitment contracts. Artificial Intelligence,
142:239-264, 2002.
[30] William E. Walsh, Michael P. Wellman, and Fredrik Ygge.
Combinatorial auctions for supply chain formation. In ACM
Conference on Electronic Commerce, pages 260-269, 2000.
[31] Rainier Weigel and Christian Bliek. On reformulation of
constraint satisfaction problems. In Proceedings of ECAI-98,
pages 254-258, 1998.
[32] Margaret W. Wiener. Access spectrum bid withdrawal.
http://wireless.fcc.gov/auctions/33
/releases/da011719.pdf, July 2001.
| bid;bid withdrawal;winner determination problem;mutual bid bond;robustness;mandatory mutual bid bond;constraint programming;combinatorial auction;bid-taker's exposure problem;set partition problem;weighted super solution;constraint program;exposure problem;enforceable commitment;weight super solution
train_J-57 | Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games | We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents. This representation scheme captures characteristics of the interactions among the agents in a natural and concise manner. We also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation. The Shapley value can be computed in time linear in the size of the input. The emptiness of the core can be determined in time exponential only in the treewidth of a graphical interpretation of our representation. | 1. INTRODUCTION
Agents can often benefit by coordinating their actions.
Coalitional games capture these opportunities of
coordination by explicitly modeling the ability of the agents to take
joint actions as primitives. As an abstraction, coalitional
games assign a payoff to each group of agents in the game.
This payoff is intended to reflect the payoff the group of
agents can secure for themselves regardless of the actions
of the agents not in the group. These choices of primitives
are in contrast to those of non-cooperative games, in which agents are modeled independently and their payoffs depend critically on the actions chosen by the other agents.
1.1 Coalitional Games and E-Commerce
Coalitional games have appeared in the context of
e-commerce. In [7], Kleinberg et al. use coalitional games to study
recommendation systems. In their model, each individual
knows about a certain set of items, is interested in learning
about all items, and benefits from finding out about them.
The payoffs to groups of agents are the total number of
distinct items known by its members. Given this coalitional
game setting, Kleinberg et al. compute how much the private information of the agents is worth to the system using the solution concept of the Shapley value (the definition can be found in section 2). These values can then be used to
determine how much each agent should receive for participating
in the system.
As another example, consider the economics behind
supply chain formation. The increased use of the Internet as a
medium for conducting business has decreased the costs for
companies to coordinate their actions, and therefore
coalitional game is a good model for studying the supply chain
problem. Suppose that each manufacturer purchases his raw
materials from some set of suppliers, and that the suppliers
offer higher discount with more purchases. The decrease in
communication costs will let manufacturers find others
interested in the same set of suppliers cheaper, and facilitates
formation of coalitions to bargain with the suppliers.
Depending on the set of suppliers and how much from each
supplier each coalition purchases, we can assign payoffs to
the coalitions depending on the discount it receives. The
resulting game can be analyzed using coalitional game
theory, and we can answer questions such as the stability of
coalitions, and how to fairly divide the benefits among the
participating manufacturers. A similar problem,
combinatorial coalition formation, has previously been studied in [8].
1.2 Evaluation Criteria for Coalitional Game
Representation
To capture the coalitional games described above and
perform computations on them, we must first find a
representation for these games. The naïve solution is to enumerate the payoffs to each set of agents, therefore requiring space exponential in the number of agents in the game. For the
two applications described, the number of agents in the
system can easily exceed a hundred; this naïve approach will
not be scalable to such problems. Therefore, it is critical to
find good representation schemes for coalitional games.
We believe that the quality of a representation scheme
should be evaluated by four criteria.
Expressivity: the breadth of the class of coalitional games
covered by the representation.
Conciseness: the space requirement of the representation.
Efficiency: the efficiency of the algorithms we can develop
for the representation.
Simplicity: the ease of use of the representation by users
of the system.
The ideal representation should be fully expressive, i.e., it
should be able to represent any coalitional games, use as
little space as possible, have efficient algorithms for
computation, and be easy to use. The goal of this paper is to
develop a representation scheme that has properties close to
the ideal representation.
Unfortunately, given that the number of degrees of freedom of coalitional games is O(2^n), not all games can be represented concisely using a single scheme due to information
theoretic constraints. For any given class of games, one may
be able to develop a representation scheme that is tailored
and more compact than a general scheme. For example, for
the recommendation system game, a highly compact
representation would be one that simply states which agents know
of which products, and lets the algorithms that operate on the representation compute the values of coalitions appropriately. For some problems, however, there may not be
efficient algorithms for customized representations. By
having a general representation and efficient algorithms that go
with it, the representation will be useful as a prototyping
tool for studying new economic situations.
1.3 Previous Work
The question of coalitional game representation has only
been sparsely explored in the past [2, 3, 4]. In [4], Deng
and Papadimitriou focused on the complexity of different
solution concepts on coalitional games defined on graphs.
While the representation is compact, it is not fully
expressive. In [2], Conitzer and Sandholm looked into the problem
of determining the emptiness of the core in superadditive
games. They developed a compact representation scheme
for such games, but again the representation is not fully
expressive either. In [3], Conitzer and Sandholm developed a
fully expressive representation scheme based on
decomposition. Our work extends and generalizes the representation
schemes in [3, 4] through decomposing the game into a set of
rules that assign marginal contributions to groups of agents.
We will give a more detailed review of these papers in section
2.2 after covering the technical background.
1.4 Summary of Our Contributions
• We develop the marginal contribution networks
representation, a fully expressive representation scheme
whose size scales according to the complexity of the
interactions among the agents. We believe that the
representation is also simple and intuitive.
• We develop an algorithm for computing the Shapley
value of coalitional games under this representation
that runs in time linear in the size of the input.
• Under the graphical interpretation of the representation, we develop an algorithm for determining whether a payoff vector is in the core and whether the core is empty, in time exponential only in the treewidth of the graph.
2. PRELIMINARIES
In this section, we will briefly review the basics of coalitional game theory and its two primary solution concepts, the Shapley value and the core.[1] We will also review previous work on coalitional game representation in more detail.
Throughout this paper, we will assume that the payoff to
a group of agents can be freely distributed among its
members. This assumption is often known as the transferable
utility assumption.
2.1 Technical Background
We can represent a coalitional game with transferable utility by the pair (N, v), where
• N is the set of agents; and
• v : 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff.
This representation is known as the characteristic form. As
there are exponentially many subsets, it will take space
exponential in the number of agents to describe a coalitional
game.
An outcome in a coalitional game specifies the utilities
the agents receive. A solution concept assigns to each
coalitional game a set of reasonable outcomes. Different
solution concepts attempt to capture in some way outcomes
that are stable and/or fair. Two of the best known solution
concepts are the Shapley value and the core.
The Shapley value is a normative solution concept. It
prescribes a fair way to divide the gains from cooperation
when the grand coalition (i.e., N) is formed. The division
of payoff to agent i is the average marginal contribution of
agent i over all possible permutations of the agents.
Formally, let φi(v) denote the Shapley value of i under characteristic function v; then[2]

φi(v) = Σ_{S ⊆ N \ {i}} [s!(n − s − 1)!/n!] · (v(S ∪ {i}) − v(S))   (1)
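For concreteness, equation (1) can be evaluated directly by brute force, as in the following sketch (ours, not the paper's; it takes time exponential in n, which is what the algorithm of section 4 avoids for MC-nets):

from itertools import combinations
from math import factorial

# Direct evaluation of equation (1); v maps a frozenset of agents to a payoff.
def shapley_value(agents, v, i):
    n = len(agents)
    others = [a for a in agents if a != i]
    total = 0.0
    for s in range(n):                       # s = |S|, where i is not in S
        for S in combinations(others, s):
            S = frozenset(S)
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            total += weight * (v(S | {i}) - v(S))
    return total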
The Shapley value is a solution concept that satisfies many
nice properties, and has been studied extensively in the
economic and game theoretic literature. It has a very useful
axiomatic characterization.
Efficiency (EFF) A total of v(N) is distributed to the agents, i.e., Σ_{i∈N} φi(v) = v(N).
Symmetry (SYM) If agents i and j are interchangeable,
then φi(v) = φj(v).
[1] The materials and terminology are based on the textbooks by Mas-Colell et al. [9] and Osborne and Rubinstein [11].
[2] As a notational convenience, we will use the lower-case letter to represent the cardinality of a set denoted by the corresponding upper-case letter.
Dummy (DUM) If agent i is a dummy player, i.e., his marginal contribution to every group S is the same, then φi(v) = v({i}).
Additivity (ADD) For any two coalitional games v and
w defined over the same set of agents N, φi(v + w) =
φi(v) + φi(w) for all i ∈ N, where the game v + w is
defined as (v + w)(S) = v(S) + w(S) for all S ⊆ N.
We will refer to these axioms later in our proof of correctness
of the algorithm for computing the Shapley value under our
representation in section 4.
The core is another major solution concept for coalitional
games. It is a descriptive solution concept that focuses on
outcomes that are stable. Stability under core means that
no set of players can jointly deviate to improve their payoffs.
Formally, let x(S) denote Σ_{i∈S} xi. An outcome x ∈ R^n is in the core if

x(S) ≥ v(S) for all S ⊆ N   (2)
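Condition (2) can be checked by brute force, as in this sketch (ours; it enumerates all 2^n − 1 coalitions, which is what motivates the algorithms of section 5):

from itertools import combinations

# x maps each agent to its payoff; v maps frozensets of agents to values.
def in_core(agents, v, x):
    for size in range(1, len(agents) + 1):
        for S in combinations(agents, size):
            S = frozenset(S)
            if sum(x[i] for i in S) < v(S):
                return False          # coalition S can profitably deviate
    return True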
The core was one of the first proposed solution concepts
for coalitional games, and had been studied in detail. An
important question for a given coalitional game is whether
the core is empty. In other words, whether there is any
outcome that is stable relative to group deviation. For a
game to have a non-empty core, it must satisfy the property
of balancedness, defined as follows. Let 1S ∈ R^n denote the characteristic vector of S, given by

(1S)i = 1 if i ∈ S, and 0 otherwise.

Let (λS)_{S⊆N} be a set of weights such that each λS is in the range between 0 and 1. This set of weights, (λS)_{S⊆N}, is a balanced collection if for all i ∈ N,

Σ_{S⊆N} λS (1S)i = 1

A game is balanced if for all balanced collections of weights,

Σ_{S⊆N} λS v(S) ≤ v(N)   (3)
By the Bondareva-Shapley theorem, the core of a
coalitional game is non-empty if and only if the game is
balanced. Therefore, we can use linear programming to
determine whether the core of a game is empty.
maximize_{λ ∈ R^{2^n}}   Σ_{S⊆N} λS v(S)
subject to   Σ_{S⊆N} λS (1S)i = 1   ∀i ∈ N
             λS ≥ 0   ∀S ⊆ N          (4)
If the optimal value of (4) is greater than the value of the
grand coalition, then the core is empty. Unfortunately, this
program has an exponential number of variables in the
number of players in the game, and hence an algorithm that
operates directly on this program would be infeasible in practice.
In section 5.4, we will describe an algorithm that answers
the question of emptiness of core that works on the dual of
this program instead.
2.2 Previous Work Revisited
Deng and Papadimitriou looked into the complexity of
various solution concepts on coalitional games played on
weighted graphs in [4]. In their representation, the set of
agents are the nodes of the graph, and the value of a set of
agents S is the sum of the weights of the edges spanned by
them. Notice that this representation is concise since the space required to specify such a game is O(n^2). However,
this representation is not general; it will not be able to
represent interactions among three or more agents. For example,
it will not be able to represent the majority game, where a
group of agents S will have value of 1 if and only if s > n/2.
On the other hand, there is an efficient algorithm for
computing the Shapley value of the game, and for determining
whether the core is empty under the restriction of positive
edge weights. However, in the unrestricted case,
determining whether the core is non-empty is coNP-complete.
Conitzer and Sandholm in [2] considered coalitional games
that are superadditive. They described a concise
representation scheme that only states the value of a coalition if the
value is strictly superadditive. More precisely, the semantics
of the representation is that for a group of agents S,
v(S) = max
{T1,T2,...,Tn}∈Π
i
v(Ti)
where Π is the set of all possible partitions of S. The value
v(S) is only explicitly specified for S if v(S) is greater than
all partitioning of S other than the trivial partition ({S}).
While this representation can represent all games that are
superadditive, there are coalitional games that it cannot
represent. For example, it will not be able to represent any
games with substitutability among the agents. An
example of a game that cannot be represented is the unit game,
where v(S) = 1 as long as S = ∅. Under this
representation, the authors showed that determining whether the core
is non-empty is coNP-complete. In fact, even determining
the value of a group of agents is NP-complete.
In a more recent paper, Conitzer and Sandholm described
a representation that decomposes a coalitional game into a
number of subgames whose sum add up to the original game
[3]. The payoffs in these subgames are then represented by
their respective characteristic functions. This scheme is fully
general as the characteristic form is a special case of this
representation. For any given game, there may be multiple
ways to decompose the game, and the decomposition may
influence the computational complexity. For computing the
Shapley value, the authors showed that the complexity is
linear in the input description; in particular, if the largest subgame (as measured by number of agents) is of size n and the number of subgames is m, then their algorithm runs in O(m·2^n) time, where the input size will also be O(m·2^n).
On the other hand, the problem of determining whether a
certain outcome is in the core is coNP-complete.
3. MARGINAL CONTRIBUTION NETS
In this section, we will describe the Marginal Contribution
Networks representation scheme. We will show that the idea
is flexible, and we can easily extend it to increase its
conciseness. We will also show how we can use this scheme to
represent the recommendation game from the introduction.
Finally, we will show that this scheme is fully expressive,
and generalizes the representation schemes in [3, 4].
3.1 Rules and MarginalContributionNetworks
The basic idea behind marginal contribution networks
(MC-nets) is to represent coalitional games using sets of
rules. The rules in MC-nets have the following syntactic
form:
Pattern → value
A rule is said to apply to a group of agents S if S meets
the requirement of the Pattern. In the basic scheme, these
patterns are conjunctions of agents, and S meets the
requirement of the given pattern if S is a superset of it. The
value of a group of agents is defined to be the sum over the
values of all rules that apply to the group. For example, if
the set of rules are
{a ∧ b} → 5
{b} → 2
then v({a}) = 0, v({b}) = 2, and v({a, b}) = 5 + 2 = 7.
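This additive semantics is easy to state in code; the following is a minimal sketch of ours (a rule is a pair of a pattern, given as the set of required agents, and a value):

# Basic MC-net evaluation: a rule applies to a group S when S is a
# superset of its pattern; v(S) is the sum of the values of all rules
# that apply to S.
def value(rules, S):
    S = frozenset(S)
    return sum(v for pattern, v in rules if frozenset(pattern) <= S)

rules = [({'a', 'b'}, 5), ({'b'}, 2)]
assert value(rules, {'a'}) == 0
assert value(rules, {'b'}) == 2
assert value(rules, {'a', 'b'}) == 7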
MC-nets is a very flexible representation scheme, and can
be extended in different ways. One simple way to extend
it and increase its conciseness is to allow a wider class of
patterns in the rules. A pattern that we will use throughout
the remainder of the paper is one that applies only in the
absence of certain agents. This is useful for expressing
concepts such as substitutability or default values. Formally,
we express such patterns by
{p1 ∧ p2 ∧ . . . ∧ pm ∧ ¬n1 ∧ ¬n2 ∧ . . . ∧ ¬nn}
which has the semantics that such rule will apply to a group
S only if {pi}m
i=1 ∈ S and {nj}n
j=1 /∈ S. We will call
the {pi}m
i=1 in the above pattern the positive literals, and
{nj}n
j=1 the negative literals. Note that if the pattern of
a rule consists solely of negative literals, we will consider
that the empty set of agents will also satisfy such pattern,
and hence v(∅) may be non-zero in the presence of negative
literals.
To demonstrate the increase in conciseness of
representation, consider the unit game described in section 2.2. To
represent such a game without using negative literals, we will need 2^n rules for n players: we need a rule of value 1
for each individual agent, a rule of value −1 for each pair of
agents to counter the double-counting, a rule of value 1 for
each triplet of agents, etc., similar to the inclusion-exclusion
principle. On the other hand, using negative literals, we
only need n rules: value 1 for the first agent, value 1 for the
second agent in the absence of the first agent, value 1 for the
third agent in the absence of the first two agents, etc. The
representational savings can be exponential in the number
of agents.
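To illustrate (again a sketch of our own, not code from the paper), the evaluation function extends naturally to negative literals, and the n rules of the unit game can be generated in a loop:

# A pattern is now a pair (positives, negatives); a rule applies to S when
# S contains every positive literal and none of the negative literals.
def value(rules, S):
    S = frozenset(S)
    return sum(v for (pos, neg), v in rules
               if frozenset(pos) <= S and not frozenset(neg) & S)

# Unit game over agents 1..n: agent k has value 1 in the absence of 1..k-1.
def unit_game_rules(n):
    return [(({k}, set(range(1, k))), 1) for k in range(1, n + 1)]

rules = unit_game_rules(4)
assert value(rules, set()) == 0
assert value(rules, {3}) == 1
assert value(rules, {1, 2, 3, 4}) == 1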
Given a game represented as a MC-net, we can interpret
the set of rules that make up the game as a graph. We call
this graph the agent graph. The nodes in the graph will
represent the agents in the game, and for each rule in the
MC-net, we connect all the agents in the rule together and assign
a value to the clique formed by the set of agents. Notice that
to accommodate negative literals, we will need to annotate
the clique appropriately. This alternative view of MC-nets
will be useful in our algorithm for Core-Membership in
section 5.
We would like to end our discussion of the representation
scheme by mentioning a trade-off between the
expressiveness of patterns and the space required to represent them.
To represent a coalitional game in characteristic form, one
would need to specify all 2^n − 1 values. There is no
overhead on top of that since there is a natural ordering of the
groups. For MC-nets, however, specification of the rules
requires specifying both the patterns and the values. The
patterns, if not represented compactly, may end up
overwhelming the savings from having fewer values to specify.
The space required for the patterns also leads to a
trade-off between the expressiveness of the allowed patterns and
the simplicity of representing them. However, we believe
that for most naturally arising games, there should be
sufficient structure in the problem such that our representation
achieves a net saving over the characteristic form.
3.2 Example: Recommendation Game
As an example, we will use MC-net to represent the
recommendation game discussed in the introduction. For each
product, as the benefit of knowing about the product will
count only once for each group, we need to capture
substitutability among the agents. This can be captured by a
scaled unit game. Suppose the value of the knowledge about product i is vi, and there are ni agents, denoted by {x_i^j}, who know about the product; the game for product i can then be represented as the following rules:

{x_i^1} → vi
{x_i^2 ∧ ¬x_i^1} → vi
...
{x_i^{n_i} ∧ ¬x_i^{n_i−1} ∧ ··· ∧ ¬x_i^1} → vi
The entire game can then be built up from the sets of rules
of each product. The space requirement will be O(m·n*), where m is the number of products in the system, and n* is the maximum number of agents who know of the same product.
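A sketch of how this rule set might be assembled (our illustration with hypothetical inputs: knowers[i] lists the agents x_i^1, ..., x_i^{n_i} who know product i, and values[i] is vi; value() is the evaluation function sketched in section 3.1):

def value(rules, S):
    S = frozenset(S)
    return sum(v for (pos, neg), v in rules
               if frozenset(pos) <= S and not frozenset(neg) & S)

# One scaled unit game per product: the j-th knower of product i earns
# v_i only in the absence of knowers 1..j-1 of that product.
def recommendation_rules(knowers, values):
    return [(({agent}, set(agents[:j])), values[i])
            for i, agents in enumerate(knowers)
            for j, agent in enumerate(agents)]

rules = recommendation_rules([['a', 'b'], ['b', 'c']], [3, 5])
assert value(rules, {'b'}) == 3 + 5            # b substitutes for a on the first product
assert value(rules, {'a', 'b', 'c'}) == 3 + 5  # each product counts only once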
3.3 Representation Power
We will discuss the expressiveness and conciseness of our
representation scheme and compare it with the previous
works in this subsection.
Proposition 1. Marginal contribution networks
constitute a fully expressive representation scheme.
Proof. Consider an arbitrary coalitional game N, v in
characteristic form representation. We can construct a set
of rules to describe this game by starting from the singleton
sets and building up the set of rules. For any singleton set {i}, we create a rule {i} → v({i}). For any pair of agents {i, j}, we create a rule {i ∧ j} → v({i, j}) − v({i}) − v({j}). We can continue to build up rules in a manner similar to the
inclusion-exclusion principle. Since the game is arbitrary,
MC-nets are fully expressive.
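The construction in the proof amounts to computing inclusion-exclusion residuals, which the following sketch of ours makes explicit (it is exponential in the number of agents, as is unavoidable for an arbitrary game):

from itertools import combinations

# v maps a frozenset of agents to its payoff; returns (pattern, value) rules.
def rules_from_characteristic(agents, v):
    rule_value = {}
    for size in range(1, len(agents) + 1):
        for S in combinations(agents, size):
            S = frozenset(S)
            # The rule for S carries what is left of v(S) after the rules
            # of all proper subsets of S have been accounted for.
            rule_value[S] = v(S) - sum(val for T, val in rule_value.items() if T < S)
    return [(S, val) for S, val in rule_value.items() if val != 0]

# Example: the majority game on 3 agents (value 1 iff |S| >= 2): each pair
# gets a rule of value 1, and the triple a rule of value -2 to correct
# the overcounting.
print(rules_from_characteristic([1, 2, 3], lambda S: 1 if len(S) >= 2 else 0))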
Using the construction outlined in the proof, we can show
that our representation scheme can simulate the multi-issue
representation scheme of [3] in almost the same amount of
space.
Proposition 2. Marginal contribution networks use at
most a linear factor (in the number of agents) more space
than multi-issue representation for any game.
Proof. Given a game in multi-issue representation, we
start by describing each of the subgames, which are
represented in characteristic form in [3], with a set of rules.
We then build up the grand game by including all the rules
from the subgames. Note that our representation may
require a space larger by a linear factor due to the need to
describe the patterns for each rule. On the other hand, our
approach may have fewer than exponential number of rules
for each subgame, depending on the structure of these
subgames, and therefore may be more concise than multi-issue
representation.
On the other hand, there are games that require
exponentially more space to represent under the multi-issue scheme
compared to our scheme.
Proposition 3. Marginal contribution networks are
exponentially more concise than multi-issue representation for
certain games.
Proof. Consider a unit game over all the agents N. As explained in section 3.1, this game can be represented in linear space using MC-nets with negative literals. However, as there is no decomposition of this game into smaller subgames, it will require space O(2^n) to represent this game under the multi-issue representation.
Under the agent graph interpretation of MC-nets, we can
see that MC-nets is a generalization of the graphical
representation in [4], namely from weighted graphs to weighted
hypergraphs.
Proposition 4. Marginal contribution networks can
represent any games in graphical form (under [4]) in the same
amount of space.
Proof. Given a game in graphical form, G, for each edge (i, j) with weight wij in the graph, we create a rule {i ∧ j} → wij. Clearly this takes exactly the same space as the size of G, and by the additive semantics of the rules, it represents the same game as G.
4. COMPUTING THE SHAPLEY VALUE
Given a MC-net, we have a simple algorithm to compute
the Shapley value of the game. Considering each rule as a
separate game, we start by computing the Shapley value of
the agents for each rule. For each agent, we then sum up
the Shapley values of that agent over all the rules. We first
show that this final summing process correctly computes the
Shapley value of the agents.
Proposition 5. The Shapley value of an agent in a marginal
contribution network is equal to the sum of the Shapley
values of that agent over each rule.
Proof. For any group S, under the MC-nets
representation, v(S) is defined to be the sum over the values of all the
rules that apply to S. Therefore, considering each rule as a
game, by the (ADD) axiom discussed in section 2, the
Shapley value of the game created from aggregating all the rules
is equal to the sum of the Shapley values over the rules.
The remaining question is how to compute the Shapley
values of the rules. We can separate the analysis into two
cases, one for rules with only positive literals and one for
rules with mixed literals.
For rules that have only positive literals, the Shapley value
of the agents is v/m, where v is the value of the rule and
m is the number of agents in the rule. This is a direct
consequence of the (SYM) axiom of the Shapley value, as
the agents in a rule are indistinguishable from each other.
For rules that have both positive and negative literals, we
can consider the positive and the negative literals separately.
For a given positive literal i, the rule will apply only if i
occurs in a given permutation after the rest of the positive
literals but before any of the negative literals. Formally, let
φi denote the Shapley value of i, p denote the cardinality of
the positive set, and n denote the cardinality of the negative
set, then
φi =
(p − 1)!n!
(p + n)!
v =
v
p p+n
n
For a given negative literal j, j will be responsible for
cancelling the application of the rule if all positive literals come
before the negative literals in the ordering, and j is the first
among the negative literals. Therefore,
φj = [p! (n − 1)! / (p + n)!] · (−v) = −v / [n · C(p+n, p)]
By the (SYM) axiom, all positive literals will have the value
of φi and all negative literals will have the value of φj.
Note that the sum over all agents in rules with mixed
literals is 0. This is to be expected as these rules contribute
0 to the grand coalition. The fact that these rules have no
effect on the grand coalition may appear odd at first. But
this is because the presence of such rules is to define the
values of coalitions smaller than the grand coalition.
In terms of computational complexity, given that the
Shapley value of any agent in a given rule can be computed in
time linear in the pattern of the rule, the total running time
of the algorithm for computing the Shapley value of the
game is linear in the size of the input.
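The whole procedure fits in a few lines; the following is a sketch of ours implementing the closed forms above (rules are (positives, negatives) patterns paired with values, as in section 3.1):

from collections import defaultdict
from math import comb

# Shapley values of an MC-net: per-rule Shapley values summed over rules
# (justified by the ADD axiom). Linear in the total size of the patterns.
def shapley_from_rules(rules):
    phi = defaultdict(float)
    for (pos, neg), v in rules:
        p, n = len(pos), len(neg)
        if n == 0:
            for i in pos:               # SYM: agents in the rule are symmetric
                phi[i] += v / p
        else:
            for i in pos:               # i last of the positives, before any negative
                phi[i] += v / (p * comb(p + n, n))
            for j in neg:               # j first of the negatives, after all positives
                phi[j] -= v / (n * comb(p + n, p))
    return dict(phi)

rules = [(({'a', 'b'}, set()), 5), (({'b'}, {'a'}), 2)]
print(shapley_from_rules(rules))   # {'a': 1.5, 'b': 3.5}, summing to v({a, b}) = 5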
5. ANSWERING CORE-RELATED
QUESTIONS
There are a few different but related computational
problems associated with the solution concept of the core. We
will focus on the following two problems:
Definition 1. (Core-Membership) Given a coalitional game
and a payoff vector x, determine if x is in the core.
Definition 2. (Core-Non-Emptiness) Given a coalitional
game, determine if the core is non-empty.
In the rest of the section, we will first show that these
two problems are coNP-complete and coNP-hard
respectively, and discuss some complexity considerations about
these problems. We will then review the main ideas of tree
decomposition as it will be used extensively in our algorithm
for Core-Membership. Next, we will present the algorithm
for Core-Membership, and show that the algorithm runs
in polynomial time for graphs of bounded treewidth. We end
by extending this algorithm to answer the question of
Core-Non-Emptiness in polynomial time for graphs of bounded
treewidth.
5.1 Computational Complexity
The hardness of Core-Membership and
Core-Non-Emptiness follows directly from the hardness results of games
over weighted graphs in [4].
Proposition 6. Core-Membership for games represented
as marginal contribution networks is coNP-complete.
Proof. Core-Membership in MC-nets is in the class
of coNP since any set of agents S for which v(S) > x(S)
will serve as a certificate to show that x does not belong to
the core. As for its hardness, given any instance of
Core-Membership for a game in graphical form of [4], we can encode the game in exactly the same space using MC-nets
due to Proposition 4. Since Core-Membership for games
in graphical form is coNP-complete, Core-Membership in
MC-nets is coNP-hard.
Proposition 7. Core-Non-Emptiness for games
represented as marginal contribution networks is coNP-hard.
Proof. The same argument for hardness between games in graphical form and MC-nets holds for the problem of Core-Non-Emptiness.
We do not know of a certificate to show that
Core-Non-Emptiness is in the class of coNP as of now. Note that
the obvious certificate of a balanced set of weights based
on the Bondareva-Shapley theorem is exponential in size. In
[4], Deng and Papadimitriou showed the coNP-completeness
of Core-Non-Emptiness via a combinatorial
characterization, namely that the core is non-empty if and only if
there is no negative cut in the graph. In MC-nets, however,
there need not be a negative hypercut in the graph for the
core to be empty, as demonstrated by the following game (N = {1, 2, 3, 4}):

v(S) = 1 if S = {1, 2, 3, 4};
       3/4 if S = {1, 2}, {1, 3}, {1, 4}, or {2, 3, 4};
       0 otherwise.   (5)
Applying the Bondareva-Shapley theorem, if we let λ12 =
λ13 = λ14 = 1/3, and λ234 = 2/3, this set of weights
demonstrates that the game is not balanced, and hence the core
is empty. On the other hand, this game can be represented
with MC-nets as follows (weights on hyperedges):
w({1, 2}) = w({1, 3}) = w({1, 4}) = 3/4
w({1, 2, 3}) = w({1, 2, 4}) = w({1, 3, 4}) = −6/4
w({2, 3, 4}) = 3/4
w({1, 2, 3, 4}) = 10/4
No matter how the set is partitioned, the sum over the
weights of the hyperedges in the cut is always non-negative.
To overcome the computational hardness of these
problems, we have developed algorithms that are based on tree
decomposition techniques. For Core-Membership, our
algorithm runs in time exponential only in the treewidth of the
agent graph. Thus, for graphs of small treewidth, such as
trees, we have a tractable solution to determine if a payoff
vector is in the core. By using this procedure as a separation oracle, i.e., a procedure for returning the inequality violated by a candidate solution, when solving a linear program related to Core-Non-Emptiness with the ellipsoid method, we can obtain a polynomial-time algorithm
for Core-Non-Emptiness for graphs of bounded treewidth.
5.2 Review of Tree Decomposition
As our algorithm for Core-Membership relies heavily
on tree decomposition, we will first briefly review the main
ideas in tree decomposition and treewidth.[3]
Definition 3. A tree decomposition of a graph G = (V, E)
is a pair (X, T), where T = (I, F) is a tree and X = {Xi | i ∈
I} is a family of subsets of V , one for each node of T, such
that
• ∪_{i∈I} Xi = V;
• For all edges (v, w) ∈ E, there exists an i ∈ I with
v ∈ Xi and w ∈ Xi; and
• (Running Intersection Property) For all i, j, k ∈ I: if j
is on the path from i to k in T, then Xi ∩ Xk ⊆ Xj.
The treewidth of a tree decomposition is defined as the
maximum cardinality over all sets in X, less one. The treewidth
of a graph is defined as the minimum treewidth over all tree
decompositions of the graph.
Given a tree decomposition, we can convert it into a nice
tree decomposition of the same treewidth, and of size linear
in that of T.
Definition 4. A tree decomposition T is nice if T is rooted
and has four types of nodes:
Leaf nodes i are leaves of T with |Xi| = 1.
Introduce nodes i have one child j such that Xi = Xj ∪ {v} for some v ∈ V.
Forget nodes i have one child j such that Xi = Xj \ {v}
for some v ∈ Xj.
Join nodes i have two children j and k with Xi = Xj =
Xk.
An example of a (partial) nice tree decomposition together
with a classification of the different types of nodes is in
Figure 1. In the following section, we will refer to nodes in the
tree decomposition as nodes, and nodes in the agent graph
as agents.
5.3 Algorithm for Core Membership
Our algorithm for Core-Membership takes as an input
a nice tree decomposition T of the agent graph and a payoff
vector x. By definition, if x belongs to the core, then for
all groups S ⊆ N, x(S) ≥ v(S). Therefore, the difference
x(S)−v(S) measures how close the group S is to violating
the core condition. We call this difference the excess of group
S.
Definition 5. The excess of a coalition S, e(S), is defined
as x(S) − v(S).
A brute-force approach to determine if a payoff vector
belongs to the core will have to check that the excesses of all
groups are non-negative. However, this approach ignores the
structure in the agent graph that will allow an algorithm to
infer that certain groups have non-negative excesses due to
the excesses computed elsewhere in the graph. Tree decomposition is the key to taking advantage of such inferences in a structured way.
[3] This is based largely on the materials from a survey paper by Bodlaender [1].
[Figure 1: Example of a (partial) nice tree decomposition, with a join node i (Xi = {1, 3, 4}); introduce nodes j and k (Xj = Xk = {1, 4}); a forget node l (Xl = {1, 4}); an introduce node m (Xm = {1, 2, 4}); a leaf node n (Xn = {4}); and a further join node.]
For now, let us focus on rules with positive literals.
Suppose we have already checked that the excesses of all sets
R ⊆ U are non-negative, and we would like to check if the
addition of an agent i to the set U will create a group with
negative excess. A naïve solution will be to compute the
excesses of all sets that include i. The excess of the group
(R ∪ {i}) for any group R can be computed as follows
e(R ∪ {i}) = e(R) + xi − v(c) (6)
where c is the cut between R and i, and v(c) is the sum of
the weights of the edges in the cut.
However, suppose that from the tree decomposition, we
know that i is only connected to a subset of U, say S, which
we will call the entry set to U. Ideally, because i does not
share any edges with members of Ū = (U \ S), we would
hope that an algorithm can take advantage of this structure
by checking only sets that are subsets of (S ∪ {i}). This
computational saving may be possible since (xi −v(c)) in the
update equation of (6) does not depend on Ū. However, we cannot simply ignore Ū, as members of Ū may still influence
the excesses of groups that include agent i through group
S. Specifically, if there exists a group T ⊃ S such that
e(T) < e(S), then even when e(S ∪ {i}) is non-negative, e(T ∪ {i}) may be negative. In other words, the excess available at S may have been drained away due
to T. This motivates the definition of the reserve of a group.
Definition 6. The reserve of a coalition S relative to a
coalition U is the minimum excess over all coalitions between
S and U, i.e., all T : S ⊆ T ⊆ U. We denote this value by
r(S, U). We will refer to the group T that has the minimum
excess as arg r(S, U). We will also call U the limiting set of
the reserve and S the base set of the reserve.
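Definition 6 reads directly as brute force, as in this sketch of ours (the algorithm below exists precisely to avoid this exponential enumeration):

from itertools import combinations

# r(S, U): the minimum excess e(T) = x(T) - v(T) over all T with S ⊆ T ⊆ U.
def reserve(S, U, v, x):
    S, U = frozenset(S), frozenset(U)
    free = list(U - S)
    best = float('inf')
    for size in range(len(free) + 1):
        for extra in combinations(free, size):
            T = S | frozenset(extra)
            best = min(best, sum(x[i] for i in T) - v(T))
    return best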
Our algorithm works by keeping track of the reserves of
all non-empty subsets that can be formed by the agents of a
node at each of the nodes of the tree decomposition. Starting
from the leaves of the tree and working towards the root,
at each node i, our algorithm computes the reserves of all
groups S ⊆ Xi, limited by the set of agents in the subtree
rooted at i, Ti, except those in (Xi\S). The agents in (Xi\S)
are excluded to ensure that S is an entry set. Specifically,
S is the entry set to ((Ti \ Xi) ∪ S).
To accommodate negative literals, we will need to make two adjustments. Firstly, the cut between an agent m and a
set S at node i now refers to the cut among agent m, set S,
and set ¬(Xi \ S), and its value must be computed
accordingly. Also, when an agent m is introduced to a group at an
introduce node, we will also need to consider the change in
the reserves of groups that do not include m due to possible
cut involving ¬m and the group.
As an example of the reserve values we keep track of at a
tree node, consider node i of the tree in Figure 1. At node
i, we will keep track of the following:
r({1}, {1, 2, . . .})
r({3}, {2, 3, . . .})
r({4}, {2, 4, . . .})
r({1, 3}, {1, 2, 3, . . .})
r({1, 4}, {1, 2, 4, . . .})
r({3, 4}, {2, 3, 4, . . .})
r({1, 3, 4}, {1, 2, 3, 4, . . .})
where the dots . . . refer to the agents rooted under node m.
For notational use, we will use ri(S) to denote r(S, U) at
node i where U is the set of agents in the subtree rooted at
node i excluding agents in (Xi \ S). We sometimes refer to
these values as the r-values of a node. The details of the
r-value computations are in Algorithm 1.
To determine if the payoff vector x is in the core, during
the r-value computation at each node, we can check if all of
the r-values are non-negative. If this is so for all nodes in
the tree, the payoff vector x is in the core. The correctness
of the algorithm is due to the following proposition.
Proposition 8. The payoff vector x is not in the core if and only if the r-value at some node i for some group S is negative.
Proof. (⇐) If the reserve at some node i for some group
S is negative, then there exists a coalition T for which
e(T) = x(T) − v(T) < 0, hence x is not in the core.
(⇒) Suppose x is not in the core; then there exists some group R∗ such that e(R∗) < 0. Let Xroot be the set of agents at the root node. Consider any set S ⊆ Xroot: rroot(S) will have the base set S and the limiting set ((N \ Xroot) ∪ S). The union over all of these ranges includes all sets U for which U ∩ Xroot ≠ ∅. Therefore, if R∗ is not disjoint from Xroot, the r-value of some group in the root is negative.
If R∗ is disjoint from Xroot, consider the forest {Ti} resulting
from removal of all tree nodes that include agents in Xroot.
Algorithm 1 Subprocedures for Core Membership
Leaf-Node(i)
1: ri(Xi) ← e(Xi)
Introduce-Node(i)
2: j ← child of i
3: m ← Xi \ Xj {the introduced node}
4: for all S ⊆ Xj, S = ∅ do
5: C ← all hyperedges in the cut of m, S, and ¬(Xi \ S)
6: ri(S ∪ {m}) ← rj(S) + xm − v(C)
7: C ← all hyperedges in the cut of ¬m, S, and ¬(Xi \S)
8: ri(S) ← rj(S) − v(C)
9: end for
10: ri({m}) ← e({m})
Forget-Node(i)
11: j ← child of i
12: m ← Xj \ Xi {the forgotten node}
13: for all S ⊆ Xi, S = ∅ do
14: ri(S) = min(rj(S), rj(S ∪ {m}))
15: end for
Join-Node(i)
16: {j, k} ← {left, right} child of i
17: for all S ⊆ Xi, S = ∅ do
18: ri(S) ← rj(S) + rk(S) − e(S)
19: end for
By the running intersection property, the sets of nodes in
the trees Ti"s are disjoint. Thus, if the set R∗
= i Si for
some Si ∈ Ti, e(R∗
) = i e(Si) < 0 implies some group
S∗
i has negative excess as well. Therefore, we only need to
check the r-values of the nodes on the individual trees in the
forest.
But for each tree in the forest, we can apply the same
argument restricted to the agents in the tree. In the base
case, we have the leaf nodes of the original tree
decomposition, say, for agent i. If R∗
= {i}, then r({i}) = e({i}) < 0.
Therefore, by induction, if e(R∗
) < 0, some reserve at some
node would be negative.
We will next explain the intuition behind the correctness
of the computations for the r-values in the tree nodes. A
detailed proof of correctness of these computations can be
found in the appendix under Lemmas 1 and 2.
Proposition 9. The procedures in Algorithm 1 correctly compute the r-values at each of the tree nodes.
Proof. (Sketch) We can perform a case analysis over
the four types of tree nodes in a nice tree decomposition.
Leaf nodes (i) The only reserve value to be computed is
ri(Xi), which equals r(Xi, Xi), and therefore it is just
the excess of group Xi.
Forget nodes (i with child j) Let m be the forgotten node.
For any subset S ⊆ Xi, arg ri(S) must be chosen
between the groups of S and S ∪ {m}, and hence we
choose between the lower of the two from the r-values
at node j.
Introduce nodes (i with child j) Let m be the introduced
node. For any subset T ⊆ Xi that includes m, let S
denote (T \ {m}). By the running intersection
property, there are no rules that involve m and agents of
the subtree rooted at node i except those involving
m and agents in Xi. As both the base set and the
limiting set of the r-values of node j and node i
differ by {m}, for any group V that lies between the
base set and the limiting set of node i, the excess of
group V will differ by a constant amount from the
corresponding group (V \ {m}) at node j. Therefore,
the set arg ri(T) equals the set arg rj(S) ∪ {m}, and
ri(T) = rj(S) + xm − v(cut), where v(cut) is the value
of the rules in the cut between m and S. For any
subset S ⊂ Xi that does not include m, we need to
consider the values of rules that include ¬m as a literal
in the pattern. Also, when computing the reserve, the
payoff xm will not contribute to group S. Therefore,
together with the running intersection property as
argued above, we can show that ri(S) = rj(S) − v(cut).
Join nodes (i with left child j and right child k) For any
given set S ⊆ Xi, consider the r-values of that set
at j and k. If arg rj(S) or arg rk(S) includes agents
not in S, then argrj(S) and argrk(S) will be
disjoint from each other due to the running intersection
property. Therefore, we can decompose arg ri(S) into
three sets, (arg rj(S) \ S) on the left, S in the middle,
and (arg rk(S) \ S) on the right. The reserve rj(S)
will cover the excesses on the left and in the middle,
whereas the reserve rk(S) will cover those on the right
and in the middle, and so the excesses in the middle is
double-counted. We adjust for the double-counting by
subtracting the excesses in the middle from the sum
of the two reserves rj(S) and rk(S).
Finally, note that each step in the computation of the r-values of each node i takes time at most exponential in the size of Xi; hence the algorithm runs in time exponential only
in the treewidth of the graph.
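Putting the pieces together, a compact executable rendering of Algorithm 1 might look as follows (a sketch of ours, restricted for simplicity to rules with positive literals only, so the negative-literal cuts of lines 7-8 are omitted; the nice tree decomposition is supplied explicitly as nested tuples):

from itertools import combinations

def v(rules, S):                 # rules: list of (frozenset pattern, value)
    return sum(val for pat, val in rules if pat <= S)

def excess(rules, x, S):
    return sum(x[i] for i in S) - v(rules, S)

def cut_value(rules, m, S):
    # Total value of the rules involving m whose other agents all lie in S.
    return sum(val for pat, val in rules if m in pat and pat - {m} <= S)

def nonempty_subsets(X):
    X = list(X)
    return [frozenset(c) for k in range(1, len(X) + 1)
            for c in combinations(X, k)]

def r_values(node, rules, x, mins):
    # Returns (X_i, r) for the subtree rooted at node, collecting the
    # minimum r-value of every node in mins.
    kind = node[0]
    if kind == 'leaf':
        X = frozenset([node[1]])
        r = {X: excess(rules, x, X)}
    elif kind == 'introduce':
        m, child = node[1], node[2]
        Xj, rj = r_values(child, rules, x, mins)
        X = Xj | {m}
        r = {}
        for S, val in rj.items():
            r[S | {m}] = val + x[m] - cut_value(rules, m, S)   # line 6
            r[S] = val          # lines 7-8 reduce to this without negative literals
        r[frozenset([m])] = excess(rules, x, frozenset([m]))   # line 10
    elif kind == 'forget':
        m, child = node[1], node[2]
        Xj, rj = r_values(child, rules, x, mins)
        X = Xj - {m}
        r = {S: min(rj[S], rj[S | {m}]) for S in nonempty_subsets(X)}  # line 14
    else:                       # 'join'
        X, rj = r_values(node[1], rules, x, mins)
        _, rk = r_values(node[2], rules, x, mins)
        r = {S: rj[S] + rk[S] - excess(rules, x, S) for S in rj}       # line 18
    if r:
        mins.append(min(r.values()))
    return X, r

def in_core(tree, rules, x):
    mins = []
    r_values(tree, rules, x, mins)
    return min(mins) >= 0

# Two agents sharing a single rule {1 ∧ 2} -> 5: the payoff (2, 2) leaves
# e({1, 2}) = 4 - 5 < 0, so it is not in the core.
rules = [(frozenset({1, 2}), 5.0)]
tree = ('introduce', 2, ('leaf', 1))
print(in_core(tree, rules, {1: 2.0, 2: 2.0}))   # False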
5.4 Algorithm for Core Non-emptiness
We can extend the algorithm for Core-Membership into
an algorithm for Core-Non-Emptiness. As described in
section 2, whether the core is empty can be checked using
the optimization program based on the balancedness
condition (3). Unfortunately, that program has an exponential
number of variables. On the other hand, the dual of the
program has only n variables, and can be written as follows:
minimize_{x ∈ R^n}   Σ_{i=1}^{n} xi
subject to   x(S) ≥ v(S)   ∀S ⊆ N          (7)
By strong duality, the optimal value of (7) is equal to the optimal value of (4), the primal program described in section 2. Therefore, by the Bondareva-Shapley theorem, if the optimal value of (7) is greater than v(N), the core is empty.
We can solve the dual program using the ellipsoid method
with Core-Membership as a separation oracle, i.e., a
procedure for returning a constraint that is violated. Note that
a simple extension to the Core-Membership algorithm will
allow us to keep track of the set T for which e(T) < 0
during the r-values computation, and hence we can return the
inequality about T as the constraint violated. Therefore,
Core-Non-Emptiness can run in time polynomial in the
running time of Core-Membership, which in turn runs in
time exponential only in the treewidth of the graph. Note
that when the core is not empty, this program will return
an outcome in the core.
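For small games, the dual (7) can also be solved directly by enumerating its constraints, as in this sketch of ours (it assumes scipy is available and sidesteps the ellipsoid method, so it is exponential in the number of agents):

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Solve program (7): minimize sum(x) subject to x(S) >= v(S) for all S.
# The optimum is always at least v(N); the core is non-empty iff it equals v(N).
def core_is_nonempty(agents, v):
    agents = list(agents)
    index = {a: i for i, a in enumerate(agents)}
    A_ub, b_ub = [], []
    for size in range(1, len(agents) + 1):
        for S in combinations(agents, size):
            row = np.zeros(len(agents))
            row[[index[a] for a in S]] = -1.0   # x(S) >= v(S)  <=>  -x(S) <= -v(S)
            A_ub.append(row)
            b_ub.append(-v(frozenset(S)))
    res = linprog(c=np.ones(len(agents)), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), bounds=[(None, None)] * len(agents))
    return res.fun <= v(frozenset(agents)) + 1e-9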
6. CONCLUDING REMARKS
We have developed a fully expressive representation scheme for coalitional games whose size depends on the complexity of the interactions among the agents. Our focus
on general representation is in contrast to the approach
taken in [3, 4]. We have also developed an efficient
algorithm for the computation of the Shapley values for this
representation. While Core-Membership for MC-nets is
coNP-complete, we have developed an algorithm for
Core-Membership that runs in time exponential only in the treewidth
of the agent graph. We have also extended the algorithm
to solve Core-Non-Emptiness. Other than the algorithm
for Core-Non-Emptiness in [4] under the restriction of
non-negative edge weights, and that in [2] for
superadditive games when the value of the grand coalition is given,
we are not aware of any explicit description of algorithms
for core-related problems in the literature.
The work in this paper is related to a number of areas
in computer science, especially in artificial intelligence. For
example, the graphical interpretation of MC-nets is closely
related to Markov random fields (MRFs) of the Bayes nets
community. They both address the issue of conciseness of representation by using the combinatorial structure of weighted hypergraphs. In fact, Kearns et al. first applied this idea to game theory by introducing a representation scheme derived from Bayes nets to represent non-cooperative games [6]. The representational issues faced in coalitional
games are closely related to the problem of expressing
valuations in combinatorial auctions [5, 10]. The OR-bid
language, for example, is strongly related to superadditivity.
The question of the representation power of different
patterns is also related to Boolean expression complexity [12].
We believe that with a better understanding of the
relationships among these related areas, we may be able to develop
more efficient representations and algorithms for coalitional
games.
Finally, we would like to end with some ideas for
extending the work in this paper. One direction to increase the
conciseness of MC-nets is to allow the definition of
equivalent classes of agents, similar to the idea of extending Bayes
nets to probabilistic relational models. The concept of
symmetry is prevalent in games, and the use of classes of agents
will allow us to capture symmetry naturally and concisely.
This will also address the problem of unpleasingly asymmetric representations of symmetric games in our representation.
Along the line of exploiting symmetry, as the agents within
the same class are symmetric with respect to each other, we
can extend the idea above by allowing functional description
of marginal contributions. More concretely, we can specify
the value of a rule as dependent on the number of agents
of each relevant class. The use of functions will allow
concise description of marginal diminishing returns (MDRs).
Without the use of functions, the space needed to describe
MDRs among n agents in MC-nets is O(n). With the use
of functions, the space required can be reduced to O(1).
Another idea to extend MC-nets is to augment the
semantics to allow constructs that specify certain rules cannot be
applied simultaneously. This is useful in situations where a
certain agent represents a type of exhaustible resource, and
therefore rules that depend on the presence of the agent
should not apply simultaneously. For example, if agent i in
the system stands for coal, we can either use it as fuel for
a power plant or as input to a steel mill for making steel,
but not for both at the same time. Currently, to represent
such situations, we have to specify rules to cancel out the
effects of applications of different rules. The augmented
semantics can simplify the representation by specifying when
rules cannot be applied together.
7. ACKNOWLEDGMENT
The authors would like to thank Chris Luhrs, Bob
McGrew, Eugene Nudelman, and Qixiang Sun for fruitful
discussions, and the anonymous reviewers for their helpful
comments on the paper.
8. REFERENCES
[1] H. L. Bodlaender. Treewidth: Algorithmic techniques
and results. In Proc. 22nd Symp. on Mathematical
Foundations of Computer Science, pages 19-36.
Springer-Verlag LNCS 1295, 1997.
[2] V. Conitzer and T. Sandholm. Complexity of
determining nonemptiness of the core. In Proc. 18th
Int. Joint Conf. on Artificial Intelligence, pages
613-618, 2003.
[3] V. Conitzer and T. Sandholm. Computing Shapley
values, manipulating value division schemes, and
checking core membership in multi-issue domains. In
Proc. 19th Nat. Conf. on Artificial Intelligence, pages
219-225, 2004.
[4] X. Deng and C. H. Papadimitriou. On the complexity
of cooperative solution concepts. Math. Oper. Res.,
19:257-266, May 1994.
[5] Y. Fujishima, K. Leyton-Brown, and Y. Shoham.
Taming the computational complexity of
combinatorial auctions: Optimal and approximate
approaches. In Proc. 16th Int. Joint Conf. on
Artificial Intelligence, pages 548-553, 1999.
[6] M. Kearns, M. L. Littman, and S. Singh. Graphical
models for game theory. In Proc. 17th Conf. on
Uncertainty in Artificial Intelligence, pages 253-260,
2001.
[7] J. Kleinberg, C. H. Papadimitriou, and P. Raghavan.
On the value of private information. In Proc. 8th
Conf. on Theoretical Aspects of Rationality and
Knowledge, pages 249-257, 2001.
[8] C. Li and K. Sycara. Algorithms for combinatorial
coalition formation and payoff division in an electronic
marketplace. Technical report, Robotics Institute,
Carnegie Mellon University, November 2001.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green.
Microeconomic Theory. Oxford University Press, New
York, 1995.
[10] N. Nisan. Bidding and allocation in combinatorial
auctions. In Proc. 2nd ACM Conf. on Electronic
Commerce, pages 1-12, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game
Theory. The MIT Press, Cambridge, Massachusetts,
1994.
[12] I. Wegener. The Complexity of Boolean Functions.
John Wiley & Sons, New York, October 1987.
APPENDIX
We now formally show the correctness of the r-value
computation in Algorithm 1 for introduce nodes and join nodes.
Lemma 1. The procedure for computing the r-values of
introduce nodes in Algorithm 1 is correct.
Proof. Let m be the agent newly introduced at node i, and let j be
the child of i. Let U denote the set of agents in the subtree rooted at i.
By the running intersection property, all interactions (the
hyperedges) between m and U must be in node i. For all
S ⊆ Xi with m ∈ S, let R denote ((U \ Xi) ∪ S) and Q denote
R \ {m}.
ri(S) = r(S, R)
      = min_{T : S ⊆ T ⊆ R} e(T)
      = min_{T : S ⊆ T ⊆ R} [x(T) − v(T)]
      = min_{T : S ⊆ T ⊆ R} [x(T \ {m}) + xm − v(T \ {m}) − v(cut)]
      = min_{T′ : S \ {m} ⊆ T′ ⊆ Q} e(T′) + xm − v(cut)
      = rj(S) + xm − v(cut)
The argument for sets S ⊆ Xi with m ∉ S is symmetric, except
that xm does not contribute to the reserve, due to the absence of
m.
Lemma 2. The procedure for computing the r-values of
join nodes in Algorithm 1 is correct.
Proof. Consider any set S ⊆ Xi. Let Uj denote the set of agents in the
subtree rooted at the left child, Rj denote ((Uj \ Xj) ∪ S),
and Qj denote (Uj \ Xj). Let Uk, Rk, and Qk be defined
analogously for the right child. Let R denote ((U \ Xi) ∪ S).
ri(S) = r(S, R)
      = min_{T : S ⊆ T ⊆ R} [x(T) − v(T)]
      = min_{T : S ⊆ T ⊆ R} [x(S) + x(T ∩ Qj) + x(T ∩ Qk) − v(S) − v(cut(S, T ∩ Qj)) − v(cut(S, T ∩ Qk))]
      = min_{T : S ⊆ T ⊆ R} [x(T ∩ Qj) − v(cut(S, T ∩ Qj))] + min_{T : S ⊆ T ⊆ R} [x(T ∩ Qk) − v(cut(S, T ∩ Qk))] + [x(S) − v(S)]   (*)
      = min_{T : S ⊆ T ⊆ R} [x(T ∩ Qj) + x(S) − v(cut(S, T ∩ Qj)) − v(S)] + min_{T : S ⊆ T ⊆ R} [x(T ∩ Qk) + x(S) − v(cut(S, T ∩ Qk)) − v(S)] − [x(S) − v(S)]
      = min_{T : S ⊆ T ⊆ R} e(T ∩ Rj) + min_{T : S ⊆ T ⊆ R} e(T ∩ Rk) − e(S)
      = min_{T′ : S ⊆ T′ ⊆ Rj} e(T′) + min_{T′ : S ⊆ T′ ⊆ Rk} e(T′) − e(S)
      = rj(S) + rk(S) − e(S)
where (*) is true as T ∩ Qj and T ∩ Qk are disjoint due
to the running intersection property of tree decomposition,
and hence the minimum of the sum can be decomposed into
the sum of the minima.
| marginal contribution;mc-net;coalitional game theory;coalitional game;treewidth;coremembership;agent;markov random field;shapley value;representation;interaction;core;marginal diminishing return;compact representation scheme |
train_J-58 | Towards Truthful Mechanisms for Binary Demand Games: A General Framework | The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the game or not. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through or/and combinations, round-based combinations, and some more complex combinations of the outputs from subgames. | 1. INTRODUCTION
In recent years, with the rapid development of the Internet, many
protocols and algorithms have been proposed to make the Internet
more efficient and reliable. The Internet is a complex distributed
system where a multitude of heterogeneous agents cooperate to
achieve some common goals, and the existing protocols and
algorithms often assume that all agents will follow the prescribed rules
without deviation. However, in some settings where the agents are
selfish instead of altruistic, it is more reasonable to assume that these
agents are rational, i.e., that they maximize their own profits, in the
sense of neoclassical economics, and new models are needed to cope with the
selfish behavior of such agents.
Towards this end, Nisan and Ronen [14] proposed the framework
of algorithmic mechanism design and applied VCG mechanisms to
some fundamental problems in computer science, including
shortest paths, minimum spanning trees, and scheduling on unrelated
machines. The VCG mechanisms [5, 11, 21] are applicable to
mechanism design problems whose outputs optimize the
utilitarian objective function, which is simply the sum of all agents'
valuations. Unfortunately, some objective functions are not utilitarian;
even for those problems with a utilitarian objective function,
sometimes it is impossible to find the optimal output in polynomial time
unless P=NP. Some mechanisms other than VCG mechanism are
needed to address these issues.
Archer and Tardos [2] studied a scheduling problem where it
is NP-Hard to find the optimal output. They pointed out that a
certain monotonicity property of the output work load is a
necessary and sufficient condition for the existence of a truthful
mechanism for their scheduling problem. Auletta et al. [3] studied a
similar scheduling problem. They provided a family of
deterministic truthful (2 + ε)-approximation mechanisms for any fixed
number of machines and several (1 + ε)-truthful mechanisms for some
NP-hard restrictions of their scheduling problem. Lehmann et al.
[12] studied the single-minded combinatorial auction and gave a
√m-approximation truthful mechanism, where m is the number of
goods. They also pointed out that a certain monotonicity in the
allocation rule can lead to a truthful mechanism. The work of Mu'alem
and Nisan [13] is the closest in spirit to our work. They
characterized all truthful mechanisms based on a certain monotonicity
property in a single-minded auction setting. They also showed how to
use MAX and IF-THEN-ELSE to combine outputs from
subproblems. As shown in this paper, the MAX and IF-THEN-ELSE
combinations are special cases of the composition-based techniques
that we present in this paper for computing the payments in
polynomial time under mild assumptions.
More generally, we study how to design truthful mechanisms for
binary demand games, where each agent is either
selected or not selected. We also assume that the valuations
of agents are uncorrelated, i.e., the valuation of an agent only
depends on its own allocation and type. Recall that a mechanism
M = (O, P) consists of two parts, an allocation rule O and a
payment scheme P. Previously, it is often assumed that there is an
objective function g and an allocation rule O, that either optimizes
g exactly or approximately. In contrast to the VCG mechanisms,
we do not require that the allocation should optimize the objective
function. In fact, we do not even require the existence of an
objective function. Given any allocation rule O for a binary demand
game, we show that a truthful mechanism M = (O, P) exists
for the game if and only if O satisfies a certain monotonicity
property. The monotonicity property only guarantees the existence of a
payment scheme P such that (O, P) is truthful. We complement
this existence theorem with a general framework to design such a
payment scheme P. Furthermore, we present general techniques
to compute the payment when the output is a composition of the
outputs of subgames through the operators or and and; through
round-based combinations; or through intermediate results, which
may be themselves computed from other subproblems.
The remainder of the paper is organized as follows. In Section
2, we discuss preliminaries and previous works, define binary
demand games and discuss the basic assumptions about binary
demand games. In Section 3, we show that O satisfying a certain
monotonicity property is a necessary and sufficient condition for
the existence of a truthful mechanism M = (O, P). A framework
is then proposed in Section 4 to compute the payment P in
polynomial time for several types of allocation rules O. In Section 5, we
provide several examples to demonstrate the effectiveness of our
general framework. We conclude our paper in Section 6 with some
possible future directions.
2. PRELIMINARIES
2.1 Mechanism Design
As usually done in the literatures about the designing of
algorithms or protocols with inputs from individual agents, we adopt
the assumption in neoclassical economics that all agents are rational,
i.e., they respond to well-defined incentives and will deviate from
the protocol only if the deviation improves their gain.
A standard model for mechanism design is as follows. There
are n agents 1, . . . , n and each agent i has some private
information ti, called its type, only known to itself. For example, the type
ti can be the cost that agent i incurs for forwarding a packet in
a network or can be a payment that the agent is willing to pay
for a good in an auction. The agents" types define the type
vector t = (t1, t2, . . . , tn). Each agent i has a set of strategies Ai
from which it can choose. For each input vector a = (a1, . . . , an)
where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P)
computes an output o = O(a) and a payment vector p(a) =
(p1(a), . . . , pn(a)). Here the payment pi(·) is the money given to
agent i and depends on the strategies used by the agents. A game
is defined as G = (S, M), where S is the setting for the game
G. Here, S consists of the parameters of the game that are set before
the game starts and do not depend on the players' strategies. For
example, in a unicast routing game [14], the setting consists of the
topology of the network, the source node and the destination node.
Throughout this paper, unless explicitly mentioned otherwise, the
setting S of the game is fixed and we are only interested in how to
design P for a given allocation rule O.
A valuation function v(ti, o) assigns a monetary amount to agent
i for each possible output o. Everything about a game S, M ,
including the setting S, the allocation rule O and the payment scheme
P, is public knowledge except the agent i"s actual type ti, which
is private information to agent i. Let ui(ti, o) denote the utility of
agent i at the outcome of the game o, given its preferences ti. Here,
following a common assumption in the literature, we assume the
utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + Pi(a).
Let a|i a′i = (a1, · · · , ai−1, a′i, ai+1, · · · , an), i.e., each agent
j ≠ i plays the action aj while agent i plays a′i. Let a−i =
(a1, · · · , ai−1, ai+1, · · · , an) denote the actions of all agents
except i. Sometimes, we write (a−i, bi) as a|i bi. An action ai is
called dominant for i if it (weakly) maximizes the utility of i for all
possible strategies b−i of the other agents, i.e., ui(ti, O(b−i, ai)) ≥
ui(ti, O(b−i, a′i)) for all a′i ≠ ai and all b−i.
A direct-revelation mechanism is a mechanism in which the only
actions available to each agent are to report its private type either
truthfully or falsely to the mechanism. An incentive compatible
(IC) mechanism is a direct-revelation mechanism in which if an
agent reports its type ti truthfully, then it will maximize its
utility. Then, in a direct-revelation mechanism satisfying IC, the
payment scheme should satisfy the property that, for each agent i,
v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i
ti)) + pi(t|i
ti). Another
common requirement in the literature for mechanism design is so called
individual rationality or voluntary participation: the agent"s utility
of participating in the output of the mechanism is not less than the
utility of the agent of not participating. A direct-revelation
mechanism is strategproof if it satisfies both IC and IR properties.
Arguably the most important positive result in mechanism
design is the generalized Vickrey-Clarke-Groves (VCG) mechanism
by Vickrey [21], Clarke [5], and Groves [11]. The VCG
mechanism applies to (affine) maximization problems where the
objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of
all agents' valuations), and the set of possible outputs is assumed
to be finite. A direct revelation mechanism M = (O(t), P(t))
belongs to the VCG family if (1) the allocation O(t) maximizes
Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i v(tj, O(t)) +
hi(t−i), where hi() is an arbitrary function of t−i. Under mild
assumptions, VCG mechanisms are the only truthful implementations
for utilitarian problems [10].
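To make the payment formula concrete, here is a minimal Python sketch (ours, not from any reference; all names are illustrative) of a VCG mechanism with the Clarke pivot choice hi(t−i) = −max_o Σj≠i v(tj, o), for a toy setting whose outcome space is small enough to enumerate:

def vcg(valuations, outcomes):
    # valuations[i][o] is agent i's value for outcome o; the outcome
    # space is finite, so welfare maximization is done by enumeration.
    def welfare(o, exclude=None):
        return sum(v[o] for j, v in enumerate(valuations) if j != exclude)
    best = max(outcomes, key=welfare)
    payments = []
    for i in range(len(valuations)):
        others = welfare(best, exclude=i)   # sum of v_j, j != i, at best
        pivot = max(welfare(o, exclude=i) for o in outcomes)  # h_i(t_{-i})
        payments.append(others - pivot)     # VCG payment to agent i
    return best, payments

# toy run: outcome "b" wins; agents 2 and 3 each end up paying 1 (-1 transfer)
print(vcg([{"a": 3, "b": 0}, {"a": 0, "b": 2}, {"a": 0, "b": 2}], ["a", "b"]))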
The allocation rule of a VCG mechanism is required to
maximize the objective function in the range of the allocation function.
This makes the mechanism computationally intractable in many
cases. Furthermore, replacing an optimal algorithm for
computing the output with an approximation algorithm usually leads to
untruthful mechanisms if a VCG payment scheme is used. In this
paper, we study how to design a truthful mechanism that does not
optimize a utilitarian objective function.
2.2 Binary Demand Games
A binary demand game is a game G = (S, M), where M =
(O, P) and the range of O is {0, 1}^n. In other words, the
output is an n-tuple vector O(t) = (O1(t), O2(t), . . . , On(t)), where
Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is
not) selected. Examples of binary demand games include: unicast
[14, 22, 9] and multicast [23, 24, 8] (generally subgraph
construction by selecting some links/nodes to satisfy some property),
facility location [7], and a certain auction [12, 2, 13].
Hereafter, we make the following further assumptions.
1. The valuations of the agents are not correlated, i.e., v(ti, o) is
a function of ti and oi only, and is denoted as v(ti, oi).
2. The valuation v(ti, 0) is a publicly known value and is
normalized to 0. This assumption is needed to guarantee the IR
property.
Thus, throughout this paper, we only consider direct-revelation
mechanisms in which every agent needs to reveal only its valuation
vi = v(ti, 1).
Notice that in applications where agents provide service and
receive payment, e.g., unicast and job scheduling, the valuation
vi of an agent i is usually negative. For convenience of
presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs
agent i ci to provide the service. Throughout this paper, we will
use ci instead of vi in our analysis. All our results also apply to
the case where the agents receive the service rather than provide it,
by setting ci to a negative value, as in auctions.
In a binary demand game, if we want to optimize an
objective function g(o, t), then we call it a binary optimization demand
game. The main differences between the binary demand games and
those problems that can be solved by VCG mechanisms are:
1. The objective function is utilitarian (or affine maximization
problem) for a problem solvable by VCG while there is no
restriction on the objective function for a binary demand game.
2. The allocation rule O studied here does not necessarily
optimize an objective function, while a VCG mechanism only
uses the output that optimizes the objective function. We
even do not require the existence of an objective function.
3. We assume that the agents" valuations are not correlated in
a binary demand game, while the agents" valuations may be
correlated in a VCG mechanism.
In this paper, we assume for technical convenience that the
objective function g(o, c), if it exists, is continuous with respect to the
cost ci, but most of our results are directly applicable to the discrete
case without any modification.
2.3 Previous Work
Lehmann et al. [12] studied how to design an efficient truthful
mechanism for the single-minded combinatorial auction. In a
single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants
to buy a subset Si ⊆ S with private price ci. A single-minded
bidder i declares a bid bi = ⟨Si, ai⟩ with Si ⊆ S and ai ∈ R+.
In [12], it is assumed that the set of goods allocated to an agent
i is either Si or ∅, which is known as exactness. Lehmann et al.
gave a greedy round-based allocation algorithm, based on the rank
ai/|Si|^{1/2}, that has an approximation ratio √m, where m is the
number of goods in S. Based on this approximation algorithm, they
gave a truthful payment scheme. For an allocation rule satisfying
(1) exactness: the set of goods allocated to an agent i is either Si or
∅; (2) monotonicity: proposing more money for fewer goods
cannot cause a bidder to lose its bid, they proposed a truthful payment
scheme as follows: (1) charge a winning bidder a certain amount
that does not depend on its own bidding; (2) charge a losing
bidder 0. Notice that the assumption of exactness reveals that the
single-minded auction is indeed a binary demand game. Their payment
scheme inspired our payment scheme for binary demand games.
In [1], Archer et al. studied the combinatorial auctions where
multiple copies of many different items are on sale, and each
bidder i desires only one subset Si. They devised a randomized
rounding method that is incentive compatible and gave a truthful
mechanism for combinatorial auctions with single parameter agents that
approximately maximizes the social value of the auction. As they
pointed out, their method is strongly truthful in sense that it is
truthful with high probability 1 − , where is an error probability. On
the contrary, in this paper, we study how to design a deterministic
mechanism that is truthful based on some given allocation rules.
In [2], Archer and Tardos showed how to design truthful
mechanisms for several combinatorial problems where each agent"s
private information is naturally expressed by a single positive real
number, which will always be the cost incurred per unit load. The
mechanism's output could be an arbitrary real number, but the
valuation is a quasi-linear function t · w, where t is the private
per-unit cost and w is the work load. Archer and Tardos characterized
all truthful mechanisms as those having decreasing work curves
w, and showed that the truthful payment should be Pi(bi) = Pi(0) +
bi·wi(bi) − ∫_0^{bi} wi(u) du. Using this model, Archer and Tardos
designed truthful mechanisms for several scheduling-related
problems, including minimizing the makespan, maximizing flow, and
minimizing the weighted sum of completion times. Notice that
when the load is w ∈ {0, 1}, the problem is indeed a binary
demand game. If we apply their characterization of truthful
mechanisms, their decreasing work curves w imply exactly the
monotonicity property of the output. But notice that their proof
relies heavily on the assumption that the output is a continuous
function of the cost, so their conclusion cannot be applied directly to
binary demand games.
The paper of Ahuva Mu'alem and Noam Nisan [13] is the closest
in spirit to our work. They clearly stated that they discussed only a
limited class of bidders, the single-minded bidders introduced
in [12]. They proved that all truthful mechanisms must have
a monotone output, and their payment scheme is based on the
cut value. With a simple generalization, we obtain our conclusion for
general binary demand games. They proposed several combination
methods, including the MAX and IF-THEN-ELSE constructions, to perform
partial searches. All of their methods require the welfare function
associated with the output to satisfy a bitonic property.
Distinction between our contributions and previous results:
It has been shown in [2, 6, 12, 13] that for the single-minded
combinatorial auction, there exists a payment scheme which
results in a truthful mechanism if the allocation rule satisfies a certain
monotonicity property. Theorem 4 also depends on the
monotonicity property, but it is applicable to a broader setting than the
single-minded combinatorial auction. In addition, the binary demand
game studied here is different from traditional packing IPs: we
only require that the allocation to each agent is binary and the
allocation rule satisfies a certain monotonicity property; we do not put
any restrictions on the objective function. Furthermore, the main
focus of this paper is to design some general techniques to find the
truthful payment scheme for a given allocation rule O satisfying a
certain monotonicity property.
3. GENERAL APPROACHES
3.1 Properties of Strategyproof Mechanisms
We discuss several properties that mechanisms need to satisfy in
order to be truthful.
THEOREM 1. If a mechanism M = (O, P) satisfies IC, then
∀i, if Oi(t|i ti1) = Oi(t|i ti2), then pi(t|i ti1) = pi(t|i ti2).
COROLLARY 2. For any strategy-proof mechanism for a binary
demand game G with setting S, if we fix the cost c−i of all agents
other than i, the payment to agent i is a constant pi^1 if Oi(c) = 1,
and it is another constant pi^0 if Oi(c) = 0.
THEOREM 3. Fix the setting S for a binary demand game.
If a mechanism M = (O, P) satisfies IC, then the mechanism M′ =
(O, P′) with the same output method O and p′i(c) = pi(c) −
δi(c−i), for any function δi(c−i), also satisfies IC.
The proofs of the above theorems are straightforward and thus
omitted due to space limitations. This theorem implies that for the binary
demand games we can always normalize the payment to an agent
i such that the payment to the agent is 0 when it is not selected.
Hereafter, we will only consider normalized payment schemes.
3.2 Existence of Strategyproof Mechanisms
Notice that, given the setting S, a mechanism design problem is
composed of two parts: the allocation rule O and a payment scheme P.
In this paper, given an allocation rule O we focus our attention
on how to design a truthful payment scheme based on O. Given
an allocation rule O for a binary demand game, we first present
a sufficient and necessary condition for the existence of a truthful
payment scheme P.
DEFINITION 1 (MONOTONE NON-INCREASING PROPERTY (MP)).
An output method O is said to satisfy the monotone non-increasing
property if for every agent i and two of its possible costs ci1 < ci2,
Oi(c|i ci2) ≤ Oi(c|i ci1).
This definition is not restricted only to binary demand games.
For binary demand games, this definition implies that if Oi(c|i ci2) = 1
then Oi(c|i ci1) = 1.
THEOREM 4. Fix the setting S and c−i in a binary demand game
G with allocation rule O. The following three conditions are
equivalent:
1. There exists a value κi(O, c−i) (which we will call a cut value)
such that Oi(c) = 1 if ci < κi(O, c−i) and Oi(c) = 0 if
ci > κi(O, c−i). When ci = κi(O, c−i), Oi(c) can be
either 0 or 1, depending on the tie-breaking of the allocation rule
O. Hereafter, we will not consider the tie-breaking scenario
in our proofs.
2. The allocation rule O satisfies MP.
3. There exists a truthful payment scheme P for this binary
demand game.
PROOF. The proof that Condition 2 implies Condition 1 is
straightforward and is omitted here.
We then show Condition 3 implies Condition 2. The proof of
this is similar to a proof in [13]. To prove this direction, we assume
there exist an agent i and two cost vectors c|i ci1 and c|i ci2,
where ci1 < ci2, Oi(c|i ci2) = 1 and Oi(c|i ci1) = 0. From
Corollary 2, we know that pi(c|i ci1) = pi^0 and pi(c|i ci2) = pi^1.
Now fix c−i; the utility for i when ci = ci1 is ui(ci1) = pi^0.
When agent i lies, reporting its valuation as ci2, its utility is pi^1 − ci1.
Since M = (O, P) is truthful, we have pi^0 > pi^1 − ci1.
Now consider the scenario when the actual valuation of agent i
is ci = ci2. Its utility is pi^1 − ci2 when it reports its true valuation.
Similarly, if it lies, reporting its valuation as ci1, its utility is pi^0.
Since M = (O, P) is truthful, we have pi^0 < pi^1 − ci2.
Consequently, we have pi^1 − ci2 > pi^0 > pi^1 − ci1. This inequality
implies that ci1 > ci2, which is a contradiction.
We then show Condition 1 implies Condition 3. We prove this
by constructing a payment scheme and proving that this payment
scheme is truthful. The payment scheme is: If Oi(c) = 1, then
agent i gets payment pi(c) = κi(O, c−i); else it gets payment
pi(c) = 0.
From Condition 1, if Oi(c) = 1 then ci < κi(O, c−i). Thus,
its utility is κi(O, c−i) − ci > 0, which implies that the payment
scheme satisfies IR. In the following we prove that this payment
scheme also satisfies the IC property. There are two cases here.
Case 1: ci < κi(O, c−i). In this case, when i declares its true
cost ci, its utility is κi(O, c−i) − ci > 0. Now consider the
situation when i declares a cost di ≠ ci. If di < κi(O, c−i), then
i gets the same payment and utility since it is still selected. If
di > κi(O, c−i), then its utility becomes 0 since it is not selected
anymore. Thus, it has no incentive to lie in this case.
Case 2: ci ≥ κi(O, c−i). In this case, when i reveals its true
valuation, its payment is 0 and the utility is 0. Now consider the
situation when i declares a valuation di ≠ ci. If di > κi(O, c−i),
then i gets the same payment and utility since it is still not selected.
If di ≤ κi(O, c−i), then its utility becomes κi(O, c−i) − ci ≤ 0
since it is selected now. Thus, it has no incentive to lie.
The equivalence of the monotonicity property of the allocation
rule O and the existence of a truthful mechanism using O can be
extended to games beyond binary demand games. The details are
omitted here due to space limitations. We now summarize the process to
design a truthful payment scheme for a binary demand game based
on an output method O.
General Framework 1 Truthful mechanism design for a binary
demand game
Stage 1: Check whether the allocation rule O satisfies MP. If it
does not, then there is no payment scheme P such that mechanism
M = (O, P) is truthful. Otherwise, define the payment scheme P
as follows.
Stage 2: Based on the allocation rule O, find the cut value
κi(O, c−i) for agent i such that Oi(c|i di) = 1 when di < κi(O, c−i),
and Oi(c|i di) = 0 when di > κi(O, c−i).
Stage 3: The payment for agent i is 0 if Oi(c) = 0; the payment is
κi(O, c−i) if Oi(c) = 1.
THEOREM 5. The payment defined by our general framework
is the minimum among all truthful payment schemes using O as the output.
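As an illustration of Stages 2 and 3 of the framework, the following Python sketch (ours; all names are hypothetical) treats the allocation rule as a black box and recovers the cut value numerically by bisection. It assumes costs lie in a known interval [0, upper] and only approximates κi up to a tolerance; as noted at the start of Section 4, exact binary search is not available for continuous valuations, so the exact, rule-specific constructions developed below remain necessary:

def cut_value(O_i, c, i, upper, tol=1e-9):
    # O_i(c) -> 0/1 is agent i's allocation; MP makes selection a
    # threshold in c_i, so bisection brackets that threshold.
    lo, hi = 0.0, float(upper)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if O_i(c[:i] + [mid] + c[i + 1:]):
            lo = mid            # still selected: threshold is above mid
        else:
            hi = mid            # not selected: threshold is below mid
    return (lo + hi) / 2

def payment(O_i, c, i, upper):
    # Stage 3: pay the cut value if selected, and 0 otherwise.
    return cut_value(O_i, c, i, upper) if O_i(c) else 0.0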
4. COMPUTING CUT VALUE FUNCTIONS
To find the truthful payment scheme by using General
Framework 1, the most difficult stage seems to be Stage 2. Notice
that binary search does not work in general, since the valuations of
the agents may be continuous. We give some general techniques that
can help with finding the cut value function under certain
circumstances. Our basic approach is as follows. First, we decompose the
allocation rule into several allocation rules. Next find the cut value
function for each of these new allocation rules. Then, we compute
the original cut value function by combining these cut value
functions of the new allocation rules.
4.1 Simple Combinations
In this subsection, we introduce techniques to compute the cut
value function by combining multiple allocation rules with
conjunctions or disjunctions. For simplicity, given an allocation
rule O, we will use κ(O, c) to denote an n-tuple vector
(κ1(O, c−1), κ2(O, c−2), . . . , κn(O, c−n)).
Here, κi(O, c−i) is the cut value for agent i when the allocation
rule is O and the costs c−i of all other agents are fixed.
THEOREM 6. With a fixed setting S of a binary demand game,
assume that there are m allocation rules O^1, O^2, · · · , O^m
satisfying the monotonicity property, and that κ(O^i, c) is the cut value vector
for O^i. Then the allocation rule O(c) = ⋁_{i=1}^m O^i(c) satisfies the
monotonicity property. Moreover, the cut value for O is κ(O, c) =
max_{i=1}^m {κ(O^i, c)}. Here, κ(O, c) = max_{i=1}^m {κ(O^i, c)} means that,
for every j ∈ [1, n], κj(O, c−j) = max_{i=1}^m {κj(O^i, c−j)}, and O(c) =
⋁_{i=1}^m O^i(c) means that, for every j ∈ [1, n], Oj(c) = O^1_j(c) ∨ O^2_j(c) ∨ · · · ∨
O^m_j(c).
PROOF. Assume that Oi(c) = 1 and consider any c′i < ci. Without loss
of generality, we assume that O^k_i(c) = 1 for some k, 1 ≤ k ≤
m. From the assumption that O^k satisfies MP, we obtain that
O^k_i(c|i c′i) = 1. Thus, Oi(c|i c′i) = ⋁_{j=1}^m O^j_i(c|i c′i) = 1. This proves
that O satisfies MP. The correctness of the cut value function
follows directly from Theorem 4.
Many algorithms indeed fall into this category. To demonstrate
the usefulness of Theorem 6, we discuss a concrete example here.
In a network, sometimes we want to deliver a packet to a set of
nodes instead of one. This problem is known as multicast. The
most commonly used structure in multicast routing is so called
shortest path tree (SPT). Consider a network G = (V, E, c), where
V is the set of nodes, and vector c is the actual cost of the nodes
forwarding the data. Assume that the source node is s and the
receivers are Q ⊂ V . For each receiver qi ∈ Q, we compute the
shortest path (least cost path), denoted by LCP(s, qi, d), from the
source s to qi under the reported cost profile d. The union of all
such shortest paths forms the shortest path tree. We then use
General Framework 1 to design the truthful payment scheme P when
the SPT structure is used as the output for multicast, i.e., we
design a mechanism M = (SPT, P). Notice that VCG mechanisms
cannot be applied here since SPT is not an affine maximization.
We define LCP^{(s,qi)} as the allocation rule corresponding to the path
LCP(s, qi, d), i.e., LCP^{(s,qi)}_k(d) = 1 if and only if node vk is on
LCP(s, qi, d). Then the output SPT is defined as ⋁_{qi∈Q} LCP^{(s,qi)}.
In other words, SPTk(d) = 1 if and only if node vk is selected on
some LCP(s, qi, d). The shortest path allocation rule is
utilitarian and satisfies MP. Thus, from Theorem 6, SPT also satisfies
MP, and the cut value function vector for SPT can be calculated as
κ(SPT, c) = max_{qi∈Q} κ(LCP^{(s,qi)}, c), where κ(LCP^{(s,qi)}, c)
is the cut value function vector for the shortest path LCP(s, qi, c).
Consequently, the payment scheme above is truthful and is the
minimum among all truthful payment schemes when the allocation rule
is SPT.
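The following Python sketch (ours; lcp_cost and the relay-cost model, in which endpoints are treated as free, are assumptions of the sketch) makes this concrete: the cut value of a node k for one receiver q is the cost of the cheapest s-q path avoiding k minus the cost of the cheapest s-q path when ck is set to 0, and κk(SPT, c−k) is the maximum of these over all receivers:

import heapq

def lcp_cost(adj, costs, s, t, skip=None, zero=None):
    # Dijkstra where entering a relay node v costs costs[v]; the
    # receiver t (and a node listed as `zero`) is charged nothing.
    # adj: {node: iterable of neighbours} for every node.
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if v == skip:
                continue
            nd = d + (0.0 if v in (t, zero) else costs[v])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def spt_cut_value(adj, costs, s, receivers, k):
    # Theorem 6 for SPT: max over receivers of the per-path cut value.
    return max(lcp_cost(adj, costs, s, q, skip=k)
               - lcp_cost(adj, costs, s, q, zero=k)
               for q in receivers)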
THEOREM 7. Fix the setting S of a binary demand game, and
assume that there are m output methods O^1, O^2, · · · , O^m
satisfying MP, where κ(O^i, c) is the cut value function for
O^i, i = 1, 2, · · · , m. Then the allocation rule O(c) =
⋀_{i=1}^m O^i(c) satisfies MP. Moreover, the cut value function for O
is κ(O, c) = min_{i=1}^m {κ(O^i, c)}.
We show that our simple combinations generalize the
IF-THEN-ELSE function defined in [13]. For an agent i, assume that there
are two allocation rules O^1 and O^2 satisfying MP, and let κi(O^1, c−i)
and κi(O^2, c−i) be the cut value functions for O^1 and O^2, respectively.
Then the IF-THEN-ELSE function Oi(c) is actually Oi(c) = [(ci ≤
κi(O^1, c−i) + δ1(c−i)) ∧ O^2_i(c−i, ci)] ∨ (ci < κi(O^1, c−i) −
δ2(c−i)), where δ1(c−i) and δ2(c−i) are two positive functions. By
applying Theorems 6 and 7, we know that the allocation rule O
satisfies MP, and consequently κi(O, c−i) = max{min(κi(O^1, c−i) +
δ1(c−i), κi(O^2, c−i)), κi(O^1, c−i) − δ2(c−i)}.
4.2 Round-Based Allocations
Some approximation algorithms are round-based, where each
round of an algorithm selects some agents and updates the setting
and the cost profile if necessary. For example, several
approximation algorithms for minimum weight vertex cover [19], maximum
weight independent set, minimum weight set cover [4], and
minimum weight Steiner tree [18] fall into this category.
As an example, we discuss the minimum weighted vertex cover
problem (MWVC) [16, 15] to show how to compute the cut value
for a round-based output. Given a graph G = (V, E), where the
nodes v1, v2, . . . , vn are the agents and each agent vi has a weight
ci, we want to find a node set V′ ⊆ V such that for every edge
(u, v) ∈ E at least one of u and v is in V′. Such a V′ is called a
vertex cover of G. The valuation of a node i is −ci if it is selected;
otherwise its valuation is 0. For a subset of nodes V′ ⊆ V, we
define its weight as c(V′) = Σ_{i∈V′} ci.
We want to find a vertex cover with the minimum weight. Hence,
the objective function to be implemented is utilitarian. To use the
VCG mechanism, we need to find the vertex cover with the
minimum weight, which is NP-hard [16]. Since we are interested in
mechanisms that can be computed in polynomial time, we must
use polynomial-time computable allocation rules. Many algorithms
have been proposed in the literature to approximate the optimal
solution. In this paper, we use a 2-approximation algorithm given
in [16]. For the sake of completeness, we briefly review this
algorithm here. The algorithm is round-based. Each round selects
some vertices and discards some vertices. For each node i, w(i)
is initialized to its weight ci, and when w(i) drops to 0, i is
included in the vertex cover. To make the presentation clear, we say
an edge (i1, j1) is lexicographically smaller than edge (i2, j2) if
(1) min(i1, j1) < min(i2, j2), or (2) min(i1, j1) = min(i2, j2)
and max(i1, j1) < max(i2, j2).
Algorithm 2 Approximate Minimum Weighted Vertex Cover
Input: A node weighted graph G = (V, E, c).
Output: A vertex cover V′.
1: Set V′ = ∅. For each i ∈ V, set w(i) = ci.
2: while V′ is not a vertex cover do
3: Pick an uncovered edge (i, j) with the least lexicographic
order among all uncovered edges.
4: Let m = min(w(i), w(j)).
5: Update w(i) to w(i) − m and w(j) to w(j) − m.
6: If w(i) = 0, add i to V′. If w(j) = 0, add j to V′.
Notice that selecting an edge using the lexicographic order is
crucial to guarantee the monotonicity property. Algorithm 2 outputs
a vertex cover V′ whose weight is within a factor of 2 of the optimum.
For convenience, we use VC(c) to denote the vertex cover
computed by Algorithm 2 when the cost vector of vertices is c. Below
we generalize Algorithm 2 to a more general scenario. Typically, a
round-based output can be characterized as follows (Algorithm 3).
DEFINITION 2. An updating rule U^r is said to be
crossing-independent if, for any agent i not selected in round r, (1) S^{r+1}
and c^{r+1}_{-i} do not depend on c^r_i, and (2) for fixed c^r_{-i}, c^r_{i1} ≤ c^r_{i2} implies
that c^{r+1}_{i1} ≤ c^{r+1}_{i2}.
We have the following theorem about the existence of a truthful
payment using a round based allocation rule A.
THEOREM 8. A round-based output A, within the framework
defined in Algorithm 3, satisfies MP if the output methods O^r satisfy
MP and all updating rules U^r are crossing-independent.
PROOF. Consider an agent i and fixed c−i. We prove that if
agent i is selected with cost ci, then it is also selected with any cost
di < ci. Assume that i is selected in round r with cost ci. If,
under cost di, agent i is selected in a round before r, our claim
holds. Otherwise, consider round r. Clearly, the setting S^r and
the costs of all other agents are the same as if agent i had cost
ci, since i was not selected in the previous rounds, by the
crossing-independence property. Since i is selected in round r with cost ci, i
is also selected in round r with di < ci, because O^r
satisfies MP. This finishes the proof.
Algorithm 3 A General Round-Based Allocation Rule A
1: Set r = 0, c^0 = c, and G^0 = G initially.
2: repeat
3: Compute an output o^r using a deterministic algorithm
O^r: S^r × c^r → {0, 1}^n.
Here O^r, c^r and S^r are the allocation rule, cost vector and game
setting of game G^r, respectively.
Remark: O^r is often a simple greedy algorithm, such as
selecting the agents that minimize some utilitarian function.
For the example of vertex cover, O^r always selects the
lighter endpoint of the lexicographically least
uncovered edge (i, j).
4: Let r = r + 1. Update the game G^{r−1} to obtain a new game
G^r with setting S^r and cost vector c^r according to some rule
U^r: O^{r−1} × (S^{r−1}, c^{r−1}) → (S^r, c^r).
Here we update the cost and setting of the game.
Remark: For the example of vertex cover, the
updating rule decreases the weights of vertices i and j by
min(w(i), w(j)).
5: until a valid output is found
6: Return the union of the sets of players selected in each round as
the final output. For the example of vertex cover, it is the union
of the nodes selected in all rounds.
Algorithm 4 Compute Cut Value for Round-Based Algorithms
Input: A round-based output A, a game G^1 = G, and an updating
function vector U.
Output: The cut value x for agent k.
1: Set r = 1 and ck = ζ. Recall that ζ is a value that
guarantees Ak = 0 when agent k reports the cost ζ.
2: repeat
3: Compute an output o^r using a deterministic algorithm based
on setting S^r, using allocation rule O^r: S^r × c^r → {0, 1}^n.
4: Find the cut value for agent k based on the allocation rule
O^r for costs c^r_{−k}. Let ℓr = κk(O^r, c^r_{−k}) be this cut value.
5: Set r = r + 1 and obtain a new game G^r from G^{r−1} and o^{r−1}
according to the updating rule U^r.
6: Let c^r be the new cost vector for game G^r.
7: until a valid output is found.
8: Let gr(x) be the value of c^r_k when the original cost vector is
c|k x.
9: Find the minimum value x such that gr(x) ≥ ℓr for every round
1 ≤ r ≤ t, where t is the total number of rounds.
10: Output the value x as the cut value.
If the round-based output satisfies the monotonicity property, the
cut value always exists. We then show how to find the cut value
for a selected agent k in Algorithm 4.
The correctness of Algorithm 4 is straightforward. To compute
the cut value, we assume that (1) the cut value ℓr for each round r
can be computed in polynomial time, and (2) we can solve the equation
gr(x) = ℓr to find x in polynomial time when the cost vector c−k
and the value ℓr are given.
Now we consider the vertex cover problem. In each round r,
we select a vertex with the least weight that is incident on the
lexicographically least uncovered edge. This output satisfies MP.
For agent i, we update its cost to c^r_i − c^r_j iff edge (i, j) is selected.
It is easy to verify that this updating rule is crossing-independent; thus
we can apply Algorithm 4 to compute the cut value for the vertex cover
game, as shown in Algorithm 5.
Algorithm 5 Compute Cut Value for MVC.
Input: A node weighted graph G = (V, E, c) and a node k
selected by Algorithm 2.
Output: The cut value κk(V C, c−k).
1: For each i ∈ V , set w(i) = ci.
2: Set w(k) = ∞, pk = 0 and V′ = ∅.
3: while V′ is not a vertex cover do
4: Pick an uncovered edge (i, j) with the least lexicographic
order among all uncovered edges.
5: Set m = min(w(i), w(j)).
6: Update w(i) = w(i) − m and w(j) = w(j) − m.
7: If w(i) = 0, add i to V′; else add j to V′.
8: If i == k or j == k then set pk = pk + m.
9: Output pk as the cut value κk(V C, c−k).
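A compact Python rendering of Algorithms 2 and 5 (our sketch; iterating the edges in sorted order processes uncovered edges in exactly the lexicographic order prescribed above):

def approx_mwvc(edges, cost):
    # Algorithm 2: 2-approximate minimum weighted vertex cover.
    # edges: (i, j) pairs with i < j; cost: node weights.
    w, cover = list(cost), set()
    for i, j in sorted(edges):            # lexicographically least first
        if i in cover or j in cover:
            continue                      # edge already covered
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        if w[i] == 0:
            cover.add(i)
        if w[j] == 0:
            cover.add(j)
    return cover

def mwvc_cut_value(edges, cost, k):
    # Algorithm 5: rerun the greedy with w(k) = infinity, summing the
    # amounts m charged against edges incident to k; the total is the
    # cut value kappa_k(VC, c_{-k}), i.e. the payment to k if selected.
    w, cover, pk = list(cost), set(), 0.0
    w[k] = float("inf")
    for i, j in sorted(edges):
        if i in cover or j in cover:
            continue
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        cover.add(i if w[i] == 0 else j)  # step 7 of Algorithm 5
        if k in (i, j):
            pk += m
    return pk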
4.3 Complex Combinations
In subsection 4.1, we discussed how to find the cut value function
when the output of the binary demand game is a simple
combination of some outputs, whose cut values can be computed through
other means (typically VCG). However, some algorithms cannot
be decomposed in the way described in subsection 4.1.
Next we present a more complex way to combine allocation
rules, and, as may be expected, the way to find the cut value is
also more complicated. Assume that there are n agents 1 ≤ i ≤ n
with cost vector c, and that there are m binary demand games Gi with
objective functions fi(o, c), settings Si and allocation rules ψ^i, where
i = 1, 2, · · · , m. There is another binary demand game with
setting S and allocation rule O whose input is a cost vector d =
(d1, d2, · · · , dm). Let f be the function vector (f1, f2, · · · , fm),
ψ the allocation rule vector (ψ^1, ψ^2, · · · , ψ^m), and S the
setting vector (S1, S2, · · · , Sm). For notational simplicity, we
define Fi(c) = fi(ψ^i(c), c) for each 1 ≤ i ≤ m, and F(c) =
(F1(c), F2(c), · · · , Fm(c)).
Let us see a concrete example of these combinations. Consider
a link weighted graph G = (V, E, c) and a subset of q nodes Q ⊆
V. The Steiner tree problem is to find a set of links with minimum
total cost connecting Q. One way to find an approximation of the
Steiner tree is as follows: (1) build a virtual complete graph H
using Q as its vertices, where the cost of each edge (i, j) is the cost
of LCP(i, j, c) in graph G; (2) build the minimum spanning tree
of H, denoted MST(H); (3) an edge of G is selected iff it is
selected in some LCP(i, j, c) and edge (i, j) of H is selected for
MST(H).
In this game, we define q(q − 1)/2 games Gi,j, for i, j ∈
Q, with objective functions fi,j(o, c) being the minimum cost of
connecting i and j in graph G, setting Si,j being the original graph
G, and allocation rule LCP(i, j, c). The game G corresponds to
the MST game on graph H. The costs of the q(q − 1)/2 pairwise
shortest paths define the input vector d = (d1, d2, · · · , dm) for the
game MST. More details will be given in Section 5.2.
DEFINITION 3. Given an allocation rule O with setting S, an
objective function vector f, an allocation rule vector ψ and a setting
vector (S1, . . . , Sm), we define a compound binary demand game with setting
S and output O ◦ F by (O ◦ F)i(c) = ⋁_{j=1}^m (Oj(F(c)) ∧ ψ^j_i(c)).
The allocation rule of the above definition can be interpreted as
follows. An agent i is selected if and only if there is a j such that
(1) i is selected in ψ^j(c), and (2) the allocation rule O selects
index j under the cost profile F(c). For simplicity, we will use O ◦ F
to denote the output of this compound binary demand game.
Notice that a truthful payment scheme using O ◦ F as the output
exists if and only if O ◦ F satisfies the monotonicity property. To study
when O ◦ F satisfies MP, several definitions are in order.
DEFINITION 4. Function Monotonicity Property (FMP) Given
an objective function g and an allocation rule O, a function H(c) =
g(O(c), c) is said to satisfy the function monotonicity property, if,
given fixed c−i, it satisfies:
1. When Oi(c) = 0, H(c) does not increase over ci.
2. When Oi(c) = 1, H(c) does not decrease over ci.
DEFINITION 5. Strong Monotonicity Property (SMP) An
allocation rule O is said to satisfy the strong monotonicity property
if O satisfies MP and, for any agent i with Oi(c) = 1 and any agent
j ≠ i, Oi(c|j c′j) = 1 whenever c′j ≥ cj or Oj(c|j c′j) = 0.
LEMMA 1. For a given allocation rule O satisfying SMP and
cost vectors c, c′ with c′i = ci, if Oi(c) = 1 and Oi(c′) = 0, then
there must exist j ≠ i such that c′j < cj and Oj(c′) = 1.
From the definition of the strong monotonicity property, we have
Lemma 1 directly. We can now give a sufficient condition for
O ◦ F to satisfy the monotonicity property.
THEOREM 9. If, for every i ∈ [1, m], Fi satisfies FMP and ψ^i satisfies MP,
and the output O satisfies SMP, then O ◦ F satisfies MP.
PROOF. Assuming that for cost vector c we have (O ◦ F)i(c) =
1, we prove that for any cost vector c′ = c|i c′i with c′i < ci,
(O ◦ F)i(c′) = 1. Noticing that (O ◦ F)i(c) = 1, without loss
of generality we assume that Ok(F(c)) = 1 and ψ^k_i(c) = 1 for
some index 1 ≤ k ≤ m.
Now consider the output O under the cost vector F(c′)|k Fk(c).
There are two scenarios, which will be studied one by one as
follows.
One scenario is that index k is not chosen by the output function
O. From Lemma 1, there must exist j ≠ k such that
Fj(c′) < Fj(c) (1)
Oj(F(c′)|k Fk(c)) = 1 (2)
We then prove that agent i is selected in the output ψ^j(c′),
i.e., ψ^j_i(c′) = 1. If it is not, then since ψ^j satisfies MP, we have
ψ^j_i(c) = ψ^j_i(c′) = 0 from c′i < ci. Since Fj satisfies FMP, we
know Fj(c′) ≥ Fj(c), which is a contradiction to inequality
(1). Consequently, we have ψ^j_i(c′) = 1. From Equation (2), the
fact that index k is not selected by the allocation rule O, and the
definition of SMP, we have Oj(F(c′)) = 1. Thus, agent i is selected by
O ◦ F because Oj(F(c′)) = 1 and ψ^j_i(c′) = 1.
The other scenario is that index k is chosen by the output
function O. First, agent i is chosen in ψ^k(c′), since the output ψ^k
satisfies the monotonicity property, c′i < ci and ψ^k_i(c) = 1.
Second, since the function Fk satisfies FMP, we know that Fk(c′) ≤
Fk(c). Remember that the output O satisfies SMP; thus we
obtain Ok(F(c′)) = 1 from the facts that Ok(F(c′)|k Fk(c)) = 1
and Fk(c′) ≤ Fk(c). Consequently, agent i is also selected
in the final output O ◦ F. This finishes our proof.
This theorem implies that there is a cut value for the compound
output O ◦ F. We then discuss how to find the cut value for this
output. Below we give an algorithm to calculate κi(O ◦ F)
when (1) O satisfies SMP, (2) each ψ^j satisfies MP, and (3) for fixed c−i,
Fj(c) is a constant, say hj, when ψ^j_i(c) = 0, and Fj(c) increases
over ci when ψ^j_i(c) = 1. Notice that here hj can easily be computed by
setting ci = ∞, since ψ^j satisfies the monotonicity property. Given
i and fixed c−i, we define (F^i_j)^{−1}(y) as the smallest x such
that Fj(c|i x) = y. For simplicity, we denote (F^i_j)^{−1} as F^{−1}_j if
no confusion is caused when i is a fixed agent. In this paper, we
assume that, given any y, we can find such an x in polynomial time.
Algorithm 6 Find Cut Value for Compound Method O ◦ F
Input: allocation rule O, objective function vector F and inverse
function vector F^{−1} = {F^{−1}_1, · · · , F^{−1}_m}, allocation rule vector
ψ, and fixed c−i.
Output: Cut value for agent i based on O ◦ F.
1: for 1 ≤ j ≤ m do
2: Compute the output ψ^j(c).
3: Compute hj = Fj(c|i ∞).
4: Use h = (h1, h2, · · · , hm) as the input for the output
function O. Denote by τj = κj(O, h−j) the cut value function of
output O based on input h.
5: for 1 ≤ j ≤ m do
6: Set κi,j = F^{−1}_j(min{τj, hj}).
7: The cut value for i is κi(O ◦ F, c−i) = max_{j=1}^m κi,j.
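In code, Algorithm 6 is a thin wrapper around the subgame machinery. The Python sketch below is ours, with placeholder callables: F[j] realizes Fj, F_inv[j] realizes (F^i_j)^{−1} for the fixed agent i, and O_cut(j, h) returns τj = κj(O, h−j):

def compound_cut_value(F, F_inv, O_cut, c, i):
    INF = float("inf")
    c_inf = c[:i] + [INF] + c[i + 1:]
    h = [Fj(c_inf) for Fj in F]                 # h_j = F_j(c|i infinity)
    kappas = [F_inv[j](min(O_cut(j, h), h[j]))  # kappa_{i,j}, step 6
              for j in range(len(F))]
    return max(kappas)                          # kappa_i(O o F, c_{-i})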
THEOREM 10. Algorithm 6 computes the correct cut value for
agent i based on the allocation rule O ◦ F.
PROOF. In order to prove the correctness of the cut value
calculated by Algorithm 6, we prove the following two claims. For
convenience, we write κi for κi(O ◦ F, c−i) if no confusion is caused.
First, if di < κi then (O ◦ F)i(c|i di) = 1. Without loss of
generality, we assume that κi = κi,j for some j. Notice that di < κi,j;
from the definition of κi,j = F^{−1}_j(min{τj, hj}) we have (1)
ψ^j_i(c|i di) = 1, and (2) Fj(c|i di) < τj, due to the fact that Fj is a
non-decreasing function of the cost of agent i when i is selected. Thus, from
the monotonicity property of O and the fact that τj is the cut value for output
O, we have
Oj(h|j Fj(c|i di)) = 1. (3)
If Oj(F(c|i di)) = 1 then (O ◦ F)i(c|i di) = 1. Otherwise, since
O satisfies SMP, Lemma 1 and Equation (3) imply that there exists
at least one index k such that Ok(F(c|i di)) = 1 and Fk(c|i di) <
hk. Note that Fk(c|i di) < hk implies that i is selected in ψ^k(c|i di),
since hk = Fk(c|i ∞). In other words, agent i is selected in O ◦ F.
Second, if di ≥ κi(O ◦ F, c−i) then (O ◦ F)i(c|i di) = 0.
Assume, for the sake of contradiction, that (O ◦ F)i(c|i di) = 1. Then
there exists an index 1 ≤ j ≤ m such that Oj(F(c|i di)) = 1 and
ψ^j_i(c|i di) = 1. Remember that hk ≥ Fk(c|i di) for any k. Thus,
from the fact that O satisfies SMP, when changing the cost vector
from F(c|i di) to h|j Fj(c|i di), we still have Oj(h|j Fj(c|i di)) =
1. This implies that
Fj(c|i di) < τj.
Combining the above inequality with the fact that Fj(c|i di) <
hj, we have Fj(c|i di) < min{hj, τj}. This implies
di < F^{−1}_j(min{hj, τj}) = κi,j ≤ κi(O ◦ F, c−i),
which is a contradiction. This finishes our proof.
In most applications, the allocation rule ψ^j implements the
objective function fj and fj is utilitarian; thus we can compute
the inverse F^{−1}_j efficiently. Another issue is that the
conditions under which we can apply Algorithm 6 may seem restrictive.
However, many games in practice satisfy these properties, and here we
show how to derive the MAX combination of [13]. Assume A1
and A2 are two allocation rules for the single-minded combinatorial
auction; the combination MAX(A1, A2) then returns the
allocation with the larger welfare. If the algorithms A1 and A2 satisfy MP and
FMP, the operation max(x, y), which returns the larger of
x and y, satisfies SMP. From Theorem 9 we obtain that the
combination MAX(A1, A2) also satisfies MP. Further, the cut value of the
MAX combination can be found by Algorithm 6. As we will show
in Section 5, this complex combination applies to some more
complicated problems.
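For instance, reusing the compound_cut_value sketch above, MAX(A1, A2) has a particularly simple O_cut: with the cost orientation used throughout this paper (values can be encoded as negative costs, so the larger welfare corresponds to the smaller Fj), the cut value of index j against input h is just the competing component's value h_{1−j}. This reading is our own sketch, not a construction taken from [13]:

def max_combination_cut(F, F_inv, c, i):
    # tau_j = h_{1-j}: index j stays selected while it beats the other.
    return compound_cut_value(F, F_inv, lambda j, h: h[1 - j], c, i)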
5. CONCRETE EXAMPLES
5.1 Set Cover
In the set cover problem, there is a set U of m elements to be
covered, and each agent 1 ≤ i ≤ n can cover a subset of
elements Si with a cost ci. Let S = {S1, S2, · · · , Sn} and c =
(c1, c2, · · · , cn). We want to find a subset of agents D such that
U ⊆ ⋃_{i∈D} Si. The selected subsets are called a set cover for
U. The social efficiency of the output D is defined as Σ_{i∈D} ci,
which is the objective function to be minimized. Clearly, this is
a utilitarian and thus VCG mechanism can be applied if we can
find the subset of S that covers U with the minimum cost. It is
well-known that finding the optimal solution is NP-hard. In [4], an
algorithm of approximation ratio of Hm has been proposed and it
has been proved that this is the best ratio possible for the set cover
problem. For the completeness of presentation, we review their
method here.
Algorithm 7 Greedy Set Cover (GSC)
Input: Agent i's subset Si and cost ci (1 ≤ i ≤ n).
Output: A set of agents that can cover all elements.
1: Initialize r = 1, T1 = ∅, and R = ∅.
2: while Tr ≠ U do
3: Find the set Sj with the minimum density cj/|Sj − Tr|.
4: Set Tr+1 = Tr ∪ Sj and R = R ∪ {j}.
5: r = r + 1
6: Output R.
Let GSC(S) be the set of agents selected by Algorithm 7. Notice that
the output set is a function of S and c. Some works assume that
the type of an agent is just ci, i.e., Si is assumed to be
public knowledge. Here, we consider a more general case in which
the type of an agent is (Si, ci). In other words, we assume that
every agent i can lie not only about its cost ci but also about
its set Si. This problem now looks similar to the combinatorial
auction with single-minded bidders studied in [12], but with the
following differences: in the set cover problem we want to cover all
the elements and the chosen sets can overlap, while in a
combinatorial auction the chosen sets are disjoint.
We can show that the mechanism M = (GSC, P^{VCG}), which uses
Algorithm 7 to find a set cover and applies the VCG mechanism to
compute the payments to the selected agents, is not truthful. Obviously,
the set cover problem is a binary demand game. For the moment,
we assume that agent i is not able to lie about Si; we will drop
this assumption later. We show how to design a truthful mechanism
by applying our general framework.
1. Check the monotonicity property: The output of
Algorithm 7 is a round-based output. Thus, for an agent i, we
first focus on the output of one round r. In round r, if i is
selected by Algorithm 7, then it has the minimum density ci/|Si − Tr|
among all remaining agents. Now consider the case when i
lies about its cost, reporting c′i < ci; obviously c′i/|Si − Tr| is still minimum
among all remaining agents. Consequently, agent i is still selected
in round r, which means that the output of round r satisfies
MP. Now we look at the updating rules. In every round,
we only update Tr+1 = Tr ∪ Sj and R = R ∪ {j}, which
is obviously crossing-independent. Thus, by applying Theorem
8, we know that the output of Algorithm 7 satisfies MP.
2. Find the cut value: To calculate the cut value for agent i
with fixed cost vector c−i, we follow the steps in Algorithm
4. First, we set ci = ∞ and apply Algorithm 7. Let ir be the
agent selected in round r and T^{−i}_{r+1} the corresponding set.
Then the cut value of round r is
ℓr = c_{ir} / |S_{ir} − T^{−i}_r| · |Si − T^{−i}_r|.
Remember that the updating rule only updates the game setting
and not the costs of the agents; thus we have gr(x) = x ≥ ℓr
for 1 ≤ r ≤ t. Therefore, the final cut value for agent i is
κi(GSC, c−i) = max_r { c_{ir} / |S_{ir} − T^{−i}_r| · |Si − T^{−i}_r| }.
The payment to an agent i is κi if i is selected; otherwise its
payment is 0.
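These two steps translate directly into code. The Python sketch below (ours; names hypothetical) runs the greedy with ci = ∞, records the winner and covered set of each round, and returns the maximum of the per-round cut values; it assumes the remaining sets still cover U once agent i is priced out:

def greedy_set_cover(U, sets, cost):
    # Algorithm 7; also returns (winner, covered-before) per round.
    T, picks = set(), []
    while T != U:
        j = min((j for j in range(len(sets)) if sets[j] - T),
                key=lambda j: cost[j] / len(sets[j] - T))
        picks.append((j, set(T)))
        T |= sets[j]
    return [j for j, _ in picks], picks

def gsc_cut_value(U, sets, cost, i):
    c = list(cost)
    c[i] = float("inf")                   # price agent i out
    _, picks = greedy_set_cover(U, sets, c)
    return max(cost[jr] / len(sets[jr] - Tr) * len(sets[i] - Tr)
               for jr, Tr in picks)       # max over all rounds r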
We now consider the scenario when agent i can lie about Si.
Assume that agent i cannot lie upward, i.e., it can only report a set
S′i ⊆ Si. We argue that agent i will not lie about its elements Si.
Notice that the cut value computed for round r would become ℓ′r =
c_{ir} / |S_{ir} − T^{−i}_r| · |S′i − T^{−i}_r|. Obviously |S′i − T^{−i}_r| ≤ |Si − T^{−i}_r| for any S′i ⊆ Si.
Thus, reporting the set S′i will not increase the cut value of any
round, and hence lying about Si will not improve agent i's utility.
5.2 Link Weighted Steiner Trees
Consider any link weighted network G = (V, E, c), where E =
{e1, e2, · · · , em} is the set of links and ci is the weight of link
ei. The link weighted Steiner tree problem is to find a tree rooted at
the source node s spanning a given set of nodes Q = {q1, q2, · · · , qk} ⊂
V. For simplicity, we assume that qi = vi for 1 ≤ i ≤ k. Here
the links are the agents. The total cost of the links in a subgraph H ⊆ G is
called the weight of H, denoted ω(H). It is NP-hard to find the
minimum cost multicast tree when given an arbitrary link weighted
graph G [17, 20]. The currently best polynomial-time method has
approximation ratio 1 + (ln 3)/2 [17]. Here, we review and discuss the
first approximation method, by Takahashi and Matsuyama [20].
Algorithm 8 Find Link Weighted Steiner Tree (LST)
Input: Network G = (V, E, c), where c is the cost vector for the link
set E; source node s and receiver set Q.
Output: A tree LST rooted at s that spans all receivers.
1: Set r = 1, G1 = G, Q^1 = Q and s^1 = s.
2: repeat
3: In graph Gr, find the receiver, say qi, that is closest to the
source s, i.e., such that LCP(s, qi, c) has the least cost among the
shortest paths from s to all receivers in Q^r.
4: Select all links on LCP(s, qi, c) as relay links and set their
costs to 0. The new graph is denoted Gr+1.
5: Set tr = qi and Pr = LCP(s, qi, c).
6: Set Q^{r+1} = Q^r \ {qi} and r = r + 1.
7: until all receivers are spanned.
Hereafter, let LST(G) be the final tree constructed using the
above method. It is shown in [24] that the mechanism M = (LST, p^{VCG})
is not truthful, where p^{VCG} is the payment calculated based on the
VCG mechanism.
We then show how to design a truthful payment scheme
using our general framework. Observe that the output Pr, for any
round r, satisfies MP, and that the updating rule of every round
satisfies crossing-independence. Thus, from Theorem 8, the
round-based output LST satisfies MP. In round r, the cut value for a
link ei can be obtained by using the VCG mechanism. Now we
set ci = ∞ and execute Algorithm 8. Let w^{−i}_r(c−i) be the cost of
the path Pr selected in the r-th round, and let Π^i_r(c−i) be the
shortest path selected in round r if the cost of ei is temporarily set to
−∞. Then the cut value for round r is ℓr = w^{−i}_r(c−i) − |Π^i_r(c−i)|,
where |Π^i_r(c−i)| is the cost of the path Π^i_r(c−i) excluding link
ei. Using Algorithm 4, we obtain the final cut value for agent i:
κi(LST, d−i) = max_r {ℓr}. Thus, the payment to a link ei is
κi(LST, d−i) if its reported cost satisfies di < κi(LST, d−i);
otherwise, its payment is 0.
5.3 Virtual Minimal Spanning Trees
To connect the given set of receivers to the source node, besides
the Steiner tree constructed by the algorithms described before, a
virtual minimum spanning tree is also often used. Assume that Q is
the set of receivers, including the sender. Assume that the nodes in
a node-weighted graph are all agents. The virtual minimum
spanning tree is constructed as follows.
Algorithm 9 Construct VMST
1: for all pairs of receivers q_i, q_j ∈ Q do
2:   Calculate the least cost path LCP(q_i, q_j, d).
3: Construct a virtual complete link weighted graph K(d) using Q as its node set, where the link q_iq_j corresponds to the least cost path LCP(q_i, q_j, d), and its weight is w(q_iq_j) = |LCP(q_i, q_j, d)|.
4: Build the minimum spanning tree on K(d), denoted VMST(d).
5: for every virtual link q_iq_j in VMST(d) do
6:   Find the corresponding least cost path LCP(q_i, q_j, d) in the original network.
7:   Mark the agents on LCP(q_i, q_j, d) selected.
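A minimal Python sketch of Algorithm 9 follows, assuming agents are nodes whose declared costs are given in node_cost; all helper names are ours:

import heapq

def construct_vmst(adj, node_cost, Q):
    """Build the virtual complete graph K(d) of least-cost-path weights
    between receivers, take its MST with Kruskal's algorithm, and mark
    the agents on the corresponding paths as selected.
    adj: dict node -> list of neighbors; node_cost: dict node -> d_k."""
    def lcp(src):
        # Dijkstra where entering a node v costs node_cost[v]
        dist, parent = {src: 0.0}, {src: None}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v in adj[u]:
                nd = d + node_cost.get(v, 0.0)
                if nd < dist.get(v, float('inf')):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, parent

    Q = list(Q)
    virtual, path_of = [], {}
    for a, qi in enumerate(Q):
        dist, parent = lcp(qi)
        for qj in Q[a + 1:]:
            node, path = qj, []
            while node is not None:        # recover LCP(qi, qj, d)
                path.append(node)
                node = parent[node]
            virtual.append((dist[qj], qi, qj))
            path_of[(qi, qj)] = path

    root = {q: q for q in Q}               # union-find for Kruskal
    def find(x):
        while root[x] != x:
            root[x] = root[root[x]]
            x = root[x]
        return x

    selected = set()
    for w, qi, qj in sorted(virtual, key=lambda t: t[0]):
        ri, rj = find(qi), find(qj)
        if ri != rj:
            root[ri] = rj
            selected.update(path_of[(qi, qj)])  # step 7: mark agents
    return selected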
The mechanism M = (VMST, p^{VCG}) is not truthful [24], where the payment p^{VCG} to a node is based on the VCG mechanism. We then show how to design a truthful mechanism based on the framework we described.
1. Check the monotonicity property: Remember that in the
complete graph K(d), the weight of a link qiqj is |LCP(qi, qj, d)|.
In other words, we implicitly defined |Q|(|Q| − 1)/2
functions fi,j, for all i < j and qi ∈ Q and qj ∈ Q, with
fi,j(d) = |LCP(qi, qj, d)|. We can show that the function
fi,j(d) = |LCP(qi, qj, d)| satisfies FMP, LCP satisfies MP,
and the output MST satisfies SMP. From Theorem 9, the
allocation rule VMST satisfies the monotonicity property.
2. Find the cut value: Notice that VMST is the combination of MST and the functions f_{i,j}, so the cut value for VMST can be computed based on Algorithm 6 as follows.

(a) Given a link weighted complete graph K(d) on Q, we should find the cut value function for an edge e_k = (q_i, q_j) based on MST. Given a spanning tree T and a pair of terminals p and q, clearly there is a unique path connecting them on T. We denote this path as Π_T(p, q), and the edge with the maximum length on this path as LE(p, q, T). Thus, the cut value can be represented as

κ_k(MST, d) = |LE(q_i, q_j, MST(d|^k_∞))|.

(b) We find the value-cost function for LCP. Assume v_k ∈ LCP(q_i, q_j, d); then the value-cost function is x_k = y_k − |LCP_{v_k}(q_i, q_j, d|^k_0)|. Here, LCP_{v_k}(q_i, q_j, d) is the least cost path between q_i and q_j with node v_k on this path.

(c) Remove v_k and calculate the value K(d|^k_∞). Set h_{(i,j)} = |LCP(q_i, q_j, d|^k_∞)| for every pair of nodes i ≠ j, and let h = {h_{(i,j)}} be the vector. Then it is easy to show that τ_{(i,j)} = |LE(q_i, q_j, MST(h|^{(i,j)}_∞))| is the cut value for the output VMST. It is easy to verify that min{h_{(i,j)}, τ_{(i,j)}} = |LE(q_i, q_j, MST(h))|. Thus, we know κ_k^{(i,j)}(VMST, d) is |LE(q_i, q_j, MST(h))| − |LCP_{v_k}(q_i, q_j, d|^k_0)|. The cut value for agent k is κ_k(VMST, d_{-k}) = max_{i,j} κ_k^{(i,j)}(VMST, d_{-k}).

3. We pay agent k κ_k(VMST, d_{-k}) if and only if k is selected in VMST(d); else we pay it 0.
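The subroutine LE(p, q, T) used above, the maximum-length edge on the unique tree path between two terminals, can be computed directly; a short Python sketch (names are ours):

def longest_edge_on_path(tree_adj, p, q):
    """LE(p, q, T): heaviest edge on the unique p-q path in a tree T.
    tree_adj: dict node -> list of (neighbor, weight).
    Returns (weight, (u, v)), or None when p == q."""
    stack = [(p, None, None)]      # (node, parent, heaviest edge so far)
    while stack:
        u, par, best = stack.pop()
        if u == q:
            return best
        for v, w in tree_adj[u]:
            if v != par:
                cand = best if best is not None and best[0] >= w else (w, (u, v))
                stack.append((v, u, cand))
    raise ValueError("p and q are not connected in T")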
5.4 Combinatorial Auctions
Lehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auctions. In a single-minded combinatorial auction, there is a set of items S to be sold and a set of agents 1 ≤ i ≤ n who want to buy some of the items: agent i wants to buy a subset S_i ⊆ S with maximum price m_i. A single-minded bidder i declares a bid b_i = (S_i, a_i) with S_i ⊆ S and a_i ∈ R^+. Two bids (S_i, a_i) and (S_j, a_j) conflict if S_i ∩ S_j ≠ ∅. Given the bids b_1, b_2, ..., b_n, they gave a greedy round-based algorithm as follows. First the bids are sorted by some criterion (a_i/|S_i|^{1/2} is used in [12]) in decreasing order, and let L be the list of sorted bids. The first bid is granted. Then the algorithm examines each bid of L in order and grants the bid if it does not conflict with any of the bids previously granted; if it does, it is denied. They proved that this greedy allocation scheme using the criterion a_i/|S_i|^{1/2} approximates the optimal allocation within a factor of √m, where m is the number of goods in S.
In the auction setting, we have c_i = −a_i. It is easy to verify that the output of the greedy algorithm is a round-based output. Remember that after bidder j is selected in round r, every bidder that conflicts with j will not be selected in later rounds. This is equivalent to updating the cost of every bidder having a conflict with j to 0, which satisfies crossing-independence. In addition, in any round, if bidder i is selected with a_i then it will still be selected when it declares a_i' > a_i. Thus, every round satisfies MP, and the cut value for round r is |S_i|^{1/2} · a_{j_r}/|S_{j_r}|^{1/2}, where j_r is the bidder selected in round r if we did not consider agent i at all. Notice that a_{j_r}/|S_{j_r}|^{1/2} does not increase as the round r increases, so the final cut value is |S_i|^{1/2} · a_j/|S_j|^{1/2}, where b_j is the first bid that has been denied but would have been selected were it not for the presence of bidder i. Thus, the payment by agent i is |S_i|^{1/2} · a_j/|S_j|^{1/2} if a_i ≥ |S_i|^{1/2} · a_j/|S_j|^{1/2}, and 0 otherwise. This payment scheme is exactly the same as the payment scheme in [12].
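A small Python sketch of the greedy allocation together with this critical-value payment (names and data layout are ours; each bid is an item set with a positive amount):

from math import sqrt

def greedy_auction(bids):
    """Greedy allocation for single-minded bidders and the critical-value
    payments described above. bids: list of (items, a) with items a
    frozenset and a > 0. Returns (winners, payments) keyed by bid index."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][1] / sqrt(len(bids[i][0])),
                   reverse=True)           # decreasing a_i / |S_i|^{1/2}
    taken, winners = set(), []
    for i in order:
        if not (bids[i][0] & taken):       # no conflict with prior grants
            winners.append(i)
            taken |= bids[i][0]

    payments = {}
    for i in winners:
        # Rerun the greedy without i; the first granted bid conflicting
        # with S_i is the bid denied only because of i's presence.
        covered, crit = set(), 0.0
        for j in order:
            if j == i:
                continue
            items_j, a_j = bids[j]
            if not (items_j & covered):
                covered |= items_j
                if items_j & bids[i][0]:
                    crit = a_j / sqrt(len(items_j))
                    break
        payments[i] = sqrt(len(bids[i][0])) * crit
    return winners, payments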
6. CONCLUSIONS
In this paper, we have studied how to design a truthful
mechanism M = (O, P) for a given allocation rule O for a binary
demand game. We first showed that the allocation rule O
satisfying the MP is a necessary and sufficient condition for a truthful
mechanism M to exist. We then formulate a general framework
for designing payment P such that the mechanism M = (O, P) is
truthful and computable in polynomial time. We further presented
several general composition-based techniques to compute P
efficiently for various allocation rules O. Several concrete examples
were discussed to demonstrate our general framework for
designing P and for composition-based techniques of computing P in
polynomial time.
In this paper, we have concentrated on how to compute P in
polynomial time. Our algorithms do not necessarily have the
optimal running time for computing P given O. It would be of interest
to design algorithms to compute P in optimal time. We have made
some progress in this research direction in [22] by providing an
algorithm to compute the payments for unicast in a node weighted
graph in optimal O(n log n + m) time.
Another research direction is to design an approximation
allocation rule O satisfying MP with a good approximation ratio for a
given binary demand game. Many works [12, 13] in the mechanism
design literature are in this direction. We point out here that the
goal of this paper is not to design a better allocation rule for a
problem, but to design an algorithm to compute the payments efficiently
when O is given. It would be of significance to design allocation
rules with good approximation ratios such that a given binary
demand game has a computationally efficient payment scheme.
In this paper, we have studied mechanism design for binary
demand games. However, some problems cannot be directly
formulated as binary demand games. The job scheduling problem in [2]
is such an example. For this problem, a truthful payment scheme P
exists for an allocation rule O if and only if the workload assigned
by O is monotonic in a certain manner. It wound be of interest to
generalize our framework for designing a truthful payment scheme
for a binary demand game to non-binary demand games. Towards
this research direction, Theorem 4 can be extended to a general
allocation rule O, whose range is R+
. The remaining difficulty is
then how to compute the payment P under mild assumptions about
the valuations if a truthful mechanism M = (O, P) does exist.
Acknowledgements
We would like to thank Rakesh Vohra, Tuomas Sandholm, and
anonymous reviewers for helpful comments and discussions.
7. REFERENCES
[1] Archer, A., Papadimitriou, C., Talwar, K., and Tardos, E. An approximate truthful mechanism for combinatorial auctions with single parameter agents. In ACM-SIAM SODA (2003), pp. 205-214.
[2] Archer, A., and Tardos, E. Truthful mechanisms for one-parameter agents. In Proceedings of the 42nd IEEE FOCS (2001), IEEE Computer Society, p. 482.
[3] Auletta, V., De Prisco, R., Penna, P., and Persiano, P. Deterministic truthful approximation mechanisms for scheduling related machines. In Proceedings of STACS (2004).
[4] Chvatal, V. A greedy heuristic for the set covering problem. Mathematics of Operations Research 4, 3 (1979), 233-235.
[5] Clarke, E. H. Multipart pricing of public goods. Public Choice (1971), 17-33.
[6] Muller, R., and Vohra, R. V. On dominant strategy mechanisms. Working paper, 2003.
[7] Devanur, N. R., Mihail, M., and Vazirani, V. V. Strategyproof cost-sharing mechanisms for set cover and facility location games. In ACM Electronic Commerce (EC'03) (2003).
[8] Feigenbaum, J., Krishnamurthy, A., Sami, R., and Shenker, S. Approximation and collusion in multicast cost sharing (abstract). In ACM Economic Conference (2001).
[9] Feigenbaum, J., Papadimitriou, C., Sami, R., and Shenker, S. A BGP-based mechanism for lowest-cost routing. In Proceedings of the 2002 ACM Symposium on Principles of Distributed Computing (2002), pp. 173-182.
[10] Green, J., and Laffont, J. J. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica (1977), 427-438.
[11] Groves, T. Incentives in teams. Econometrica (1973), 617-631.
[12] Lehmann, D., O'Callaghan, L. I., and Shoham, Y. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM 49, 5 (2002), 577-602.
[13] Mu'alem, A., and Nisan, N. Truthful approximation mechanisms for restricted combinatorial auctions: extended abstract. In 18th National Conference on Artificial Intelligence (2002), AAAI, pp. 379-384.
[14] Nisan, N., and Ronen, A. Algorithmic mechanism design. In Proc. 31st Annual ACM STOC (1999), pp. 129-140.
[15] Halperin, E. Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs. In Proceedings of the 11th Annual ACM-SIAM SODA (2000), pp. 329-337.
[16] Bar-Yehuda, R., and Even, S. A local ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics 25: Analysis and Design of Algorithms for Combinatorial Problems (1985), 27-46. Editors: G. Ausiello and M. Lucertini.
[17] Robins, G., and Zelikovsky, A. Improved Steiner tree approximation in graphs. In Proceedings of the 11th Annual ACM-SIAM SODA (2000), pp. 770-779.
[18] Zelikovsky, A. An 11/6-approximation algorithm for the network Steiner problem. Algorithmica 9, 5 (1993), 463-470.
[19] Hochbaum, D. S. Efficient bounds for the stable set, vertex cover, and set packing problems. Discrete Applied Mathematics 6 (1983), 243-254.
[20] Takahashi, H., and Matsuyama, A. An approximate solution for the Steiner problem in graphs. Math. Japonica 24 (1980), 573-577.
[21] Vickrey, W. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance (1961), 8-37.
[22] Wang, W., and Li, X.-Y. Truthful low-cost unicast in selfish wireless networks. IEEE Transactions on Mobile Computing (2005), to appear.
[23] Wang, W., Li, X.-Y., and Sun, Z. Design multicast protocols for non-cooperative networks. In IEEE INFOCOM (2005), to appear.
[24] Wang, W., Li, X.-Y., and Wang, Y. Truthful multicast in selfish wireless networks. In ACM MobiCom (2005).
Keywords: mechanism design; cut value function; objective function; monotonicity property; selfish wireless network; price; selfish agent; composition-based technique; demand game; truthful mechanism; pricing; vickrey-clarke-grove; combination; binary demand game
train_J-59

Cost Sharing in a Job Scheduling Problem Using the Shapley Value

A set of jobs need to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the cost share of a job.

1. INTRODUCTION
A set of jobs need to be served by a server. The server can
process only one job at a time. Each job has a finite
processing time and a per unit time waiting cost. Efficient ordering
of this queue directs us to serve the jobs in decreasing
order of the ratio of per unit time waiting cost to processing
time. To compensate for waiting by jobs, monetary
transfers to jobs are allowed. How should the jobs share the cost
equitably amongst themselves (through transfers)?
The problem of fair division of costs among agents in a
queue has many practical applications. For example,
computer programs are regularly scheduled on servers, data are
scheduled to be transmitted over networks, jobs are
scheduled in shop-floor on machines, and queues appear in many
public services (post offices, banks). Study of queueing
problems has attracted economists for a long time [7, 17].
Cost sharing is a fundamental problem in many settings
on the Internet. Internet can be seen as a common resource
shared by many users and the cost incured by using the
resource needs to be shared in an equitable manner. The
current surge in cost sharing literature from computer
scientists validate this claim [8, 11, 12, 6, 24]. Internet has many
settings in which our model of job scheduling appears and
the agents waiting in a queue incur costs (jobs scheduled on
servers, queries answered from a database, data scheduled
to be transmitted over a fixed bandwidth network etc.). We
hope that our analysis will give new insights on cost sharing
problems of this nature.
Recently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24]. While many settings do have submodular cost functions (for example, multi-cast transmission games [8]), the cost function of our game is supermodular. Also, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model.
A recent paper by Maniquet [15] is the closest to our model
and is the motivation behind our work1. Maniquet [15]
studies a model where he assumes all processing times are
unity. For such a model, he characterizes the Shapley value
rule using classical fairness axioms. Chun [1] interprets the
worth of a coalition of jobs in a different manner for the same
model and derives a reverse rule. Chun characterizes this
rule using similar fairness axioms. Chun [2] also studies the
envy properties of these rules. Moulin [22, 21] studies the
queueing problem from a strategic point view when per unit
waiting costs are unity. Moulin introduces new concepts in
the queueing settings such as splitting and merging of jobs,
and ways to prevent them.
Another stream of literature is on sequencing games,
first introduced by Curiel et al. [4]. For a detailed survey,
refer to Curiel et al. [3]. Curiel et al. [4] defined sequencing
games similar to our model, but in which an initial ordering
of jobs is given. Besides, their notion of worth of a coalition
is very different from the notions studied in Maniquet [15]
and Chun [1] (these are the notions used in our work too).
The particular notion of the worth of a coalition makes the
sequencing game of Curiel et al. [4] convex, whereas our
game is not convex and does not assume the presence of
any initial order. In summary, the focus of this stream of
1
The authors thank François Maniquet for several fruitful
discussions.
research is how to share the savings in costs from the
initial ordering to the optimal ordering amongst jobs (also see
Hamers et al. [9], Curiel et al. [5]). Recently, Klijn and
S´anchez [13, 14] considered sequencing games without any
initial ordering of jobs. They take two approaches to define
the worth of coalitions. One of their approaches, called the
tail game, is related to the reverse rule of Chun [1]. In the
tail game, jobs in a coalition are served after the jobs not in
the coalition are served. Klijn and S´anchez [14] showed that
the tail game is balanced. Further, they provide expressions
for the Shapley value in tail game in terms of marginal
vectors and reversed marginal vectors. We provide a simpler
expression of the Shapley value in the tail game,
generalizing the result in Chun [1]. Klijn and S´anchez [13] study the
core of this game in detail.
Strategic aspects of queueing problems have also been
researched. Mitra [19] studies the first best implementation
in queueing models with generic cost functions. First best
implementation means that there exists an efficient
mechanism in which jobs in the queue have a dominant strategy
to reveal their true types and their transfers add up to zero.
Suijs [27] shows that if waiting costs of jobs are linear then
first best implementation is possible. Mitra [19] shows that
among a more general class of queueing problems first best
implementation is possible if and only if the cost is linear.
For another queueing model, Mitra [18] shows that first best
implementation is possible if and only if the cost function
satisfies a combinatorial property and an independence
property. Moulin [22, 21] studies strategic concepts such as
splitting and merging in queueing problems with unit per unit
waiting costs.
The general cost sharing literature is vast and has a long
history. For a good survey, we refer to [20]. From the
seminal work of Shapley [25] to recent works on cost sharing in
multi-cast transmission and optimization problems [8, 6, 23]
this area has attracted economists, computer scientists, and
operations researchers.
1.1 Our Contribution
Ours is the first model which considers cost sharing when
both processing time and per unit waiting cost of jobs are
present. We take a cooperative game theory approach and
apply the classical Shapley value rule to the problem. We
show that the Shapley value rule satisfies many intuitive
fairness axioms. Due to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting. We introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time. A key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner among themselves in an efficient ordering). If such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it was the only job in the queue). If there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (1/2) in an efficient ordering. Depending on the ordering selected, one job inflicts cost on the other. Our fairness axiom says that each job should at least bear such expected costs.
axioms. We also extend the envy results in [2] to our setting
and discuss a class of reasonable cost sharing mechanisms.
2. THE MODEL
There are n jobs that need to be served by one server
which can process only one job at a time. The set of jobs
are denoted as N = {1, . . . , n}. σ : N → N is an ordering of
jobs in N and σi denotes the position of job i in the ordering
σ. Given an ordering σ, define Fi(σ) = {j ∈ N : σi < σj}
and Pi(σ) = {j ∈ N : σi > σj}.
Every job i is identified by two parameters: (pi, θi). pi
is the processing time and θi is the cost per unit waiting
time of job i. Thus, a queueing problem is defined by a list
q = (N, p, θ) ∈ Q, where Q is the set of all possible lists. We
will denote γ_i = θ_i/p_i. Given an ordering of jobs σ, the cost incurred by job i is given by

c_i(σ) = p_iθ_i + θ_i Σ_{j∈P_i(σ)} p_j.
The total cost incurred by all jobs due to an ordering σ
can be written in two ways: (i) by summing the cost incurred
by every job and (ii) by summing the costs inflicted by a job
on other jobs with their own processing cost.
C(N, σ) = Σ_{i∈N} c_i(σ) = Σ_{i∈N} p_iθ_i + Σ_{i∈N} (θ_i Σ_{j∈P_i(σ)} p_j)
= Σ_{i∈N} p_iθ_i + Σ_{i∈N} (p_i Σ_{j∈F_i(σ)} θ_j).
An efficient ordering σ* is the one which minimizes the total cost incurred by all jobs. So, C(N, σ*) ≤ C(N, σ) ∀ σ ∈ Σ. To achieve notational simplicity, we will write the total
cost in an efficient ordering of jobs from N as C(N)
whenever it is not confusing. Sometimes, we will deal with only
a subset of jobs S ⊆ N. The ordering σ will then be
defined on jobs in S only and we will write the total cost from
an efficient ordering of jobs in S as C(S). The following
lemma shows that jobs are ordered in decreasing γ in an
efficient ordering. This is also known as the weighted shortest
processing time rule, first introduced by Smith [26].
Lemma 1. For any S ⊆ N, let σ* be an efficient ordering of jobs in S. For every i ≠ j, i, j ∈ S, if σ*_i > σ*_j, then γ_i ≤ γ_j.
Proof. Assume for contradiction that the statement of the lemma is not true. This means we can find two consecutive jobs i, j ∈ S (σ*_i = σ*_j + 1) such that γ_i > γ_j. Define a new ordering σ by interchanging i and j in σ*. The costs to jobs in S \ {i, j} are not changed from σ* to σ. The difference between total costs in σ* and σ is given by C(S, σ) − C(S, σ*) = θ_jp_i − θ_ip_j. From efficiency we get θ_jp_i − θ_ip_j ≥ 0. This gives us γ_j ≥ γ_i, which is a contradiction.
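For concreteness, a small Python sketch of this rule and of the resulting total cost; the function names and the (p, theta) dictionary layout are ours:

def efficient_ordering(jobs):
    """Smith's weighted-shortest-processing-time rule: serve jobs in
    decreasing gamma_i = theta_i / p_i. jobs: dict id -> (p, theta)."""
    return sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)

def total_cost(jobs, order):
    """C(N, sigma): each job pays theta_i times its completion time,
    i.e. p_i*theta_i plus theta_i times the preceding processing times."""
    elapsed, cost = 0.0, 0.0
    for i in order:
        p, theta = jobs[i]
        elapsed += p               # completion time of job i
        cost += theta * elapsed
    return cost

For example, total_cost(jobs, efficient_ordering(jobs)) evaluates C(N).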
An allocation for q = (N, p, θ) ∈ Q has two components:
an ordering σ and a transfer ti for every job i ∈ N. ti
denotes the payment received by job i. Given a transfer ti
and an ordering σ, the cost share of job i is defined as

π_i = c_i(σ) − t_i = θ_i Σ_{j∈N: σ_j ≤ σ_i} p_j − t_i.
An allocation (σ, t) is efficient for q = (N, p, θ) whenever σ is an efficient ordering and Σ_{i∈N} t_i = 0. The set of efficient orderings of q is denoted as Σ*(q), and σ*(q) will be used to refer to a typical element of the set. The following straightforward lemma says that for two different efficient orderings, the cost share in one efficient allocation is possible to achieve in the other by appropriately modifying the transfers.

Lemma 2. Let (σ, t) be an efficient allocation and π be the vector of cost shares of jobs from this allocation. If σ* ≠ σ is an efficient ordering and t*_i = c_i(σ*) − π_i ∀ i ∈ N, then (σ*, t*) is also an efficient allocation.

Proof. Since (σ, t) is efficient, Σ_{i∈N} t_i = 0. This gives Σ_{i∈N} π_i = C(N). Since σ* is an efficient ordering, Σ_{i∈N} c_i(σ*) = C(N). This means Σ_{i∈N} t*_i = Σ_{i∈N} [c_i(σ*) − π_i] = 0. So, (σ*, t*) is an efficient allocation.
Depending on the transfers, the cost shares in different
efficient allocations may differ. An allocation rule ψ associates
with every q ∈ Q a non-empty subset ψ(q) of allocations.
3. COST SHARING USING THE SHAPLEY
VALUE
In this section, we define the coalitional cost of this game
and analyze the solution proposed by the Shapley value.
Given a queue q ∈ Q, the cost of a coalition of S ⊆ N jobs
in the queue is defined as the cost incurred by jobs in S if
these are the only jobs served in the queue using an efficient
ordering. Formally, the cost of a coalition S ⊆ N is

C(S) = Σ_{i∈S} Σ_{j∈S: σ*_j ≤ σ*_i} θ_i p_j,

where σ* = σ*(S) is an efficient ordering considering jobs
from S only. The worth of a coalition of S jobs is just
−C(S). Maniquet [15] observes another equivalent way to
define the worth of a coalition is using the dual function of
the cost function C(·). Other interesting ways to define the
worth of a coalition in such games is discussed by Chun [1],
who assume that a coalition of jobs are served after the jobs
not in the coalition are served.
The Shapley value (or cost share) of a job i is defined as

SV_i = Σ_{S⊆N\{i}} [ |S|! (|N| − |S| − 1)! / |N|! ] (C(S ∪ {i}) − C(S)).   (1)
The Shapley value allocation rule says that jobs are ordered
using an efficient ordering and transfers are assigned to jobs
such that the cost share of job i is given by Equation 1.
Lemma 3. Let σ* be an efficient ordering of jobs in set N. For all i ∈ N, the Shapley value is given by

SV_i = p_iθ_i + (1/2)(L_i + R_i),

where L_i = θ_i Σ_{j∈P_i(σ*)} p_j and R_i = p_i Σ_{j∈F_i(σ*)} θ_j.
Proof. Another way to write the Shapley value formula is the following [10]:

SV_i = Σ_{S⊆N: i∈S} ∆(S)/|S|,

where ∆(S) = C(S) if |S| = 1 and ∆(S) = C(S) − Σ_{T⊊S} ∆(T). This gives ∆({i}) = C({i}) = p_iθ_i ∀ i ∈ N. For any i, j ∈ N with i ≠ j, we have

∆({i, j}) = C({i, j}) − C({i}) − C({j})
= min(p_iθ_i + p_jθ_j + p_jθ_i, p_iθ_i + p_jθ_j + p_iθ_j) − p_iθ_i − p_jθ_j
= min(p_jθ_i, p_iθ_j).

We will show by induction that ∆(S) = 0 if |S| > 2. For |S| = 3, let S = {i, j, k}. Without loss of generality, assume θ_i/p_i ≥ θ_j/p_j ≥ θ_k/p_k. So,

∆(S) = C(S) − ∆({i, j}) − ∆({j, k}) − ∆({i, k}) − ∆({i}) − ∆({j}) − ∆({k})
= C(S) − p_iθ_j − p_jθ_k − p_iθ_k − p_iθ_i − p_jθ_j − p_kθ_k = C(S) − C(S) = 0.

Now, assume that ∆(T) = 0 for every T ⊊ S with |T| > 2. Without loss of generality assume σ to be the identity mapping. Now,

∆(S) = C(S) − Σ_{T⊊S} ∆(T)
= C(S) − Σ_{i∈S} Σ_{j∈S: j<i} ∆({i, j}) − Σ_{i∈S} ∆({i})
= C(S) − Σ_{i∈S} Σ_{j∈S: j<i} p_jθ_i − Σ_{i∈S} p_iθ_i
= C(S) − C(S) = 0.

This proves that ∆(S) = 0 if |S| > 2. Using the Shapley value formula now,

SV_i = Σ_{S⊆N: i∈S} ∆(S)/|S| = ∆({i}) + (1/2) Σ_{j∈N: j≠i} ∆({i, j})
= p_iθ_i + (1/2)(Σ_{j<i} ∆({i, j}) + Σ_{j>i} ∆({i, j}))
= p_iθ_i + (1/2)(Σ_{j<i} p_jθ_i + Σ_{j>i} p_iθ_j) = p_iθ_i + (1/2)(L_i + R_i).
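Lemma 3 gives a closed form that avoids the exponential sum in Equation 1; a minimal Python sketch (names are ours):

def shapley_cost_shares(jobs):
    """Closed-form shares SV_i = p_i*theta_i + (L_i + R_i)/2 from Lemma 3.
    jobs: dict id -> (p, theta). The shares sum to C(N) (budget balance)."""
    order = sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)
    shares = {}
    for pos, i in enumerate(order):
        p_i, th_i = jobs[i]
        L = th_i * sum(jobs[j][0] for j in order[:pos])      # L_i
        R = p_i * sum(jobs[j][1] for j in order[pos + 1:])   # R_i
        shares[i] = p_i * th_i + 0.5 * (L + R)
    return shares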
4. AXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE

In this section, we will define several axioms on fairness and characterize the Shapley value using them. For a given q ∈ Q, we will denote ψ(q) as the set of allocations from allocation rule ψ. Also, we will denote the cost share vector associated with an allocation (σ, t) as π and that with an allocation (σ', t') as π', etc.
4.1 The Fairness Axioms
We will define three types of fairness axioms: (i) related
to efficiency, (ii) related to equity, and (iii) related to
independence.
Efficiency Axioms
We define two types of efficiency axioms. The first, related to efficiency, states that an efficient ordering should be selected and the transfers of jobs should add up to zero (budget balance).
Definition 1. An allocation rule ψ satisfies efficiency if
for every q ∈ Q and (σ, t) ∈ ψ(q), (σ, t) is an efficient
allocation.
The second axiom related to efficiency says that the
allocation rule should not discriminate between two allocations
which are equivalent to each other in terms of cost shares of
jobs.
Definition 2. An allocation rule ψ satisfies Pareto indifference if for every q ∈ Q, (σ, t) ∈ ψ(q), and (σ', t') ∈ Σ(q), we have

(π'_i = π_i ∀ i ∈ N) ⇒ ((σ', t') ∈ ψ(q)).
An implication of the Pareto indifference axiom and Lemma 2 is that for every efficient ordering there is some set of transfers of jobs such that it is part of an efficient rule, and the cost share of a job in all these allocations is the same.
Equity Axioms
How should the cost be shared between two jobs if the jobs
have some kind of similarity between them? Equity axioms
provide us with fairness properties which help us answer
this question. We provide five such axioms. Some of these
axioms (for example anonymity, equal treatment of equals)
are standard in the literature, while some are new.
We start with a well known equity axiom called anonymity.
Denote ρ : N → N as a permutation of elements in N. Let
ρ(σ, t) denote the allocation obtained by permuting elements
in σ and t according to ρ. Similarly, let ρ(p, θ) denote the
new list of (p, θ) obtained by permuting elements of p and θ
according to ρ. Our first equity axiom states that allocation
rules should be immune to such permutation of data.
Definition 3. An allocation rule ψ satisfies anonymity if for all q ∈ Q, (σ, t) ∈ ψ(q), and every permutation ρ, we have ρ(σ, t) ∈ ψ(N, ρ(p, θ)).
The next equity axiom is classical in literature and says
that two similar jobs should be compensated such that their
cost shares are equal. This implies that if all the jobs are of
same type, then jobs should equally share the total system
cost.
Definition 4. An allocation rule ψ satisfies equal treatment of equals (ETE) if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N,

(p_i = p_j; θ_i = θ_j) ⇒ (π_i = π_j).
ETE directs us to share costs equally between jobs if they have the same per unit waiting cost and processing time. But it is silent about the cost shares of two jobs i and j which satisfy θ_i/p_i = θ_j/p_j. We introduce a new axiom for this.
If an efficient rule chooses σ such that σ_i < σ_j for some i, j ∈ N, then job i is inflicting a cost of p_iθ_j on job j and job j is inflicting zero cost on job i. Define for some γ ≥ 0, S(γ) = {i ∈ N : γ_i = γ}. In an efficient rule, the elements in S(γ) can be ordered in any manner (in |S(γ)|! ways). If i, j ∈ S(γ) then we have p_jθ_i = p_iθ_j. The probability of σ_i < σ_j is 1/2, and so is the probability of σ_i > σ_j. The expected cost i inflicts on j is (1/2)p_iθ_j and j inflicts on i is (1/2)p_jθ_i. Our next fairness axiom says that i and j should each be responsible for their own processing cost and this expected cost they inflict on each other. Arguing for every pair of jobs i, j ∈ S(γ), we establish a bound on the cost share of jobs in S(γ). We impose this as an equity axiom below.
Definition 5. An allocation rule satisfies expected cost bound (ECB) if for all q ∈ Q, (σ, t) ∈ ψ(q) with π being the resulting cost share, for any γ ≥ 0, and for every i ∈ S(γ), we have

π_i ≥ p_iθ_i + (1/2)(Σ_{j∈S(γ): σ_j<σ_i} p_jθ_i + Σ_{j∈S(γ): σ_j>σ_i} p_iθ_j).
The central idea behind this axiom is that of expected
cost inflicted. If an allocation rule chooses multiple
allocations, we can assign equal probabilities of selecting one of
the allocations. In that case, the expected cost inflicted by
a job i on another job j in the allocation rule can be
calculated. Our axiom says that the cost share of a job should
be at least its own processing cost and the total expected
cost it inflicts on others. Note that the above bound poses
no constraints on how the costs are shared among different
groups. Also observe that if S(γ) contains just one job, ECB
says that job should at least bear its own processing cost.
A direct consequence of ECB is the following lemma.
Lemma 4. Let ψ be an efficient rule which satisfies ECB. For a q ∈ Q, if S(γ) = N, then for any (σ, t) ∈ ψ(q) which gives a cost share of π, π_i = p_iθ_i + (1/2)(L_i + R_i) ∀ i ∈ N.

Proof. From ECB, we get π_i ≥ p_iθ_i + (1/2)(L_i + R_i) ∀ i ∈ N. Assume for contradiction that there exists j ∈ N such that π_j > p_jθ_j + (1/2)(L_j + R_j). Using efficiency and the fact that Σ_{i∈N} L_i = Σ_{i∈N} R_i, we get C(N) = Σ_{i∈N} π_i > Σ_{i∈N} p_iθ_i + (1/2) Σ_{i∈N} (L_i + R_i) = C(N). This gives us a contradiction.
Next, we introduce an axiom about sharing the transfer
of a job between a set of jobs. In particular, if the last
job quits the system, then the ordering need not change.
But the transfer to the last job needs to be shared between
the other jobs. This should be done in proportion to their
processing times because every job influenced the last job
based on its processing time.
Definition 6. An allocation rule ψ satisfies proportionate responsibility of p (PRp) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), and k ∈ N such that σ_k = |N|, for q' = (N \ {k}, p', θ') ∈ Q with θ'_i = θ_i and p'_i = p_i for all i ∈ N \ {k}, there exists (σ', t') ∈ ψ(q') such that for all i ∈ N \ {k}: σ'_i = σ_i and

t'_i = t_i + t_k · p_i / Σ_{j≠k} p_j.
An analogous fairness axiom results if we remove the job
from the beginning of the queue. Since the presence of the
first job influenced each job depending on their θ values, its
transfer needs to be shared in proportion to θ values.
Definition 7. An allocation rule ψ satisfies proportionate responsibility of θ (PRθ) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), and k ∈ N such that σ_k = 1, for q' = (N \ {k}, p', θ') ∈ Q with θ'_i = θ_i and p'_i = p_i for all i ∈ N \ {k}, there exists (σ', t') ∈ ψ(q') such that for all i ∈ N \ {k}: σ'_i = σ_i and

t'_i = t_i + t_k · θ_i / Σ_{j≠k} θ_j.
The proportionate responsibility axioms are
generalizations of equal responsibility axioms introduced by
Maniquet [15].
Independence Axioms

The waiting cost of a job does not depend on the per unit waiting costs of its preceding jobs. Similarly, the waiting cost inflicted by a job on its following jobs is independent of the processing times of the following jobs. These independence properties should be carried over to the cost sharing rules. This gives us two independence axioms.

Definition 8. An allocation rule ψ satisfies independence of preceding jobs' θ (IPJθ) if for all q = (N, p, θ), q' = (N, p', θ') ∈ Q, (σ, t) ∈ ψ(q), (σ', t') ∈ ψ(q'), if for all i ∈ N \ {k}: θ'_i = θ_i, p'_i = p_i, and γ_k < γ'_k, p_k = p'_k, then for all j ∈ N such that σ_j > σ_k: π_j = π'_j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t').

Definition 9. An allocation rule ψ satisfies independence of following jobs' p (IFJp) if for all q = (N, p, θ), q' = (N, p', θ') ∈ Q, (σ, t) ∈ ψ(q), (σ', t') ∈ ψ(q'), if for all i ∈ N \ {k}: θ'_i = θ_i, p'_i = p_i, and γ_k > γ'_k, θ_k = θ'_k, then for all j ∈ N such that σ_j < σ_k: π_j = π'_j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t').
4.2 The Characterization Results
Having stated the fairness axioms, we propose three
different ways to characterize the Shapley value rule using
these axioms. All our characterizations involve efficiency
and ECB. But if we have IPJθ, we either need IFJp or PRp.
Similarly if we have IFJp, we either need IPJθ or PRθ.
Proposition 1. Any efficient rule ψ that satisfies ECB,
IPJθ, and IFJp is a rule implied by the Shapley value rule.
Proof. Define for any i, j ∈ N, θ^i_j = γ_i p_j and p^i_j = θ_j/γ_i. Assume without loss of generality that σ is an efficient ordering with σ_i = i ∀ i ∈ N.

Consider the following q' = (N, p', θ') corresponding to job i, with p'_j = p_j if j ≤ i and p'_j = p^i_j if j > i, and θ'_j = θ^i_j if j < i and θ'_j = θ_j if j ≥ i. Observe that all jobs have the same γ: γ_i. By Lemma 2 and efficiency, (σ, t') ∈ ψ(q') for some set of transfers t'. Using Lemma 4, we get the cost share of i from (σ, t') as π_i = p_iθ_i + (1/2)(L_i + R_i). Now, for any j < i, if we change θ'_j to θ_j without changing the processing time, the new γ of j is γ_j ≥ γ_i. Applying IPJθ, the cost share of job i should not change. Similarly, for any job j > i, if we change p'_j to p_j without changing θ_j, the new γ of j is γ_j ≤ γ_i. Applying IFJp, the cost share of job i should not change. Applying this procedure for every j < i with IPJθ and for every j > i with IFJp, we reach q = (N, p, θ) and the payoff of i does not change from π_i. Using this argument for every i ∈ N and using the expression for the Shapley value in Lemma 3, we get the Shapley value rule.
It is possible to replace one of the independence axioms
with an equity axiom on sharing the transfer of a job. This
is shown in Propositions 2 and 3.
Proposition 2. Any efficient rule ψ that satisfies ECB,
IPJθ, and PRp is a rule implied by the Shapley value rule.
Proof. As in the proof of Proposition 1, define θ^i_j = γ_i p_j ∀ i, j ∈ N. Assume without loss of generality that σ is an efficient ordering with σ_i = i ∀ i ∈ N.

Consider a queue with jobs in set K = {1, ..., i, i+1}, where i < n. Define q' = (K, p, θ'), where θ'_j = θ^{i+1}_j ∀ j ∈ K. Define σ'_j = σ_j ∀ j ∈ K. σ' is an efficient ordering for q'. By ECB and Lemma 4, the cost share of job i+1 in any allocation chosen by ψ must be π_{i+1} = p_{i+1}θ_{i+1} + (1/2) Σ_{j<i+1} p_jθ_{i+1}. Now, consider q'' = (K, p, θ'') such that θ''_j = θ^i_j ∀ j ≤ i and θ''_{i+1} = θ_{i+1}. σ' remains an efficient ordering in q'', and by IPJθ the cost share of i+1 remains π_{i+1}. In q''' = (K \ {i+1}, p, θ''), we can calculate the cost share of job i using ECB and Lemma 4 as π_i = p_iθ_i + (1/2) Σ_{j<i} p_jθ_i. So, using PRp we get the new cost share of job i in q'' as π'_i = π_i + t_{i+1} p_i / Σ_{j<i+1} p_j = p_iθ_i + (1/2)(Σ_{j<i} p_jθ_i + p_iθ_{i+1}).

Now, we can set K' = K ∪ {i+2}. As before, we can find the cost share of i+2 in this queue as π_{i+2} = p_{i+2}θ_{i+2} + (1/2) Σ_{j<i+2} p_jθ_{i+2}. Using PRp we get the new cost share of job i in the new queue as π''_i = p_iθ_i + (1/2)(Σ_{j<i} p_jθ_i + p_iθ_{i+1} + p_iθ_{i+2}). This process can be repeated till we add job n, at which point the cost share of i is p_iθ_i + (1/2)(Σ_{j<i} p_jθ_i + Σ_{j>i} p_iθ_j). Then, we can adjust the θ of the preceding jobs of i to their original values and, applying IPJθ, the payoffs of jobs i through n will not change. This gives us the Shapley values of jobs i through n. Setting i = 1, we get the cost shares of all the jobs from ψ as the Shapley value.
Proposition 3. Any efficient rule ψ that satisfies ECB,
IFJp, and PRθ is a rule implied by the Shapley value rule.
Proof. The proof mirrors the proof of Proposition 2; we provide a short sketch. Analogous to the proof of Proposition 2, the θs are kept equal to the original data and processing times are initialized to p^{i+1}_j. This allows us to use IFJp. Also, in contrast to Proposition 2, we consider K = {i, i+1, ..., n} and repeatedly add jobs to the beginning of the queue, maintaining the same efficient ordering. So, we add the cost components of preceding jobs to the cost shares of jobs in each iteration and converge to the Shapley value rule.
The next proposition shows that the Shapley value rule
satisfies all the fairness axioms discussed.
Proposition 4. The Shapley value rule satisfies efficiency,
Pareto indifference, anonymity, ETE, ECB, IPJθ, IFJp, PRp,
and PRθ.
Proof. The Shapley value rule chooses an efficient ordering and by definition the payments add up to zero. So, it satisfies efficiency.

The Shapley value assigns the same cost share to a job irrespective of the efficient ordering chosen. So, it is Pareto indifferent.

The Shapley value is anonymous because the particular index of a job does not affect his ordering or cost share.
For ETE, consider two jobs i, j ∈ N such that pi = pj
and θi = θj. Without loss of generality assume the efficient
ordering to be 1, . . . , i, . . . , j, . . . , n. Now, the Shapley value
of job i is
SV_i = p_iθ_i + (1/2)(L_i + R_i)   (from Lemma 3)
= p_jθ_j + (1/2)(L_j + R_j) + (1/2)(L_i − L_j + R_i − R_j)
= SV_j + (1/2)(Σ_{i<k≤j} p_iθ_k − Σ_{i≤k<j} p_kθ_i)
= SV_j + (1/2) Σ_{i<k≤j} (p_iθ_k − p_kθ_i)   (using p_i = p_j and θ_i = θ_j)
= SV_j   (using θ_k/p_k = θ_i/p_i for all i ≤ k ≤ j).
The Shapley value satisfies ECB by its expression in Lemma
3.
Consider any job i in an efficient ordering σ. If we increase the value of γ_j for some j ≠ i such that σ_j < σ_i, then the set P_i (of preceding jobs) does not change in the new efficient ordering. If γ_j is changed such that p_j remains the same, then the expression Σ_{j∈P_i} θ_ip_j is unchanged. If the (p, θ) values of no other jobs are changed, then the Shapley value is unchanged by increasing γ_j for some j ∈ P_i while keeping p_j unchanged. Thus, the Shapley value rule satisfies IPJθ. An analogous argument shows that the Shapley value rule satisfies IFJp.
For PRp, assume without loss of generality that jobs are ordered 1, ..., n in an efficient ordering. Denote the transfer of job i ≠ n due to the Shapley value with set of jobs N and set of jobs N \ {n} as t_i and t'_i respectively. The transfer of the last job is t_n = (1/2) θ_n Σ_{j<n} p_j. Now,

t_i = (1/2)(θ_i Σ_{j<i} p_j − p_i Σ_{j>i} θ_j)
= (1/2)(θ_i Σ_{j<i} p_j − p_i Σ_{j>i: j≠n} θ_j) − (1/2) p_iθ_n
= t'_i − (1/2) θ_n Σ_{j<n} p_j · (p_i / Σ_{j<n} p_j)
= t'_i − t_n · p_i / Σ_{j<n} p_j.
A similar argument shows that the Shapley value rule
satisfies PRθ.
This series of propositions leads us to our main result.
Theorem 1. Let ψ be an allocation rule. The following
statements are equivalent:
1) For each q ∈ Q, ψ(q) selects all the allocations assigning jobs cost shares implied by the Shapley value.
2) ψ satisfies efficiency, ECB, IFJp, and IPJθ.
3) ψ satisfies efficiency, ECB, IFJp, and PRθ.
4) ψ satisfies efficiency, ECB, PRp, and IPJθ.
Proof. The proof follows from Propositions 1, 2, 3, and
4.
5. DISCUSSIONS
5.1 A Reasonable Class of Cost Sharing
Mechanisms
In this section, we will define a reasonable class of cost
sharing mechanisms. We will show how these reasonable
mechanisms lead to the Shapley value mechanism.
Definition 10. An allocation rule ψ is reasonable if for all q ∈ Q and (σ, t) ∈ ψ(q) we have, for all i ∈ N,

t_i = α (θ_i Σ_{j∈P_i(σ)} p_j − p_i Σ_{j∈F_i(σ)} θ_j),

where 0 ≤ α ≤ 1.
The reasonable cost sharing mechanism says that every
job should be paid a constant fraction of the difference
between the waiting cost he incurs and the waiting cost he
inflicts on other jobs. If α = 0, then every job bears its
own cost. If α = 1, then every job gets compensated for its
waiting cost but compensates others for the cost he inflicts
on others. The Shapley value rule comes as a result of ETE
as shown in the following proposition.
Proposition 5. Any efficient and reasonable allocation
rule ψ that satisfies ETE is a rule implied by the Shapley
value rule.
Proof. Consider a q ∈ Q in which p_i = p_j and θ_i = θ_j. Let (σ, t) ∈ ψ(q) and π be the resulting cost shares. From ETE, we get

π_i = π_j
⇒ c_i(σ) − t_i = c_j(σ) − t_j
⇒ p_iθ_i + (1 − α)L_i + αR_i = p_jθ_j + (1 − α)L_j + αR_j   (since ψ is efficient and reasonable)
⇒ (1 − α)(L_i − L_j) = α(R_j − R_i)   (using p_i = p_j, θ_i = θ_j)
⇒ 1 − α = α   (using L_i − L_j = R_j − R_i ≠ 0)
⇒ α = 1/2.

This gives us the Shapley value rule by Lemma 3.
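A small Python sketch of the transfers in Definition 10 (names are ours); with alpha = 1/2 these are exactly the Shapley value transfers derived in the proof of Proposition 4, and they sum to zero for any alpha:

def reasonable_transfers(jobs, alpha=0.5):
    """t_i = alpha * (waiting cost i incurs - waiting cost i inflicts).
    jobs: dict id -> (p, theta)."""
    order = sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)
    transfers = {}
    for pos, i in enumerate(order):
        p_i, th_i = jobs[i]
        incurred = th_i * sum(jobs[j][0] for j in order[:pos])
        inflicted = p_i * sum(jobs[j][1] for j in order[pos + 1:])
        transfers[i] = alpha * (incurred - inflicted)
    return transfers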
5.2 Results on Envy
Chun [2] discusses a fairness condition called no-envy for the case when the processing times of all jobs are unity.

Definition 11. An allocation rule satisfies no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have π_i ≤ c_i(σ^{ij}) − t_j, where π is the cost share from allocation (σ, t) and σ^{ij} is the ordering obtained by swapping i and j.
From the result in [2], the Shapley value rule does not satisfy no-envy in our model either. To overcome this, Chun [2] introduces the notion of adjusted no-envy, which he shows is satisfied by the Shapley value rule when the processing times of all jobs are unity. Here, we show that adjusted no-envy continues to hold for the Shapley value rule in our model (when processing times need not be unity).
As before, let σ^{ij} denote the ordering where the positions of i and j are swapped from an ordering σ. For adjusted no-envy, if (σ, t) is an allocation for some q ∈ Q, let t^{ij}_i be the transfer of job i when the transfer of i is calculated with respect to the ordering σ^{ij}. Observe that an allocation may not allow for calculation of t^{ij}_i. For example, if ψ is efficient, then t^{ij}_i cannot be calculated if σ^{ij} is not also efficient. For simplicity, we state the definition of adjusted no-envy to apply to all such rules.

Definition 12. An allocation rule satisfies adjusted no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have

π_i ≤ c_i(σ^{ij}) − t^{ij}_i.
Proposition 6. The Shapley value rule satisfies adjusted no-envy.

Proof. Without loss of generality, assume the efficient ordering of jobs is 1, ..., n. Consider two jobs i and i + k. From Lemma 3,

SV_i = p_iθ_i + (1/2)(Σ_{j<i} θ_ip_j + Σ_{j>i} θ_jp_i).

Let π̂_i be the cost share of i due to the adjusted transfer t^{i,i+k}_i in the ordering σ^{i,i+k}:

π̂_i = c_i(σ^{i,i+k}) − t^{i,i+k}_i
= p_iθ_i + (1/2)(Σ_{j<i} θ_ip_j + θ_ip_{i+k} + Σ_{i<j<i+k} θ_ip_j + Σ_{j>i} θ_jp_i − θ_{i+k}p_i − Σ_{i<j<i+k} θ_jp_i)
= SV_i + (1/2) Σ_{i<j≤i+k} (θ_ip_j − θ_jp_i)
≥ SV_i   (using the fact that θ_i/p_i ≥ θ_j/p_j for i < j).
6. CONCLUSION
We studied the problem of sharing costs for a job
scheduling problem on a single server, when jobs have processing
times and unit time waiting costs. We took a cooperative
game theory approach and showed that the famous Shapley value rule satisfies many nice fairness properties. We
characterized the Shapley value rule using different intuitive
fairness axioms.
In the future, we plan to further simplify some of the fairness
axioms. Some initial simplifications already appear in [16],
where we provide an alternative axiom to ECB and also
discuss the implication of transfers between jobs (instead of
transfers from jobs to a central server). We also plan to look
at cost sharing mechanisms other than the Shapley value.
Investigating the strategic power of jobs in such mechanisms
is another line of future research.
7. REFERENCES
[1] Youngsub Chun. A Note on Maniquet's Characterization of the Shapley Value in Queueing Problems. Working Paper, Rochester University, 2004.
[2] Youngsub Chun. No-envy in Queuing Problems.
Working Paper, Rochester University, 2004.
[3] Imma Curiel, Herbert Hamers, and Flip Klijn. Sequencing Games: A Survey. In Peter Borm and Hans Peters, editors, Chapters in Game Theory. Theory and Decision Library, Kluwer Academic Publishers, 2002.
[4] Imma Curiel, Giorgio Pederzoli, and Stef Tijs.
Sequencing Games. European Journal of Operational
Research, 40:344-351, 1989.
[5] Imma Curiel, Jos Potters, Rajendra Prasad, Stef Tijs,
and Bart Veltman. Sequencing and Cooperation.
Operations Research, 42(3):566-568, May-June 1994.
[6] Nikhil R. Devanur, Milena Mihail, and Vijay V.
Vazirani. Strategyproof Cost-sharing Mechanisms for
Set Cover and Facility Location Games. In
Proceedings of the Fourth Annual ACM Conference on
Electronic Commerce, 2003.
[7] Robert J. Dolan. Incentive Mechanisms for Priority
Queueing Problems. Bell Journal of Economics,
9:421-436, 1978.
[8] Joan Feigenbaum, Christos Papadimitriou, and Scott
Shenker. Sharing the Cost of Multicast Transmissions.
In Proceedings of Thirty-Second Annual ACM
Symposium on Theory of Computing, 2000.
[9] Herbert Hamers, Jeroen Suijs, Stef Tijs, and Peter
Borm. The Split Core for Sequencing Games. Games
and Economic Behavior, 15:165-176, 1996.
[10] John C. Harsanyi. Contributions to Theory of Games
IV, chapter A Bargaining Model for Cooperative
n-person Games. Princeton University Press, 1959.
Editors: A. W. Tucker, R. D. Luce.
[11] Kamal Jain and Vijay Vazirani. Applications of Approximate Algorithms to Cooperative Games. In Proceedings of the 33rd Symposium on Theory of Computing (STOC '01), 2001.
[12] Kamal Jain and Vijay Vazirani. Equitable Cost Allocations via Primal-Dual Type Algorithms. In Proceedings of the 34th Symposium on Theory of Computing (STOC '02), 2002.
[13] Flip Klijn and Estela Sánchez. Sequencing Games without a Completely Specified Initial Order. Report in Statistics and Operations Research, pages 1-17, 2002. Report 02-04.
[14] Flip Klijn and Estela Sánchez. Sequencing Games without Initial Order. Working Paper, Universitat Autónoma de Barcelona, July 2004.
[15] François Maniquet. A Characterization of the Shapley Value in Queueing Problems. Journal of Economic Theory, 109:90-103, 2003.
[16] Debasis Mishra and Bharath Rangarajan. Cost
sharing in a job scheduling problem. Working Paper,
CORE, 2005.
[17] Manipushpak Mitra. Essays on First Best
Implementable Incentive Problems. Ph.D. Thesis,
Indian Statistical Institute, New Delhi, 2000.
[18] Manipushpak Mitra. Mechanism design in queueing
problems. Economic Theory, 17(2):277-305, 2001.
[19] Manipushpak Mitra. Achieving the first best in
sequencing problems. Review of Economic Design,
7:75-91, 2002.
[20] Hervé Moulin. Handbook of Social Choice and Welfare, chapter Axiomatic Cost and Surplus Sharing. North-Holland, 2002. Publishers: Arrow, Sen, Suzumura.
[21] Hervé Moulin. On Scheduling Fees to Prevent Merging, Splitting and Transferring of Jobs. Working Paper, Rice University, 2004.
[22] Hervé Moulin. Split-proof Probabilistic Scheduling. Working Paper, Rice University, 2004.
[23] Hervé Moulin and Rakesh Vohra. Characterization of Additive Cost Sharing Methods. Economic Letters, 80:399-407, 2003.
[24] Martin Pál and Éva Tardos. Group Strategyproof Mechanisms via Primal-Dual Algorithms. In Proceedings of the 44th Annual IEEE Symposium on the Foundations of Computer Science (FOCS '03), 2003.
[25] Lloyd S. Shapley. Contributions to the Theory of Games II, chapter A Value for n-person Games, pages 307-317. Annals of Mathematics Studies, 1953. Editors: H. W. Kuhn, A. W. Tucker.
[26] Wayne E. Smith. Various Optimizers for Single-Stage
Production. Naval Research Logistics Quarterly,
3:59-66, 1956.
[27] Jeroen Suijs. On incentive compatibility and budget
balancedness in public decision making. Economic
Design, 2, 2002.
Keywords: job scheduling; cooperative game theory approach; unit waiting cost; processing time; expected cost bound; agent; monetary transfer; queueing problem; shapley value; job schedule; queue problem; fairness axiom; cost sharing; allocation rule; cost share
train_J-60

On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments

Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (single-parameter domains). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' beliefs. As a case study, we apply our methods to derive a Peer-to-Peer network mechanism for file sharing.

1. INTRODUCTION
Recently, global networks have attracted widespread study.
The emergence of popular scalable shared networks with
self-interested entities - such as peer-to-peer systems over
the Internet and mobile wireless communication ad-hoc
networks - poses fundamental challenges.
Naturally, the study of such giant decentralized systems
involves aspects of game theory [32, 34]. In particular, the
subfield of Mechanism Design deals with the construction
of mechanisms: for a given social goal the challenge is to
design rules for interaction such that selfish behavior of the
agents will result in the desired social goal [23, 33].
Algorithmic Mechanism Design (AMD) focuses on
efficiently computable constructions [32]. Distributed
Algorithmic Mechanism Design (DAMD) studies mechanism design
in inherently decentralized settings [30, 12]. The standard
model assumes rational agents with quasi-linear utilities and
private information, playing dominant strategies.
The solution concept of dominant strategies - in which
each player has a best response strategy regardless of the
strategy played by any other player - is well suited to the
assumption of private information, in which each player is
not assumed to have knowledge or beliefs regarding the other
players. The appropriateness of this set-up stems from the
strength of the solution concept, which complements the
weak information assumption. Many mechanisms have been
constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most
of these apply to severely-restricted cases (e.g., single-item
auctions with no externalities) in which a player"s
preference is described by only one parameter (single-parameter
domains).
To date, Vickrey-Clarke-Groves (VCG) mechanisms are
the only known general method for designing dominant
strategy mechanisms for general domains of preferences.
However, in distributed settings without available subsidies from
outside sources, VCG mechanisms cannot be accepted as
valid solutions due to a serious lack of budget balance.
Additionally, for some domains of preferences, VCG mechanisms
and weighted VCG mechanisms are faced with
computational hardness [22, 20]. Further limitations of the set-up
are discussed in subsection 1.3.
In most distributed environments, players can take
advantage of the network structure to collect and distribute
information about other players. This paper thus studies
the effects of relaxing the private information assumption.
One model that has been extensively studied recently is
the Peer-to-Peer (P2P) network. A P2P network is a
distributed network with no centralized authority, in which the
participants share their individual resources (e.g.,
processing power, storage capacity, bandwidth and content). The
aggregation of such resources provides inexpensive
computational platforms. The most popular P2P networks are
those for sharing media files, such as Napster, Gnutella, and
Kazaa. Recent work on P2P incentives includes
micropayment methods [15] and reputation-based methods [9, 13].
The following description of a P2P network scenario
illustrates the relevance of our relaxed informational assumption.
Example 1. Consider a Peer-to-Peer network for file
sharing. Whenever agent B uploads a file from agent A, all peers
along the routing path know that B has loaded the file. They
can record this information about agent B. In addition, they
can distribute this information.
However, it is impossible to record all the information
everywhere. First, such duplication induces huge costs.
Second, as agents dynamically enter and exit from the network,
the information might not always be available. And so it
seems natural to consider environments in which the
information is locally recorded, that is, the information is recorded
in the closest neighborhood with some probability p.
In this paper we shall see that if the information is
available with some probability, then this enables us to
implement a wider range of social goals. As a result, cooperation
is achieved independent of agents' beliefs. This demonstrates
that in some computational contexts our approach is far less
demanding than the Bayesian approach (that assumes that
players" types are drawn according to some identified
probability density function).
1.1 Implementations in Complete Information
Set-ups
In complete information environments, each agent is
informed about everyone else. That is, each agent observes his
own preference and the preferences of all other agents.
However, no outsider can observe this information. Specifically,
neither the mechanism designer nor the court. Many
positive results were shown for such arguably realistic settings.
For recent surveys see [25, 27, 18].
Moore and Repullo implement a large class of social goals
using sequential mechanisms with a small number of rounds
[28]. The concept they used is subgame-perfect
implementations (SPE).
The SPE-implementability concept seems natural for the
following reasons: the designed mechanisms usually have
non-artificial constructs and a small strategy space. As
a result, it is straightforward for a player to compute his
strategy.1
Second, sequential mechanisms avoid
simultaneous moves, and thus can be considered for distributed
networks. Third, the constructed mechanisms are often
decentralized (i.e., lacking a centralized authority or designer)
1
Interestingly, in real life players do not always use their
subgame perfect strategies. One such widely studied case is the
Ultimatum Bargaining 2-person game. In this simple game,
the proposer first makes an offer of how to divide a certain
known sum of money, and the responder either agrees or
refuses, in the latter case both players earn zero. Somewhat
surprisingly, experiments show that the responder often
rejects the suggested offer, even if it is bounded away from
zero and the game is played only once (see e.g. [38]).
and budget-balanced (i.e., transfers always sum up to zero).
This happens essentially if there are at least three players,
and a direct network link between any two agents. Finally,
Moore and Repullo observed that they actually use a
relaxed complete information assumption: it is only required
that for every player there exists only one other player who
is informed about him.
1.2 Implementations in Partially Informed
Set-ups and Our Results
The complete information assumption is realistic for small
groups of players, but not in general. In this paper we
consider players that are informed about each other with
some probability. More formally, we say that agent B is
p-informed about agent A, if B knows the type of A with
probability p.
For such partially-informed environments, we show how to
use the solution concept of iterative elimination of weakly
dominated strategies. We demonstrate this concept through
some motivating examples that (i) seem natural in distributed
settings and (ii) cannot be implemented in dominant
strategies even if there is an authorized center with a direct
connection to every agent or even if players have single-parameter
domains.
1. We first show how the subgame perfect techniques of
Moore and Repullo [28] can be applied to p-informed
environments and further adjusted to the concept of
iterative elimination of weakly dominated strategies
(for large enough p).
2. We then suggest a certificate based challenging method
that is more natural in computerized p-informed
environments and different from the one introduced by
Moore and Repullo [28] (for p ∈ (0, 1]).
3. We consider implementations in various network
structures.
As a case study we apply our methods to derive: (1) a simplified Peer-to-Peer network for file sharing with no payments in equilibrium (our approach is (agent, file)-specific); and (2) a budget-balanced and economically efficient web-cache mechanism.
Our mechanisms use reasonable punishments that inversely
depend on p. And so, if the fines are large then small p is
enough to induce cooperation. Essentially, large p implies a
large amount of recorded information.
1.2.1 Malicious Agents
Decentralized mechanisms often utilize punishing outcomes.
As a result, malicious players might cause severe harm to
others. We suggest a quantified notion of a malicious player, who benefits both from his own gained surplus and from the harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to that of [7] (and the references therein), who independently considered such players in a different context. We show a simple decentralized mechanism in which q-malicious players cooperate and, in particular, do not use their punishing actions in equilibrium.
1.3 Dominant Strategy Implementations
In this subsection we shall refer to some recent results
demonstrating that the set-up of private information with
the concept of dominant strategies is restrictive in general.
First, Roberts" classical impossibility result shows that if
players" preferences are not restricted and there are at least
3 different outcomes, then every dominant-strategy
mechanism must be weighted VCG (with the social goal that
maximizes the weighted welfare) [35]. For slightly-restricted
preference domains, it is not known how to turn efficiently
computable algorithms into dominant strategy mechanisms.
This was observed and analyzed in [32, 22, 31]. Recently [20]
extends Roberts" result to some leading examples. They
showed that under mild assumptions any dominant
strategy mechanism for variety of Combinatorial Auctions over
multi-dimensional domains must be almost weighted VCG.
Additionally, it turns out that the dominant strategy
requirement implies that the social goal must be monotone
[35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone.²
Several recent papers consider relaxations of the dominant
strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most
of these positive results either apply to severely restricted
cases (e.g., single-parameter, 2 players) or amount to VCG
or almost VCG mechanisms (e.g., [19]). Recently, [8, 3]
considered implementations for generalized single-parameter
players.
Organization of this paper: In section 2 we illustrate
the concepts of subgame perfect and iterative elimination
of weakly dominated strategies in completely-informed and
partially-informed environments. In section 3 we show a
mechanism for Peer-to-Peer file sharing networks. In section
4 we apply our methods to derive a web cache mechanism.
Future work is briefly discussed in section 5.
2. MOTIVATING EXAMPLES
In this section we examine the concepts of subgame perfect
and iterative elimination of weakly dominated strategies for
completely informed and p-informed environments. We also
present the notion of q-maliciousness and some other related
considerations through two illustrative examples.
2.1 The Fair Assignment Problem
Our first example is an adaptation, to a computerized context, of an ancient procedure ensuring that the wealthiest man in Athens would sponsor a theatrical production, known as the Choregia [27]. In the fair assignment problem, Alice and Bob are two workers, and there is a new task to be performed. Their goal is to assign the task to the least loaded worker without any monetary transfers. The informational assumption is that Alice and Bob know both loads and the duration of the new task.³
² E.g., minimizing the makespan within a factor of 2 [32] and Rawls' Rule over some multi-dimensional domains [20].
³ At first glance one might ask why the completely informed agents could not simply sign a contract specifying the desired goal. Such a contract is sometimes infeasible due to the fact that the true state cannot be observed by outsiders, in particular not by the court.
Claim 1. The fair assignment goal cannot be implemented in dominant strategies.⁴
2.1.1 Basic Mechanism
The following simple mechanism implements this goal in
subgame perfect equilibrium.
• Stage 1: Alice either agrees to perform the new task
or refuses.
• Stage 2: If she refuses, Bob has to choose between:
- (a) Performing the task himself.
- (b) Exchanging his load with Alice and
performing the new task as well.
Let L^T_A, L^T_B be the true loads of Alice and Bob, and let t > 0 be the load of the new task. Assume that load exchange takes zero time and cost. We shall see that the basic mechanism achieves the goal in a subgame perfect equilibrium. Intuitively, this means that in equilibrium each player chooses his best action at each point he might reach, assuming similar behavior of others; thus every SPE is a Nash equilibrium.
Claim 2. ([27]) The task is assigned to the least loaded worker in subgame perfect equilibrium.
Proof. By a backward induction argument (look forward and reason backward), consider the following cases:
1. L^T_B ≤ L^T_A: If stage 2 is reached, then Bob will not exchange.
2. L^T_A < L^T_B < L^T_A + t: If stage 2 is reached, Bob will exchange, and this is what Alice prefers.
3. L^T_A + t ≤ L^T_B: If stage 2 is reached, then Bob would exchange; as a result it is strictly preferable for Alice to perform the task.
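To make the backward induction concrete, here is a minimal Python sketch (ours, not part of the paper; the loads and task duration below are illustrative) that replays the three cases of the proof:

def basic_mechanism_spe(load_a, load_b, t):
    """Backward-induction outcome of the two-stage mechanism (Claim 2)."""
    # Stage 2 (reached only if Alice refuses): Bob exchanges loads iff
    # performing the new task with Alice's load is cheaper, i.e. L_A < L_B.
    bob_exchanges = load_a < load_b
    # Stage 1: Alice compares agreeing (cost L_A + t) with refusing
    # (cost L_B if Bob would exchange, L_A otherwise).
    alice_cost_if_refuse = load_b if bob_exchanges else load_a
    if load_a + t <= alice_cost_if_refuse:   # ties broken in favor of Alice
        return "Alice performs the task"
    suffix = " after a load exchange" if bob_exchanges else ""
    return "Bob performs the task" + suffix

# The three cases of the proof, with t = 3:
for la, lb in [(5, 4), (5, 7), (5, 9)]:
    print(f"L_A={la}, L_B={lb}:", basic_mechanism_spe(la, lb, 3))

In every case the agent who ends up performing the task does so while holding the smaller load, matching the claim.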
Note that the basic mechanism does not use monetary
transfers at all and is decentralized in the sense that no third
party is needed to run the procedure. The goal is achieved in
equilibrium (ties are broken in favor of Alice). However, in the second case an exchange does occur at the equilibrium point. Recall the unrealistic assumption that load exchange takes zero time and cost. By introducing fines, the next mechanism overcomes this drawback.
2.1.2 Elicitation Mechanism
In this subsection we shall see a centralized mechanism for
the fair assignment goal without load exchange in
equilibrium. The additional assumptions are as follows. The cost of performing a load of duration d is exactly d. We assume that the duration t of the new task is less than T. The payoffs of the utility-maximizing agents are quasilinear. The following mechanism is an adaptation of Moore and Repullo's elicitation mechanism [28].⁵

⁴ Proof: Assume that there exists a mechanism that implements this goal in dominant strategies. Then by the Revelation Principle [23] there exists a mechanism implementing this goal for which the dominant strategy of each player is to report his true load. Clearly, truthful reporting cannot be a dominant strategy for this goal (if monetary transfers are not available), as players would prefer to report higher loads.
• Stage 1: (Elicitation of Alice's load)
Alice announces L_A.
Bob announces L'_A ≤ L_A.
If L'_A = L_A (Bob agrees), goto the next stage.
Otherwise (Bob challenges), Alice is assigned the task. She then has to choose between:
- (a) Transferring her original load to Bob and paying him L'_A − 0.5 · min{ε, L_A − L'_A}. Alice pays ε to the mechanism. Bob pays the fine of T + ε to the mechanism.
- (b) No load transfer. Alice pays ε to Bob. STOP.
• Stage 2: The elicitation of Bob's load is similar to Stage 1 (switching the roles of Alice and Bob).
• Stage 3: If L_A < L_B, Alice is assigned the task; otherwise Bob. STOP.
Observe that Alice is assigned the task and fined with ε whenever Bob challenges. We shall see that the bonus of ε is paid to a challenging player only in out-of-equilibrium cases.
Claim 3. If the mechanism stops at Stage 3, then the
payoff of each agent is at least −t and at most 0.
Proposition 1. It is a subgame perfect equilibrium of the
elicitation mechanism to report the true load, and to
challenge with the true load only if the other agent overreports.
Proof. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. If Alice truly reports L_A = L^T_A, Bob strictly prefers to agree. Otherwise, if Bob challenges, Alice would always strictly prefer to transfer (as in this case Bob would perform her load for a smaller cost); as a result Bob would pay T + ε to the mechanism. This punishing outcome is less preferable than the normal outcome of Stage 3 achieved had he agreed.
If Alice misreports L_A > L^T_A, then Bob can ensure himself the bonus (which is always strictly preferable to reaching Stage 3) by challenging with L'_A = L^T_A, and whenever Bob gets the bonus, Alice gains the worst of all payoffs. Reporting a lower load L_A < L^T_A is not beneficial for Alice either. In this case, Bob would strictly prefer to agree (and not to announce L'_A < L_A, as he is limited to challenging with a smaller load than what she announces). Thus such misreporting can only increase the possibility that she is assigned the task, so there is no incentive for Alice to do so.
Altogether, Alice would prefer to report the truth in this stage. And so Stage 2 would not abnormally end with STOP, and similarly Stage 1.
Observe that the elicitation mechanism is almost budget-balanced: in all outcomes no money flows in or out, except for the non-equilibrium outcome (a), in which both players pay to the mechanism.
⁵ In [28], if an agent misreports his type, then it is always beneficial for the other agent to challenge, in particular even if the agent reports a lower load.
2.1.3 Elicitation Mechanism for Partially Informed
Agents
In this subsection we consider partially informed agents.
Formally:
Definition 1. An agent A is p-informed about agent B,
if A knows the type of B with probability p (independently
of what B knows).
It turns out that a version of the elicitation mechanism works for this relaxed information assumption, if we use the concept of iterative elimination of weakly dominated strategies.⁶ We replace the fixed fine of ε in the elicitation mechanism with the fine

β_p = max{L, ((1 − p)/(2p − 1)) · T} + ε,

and assume the bounds L^T_A, L^T_B ≤ L.
Proposition 2. If all agents are p-informed, p > 0.5, then the elicitation mechanism(β_p) implements the fair assignment goal under iterative elimination of weakly dominated strategies. The strategy of each player is to report the true load, and to challenge with the true load if the other agent overreports.
Proof. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. First observe that underreporting the true value is a dominated strategy, whether Bob is not informed and mistakenly challenges with a lower load (as β_p ≥ L) or not, and even if t is very small. Now we shall see that overreporting her value is a dominated strategy as well:

Alice's expected payoff gained by misreporting
≤ p · (payoff if she lies and Bob is informed) + (1 − p) · (max payoff if Bob is not informed)
≤ p · (−t − β_p) + (1 − p) · 0
< p · (−t) + (1 − p) · (−t − β_p)
≤ p · (min payoff of a true report if Bob is informed) + (1 − p) · (min payoff if Bob is not informed)
≤ Alice's expected payoff if she truly reports.

The term (−t − β_p) on the left hand side is due to the fact that if Bob is informed, he will always prefer to challenge. On the right hand side, if he is informed then challenging is a dominated strategy, and if he is not informed the worst harm he can do is to challenge. Thus in Stage 2 Alice will report her true load. This implies that challenging without being informed is a dominated strategy for Bob.
The same reasoning applies to the first stage, when Bob reports his value: Bob knows that the maximum payoff he can gain is at most zero, since he cannot expect to get the bonus in the next stage.
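As a numeric sanity check of the proof's central chain of inequalities, the following sketch (ours; the values of L, T, t and ε are illustrative) computes β_p and verifies that overreporting is dominated for several p > 0.5:

EPS = 0.1

def beta(p, L, T):
    """Fine used by the elicitation mechanism for p-informed agents, p > 0.5."""
    return max(L, (1 - p) / (2 * p - 1) * T) + EPS

L, T, t = 10.0, 10.0, 2.0
for p in (0.6, 0.75, 0.9):
    b = beta(p, L, T)
    misreport_bound = p * (-t - b) + (1 - p) * 0.0   # lie; Bob informed w.p. p
    truthful_bound = p * (-t) + (1 - p) * (-t - b)   # truth; worst case if Bob uninformed
    assert misreport_bound < truthful_bound, (p, b)
    print(f"p={p}: beta_p={b:.2f}, lie <= {misreport_bound:.2f} < {truthful_bound:.2f} <= truth")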
2.1.4 Extensions
The elicitation mechanism for partially informed agents is
rather general. As in [28], we need the capability to judge
between two distinct declarations in the elicitation rounds, and upper and lower bounds based on the possible payoffs derived from the last stage. In addition, for p-informed environments, some structure is needed to ensure that underbidding is a dominated strategy.

⁶ A strategy s_i of player i is weakly dominated if there exists s'_i such that (i) the payoff gained by s'_i is at least as high as the payoff gained by s_i, for all strategies of the other players and all preferences, and (ii) there exist a preference and a combination of strategies for the other players such that the payoff gained by s'_i is strictly higher than the payoff gained by s_i.
The Choregia-type mechanisms can be applied to more than 2 players with the same number of stages: the player in the first stage can simply point out the name of the wealthiest agent. Similarly, the elicitation mechanisms can be extended in a straightforward manner. These mechanisms can be made budget-balanced, as some player might take over the role of the designer and collect the fines, as observed in [28].
Open Problem 1. Design a decentralized budget-balanced mechanism with reasonable fines for n independently p-informed players, where p ≤ 1 − (1/2)^(1/(n−1)).
2.2 Seller and Buyer Scenario
A player might cause severe harm to others by choosing a non-equilibrium outcome. In the mechanism for the fair assignment goal, an agent might maliciously challenge even if the other agent truly reports his load. In this subsection we consider such malicious scenarios. For ease of exposition we present a second example. We demonstrate that the equilibria remain unchanged even if players are malicious.
In the seller-buyer example there is one item to be traded and two possible future states. The goal is to sell the item at the average low price p_l = (l_s + l_b)/2 in state L, and at the higher price p_h = (h_s + h_b)/2 in the other state H, where l_s is the seller's cost and l_b is the buyer's value in state L, and similarly h_s, h_b in H. The players fix the prices without knowing what the future state will be. Assume that l_s < h_s < l_b < h_b, and that trade can occur at both prices (that is, p_l, p_h ∈ (h_s, l_b)).
Only the players can observe the realization of the true state. The payoffs are of the form u_b = x · v_b − t_b and u_s = t_s − x · v_s, where the binary variable x indicates whether trade occurred, v_b and v_s denote the buyer's value and the seller's cost in the realized state, and t_b, t_s are the transfers. Consider the following decentralized trade mechanism.
trade mechanism.
• Stage 1: If seller reports H goto Stage 2. Otherwise,
trade at the low price pl. STOP.
• Stage 2: The buyer has to choose between:
- (a) Trade at the high price ph.
- (b) No trade and seller pays ∆ to the buyer.
Claim 4. Let ∆ = lb−ph+ . The unique subgame perfect
equilibrium of the trade mechanism is to report the true state
in Stage 1 and trading if Stage 2 is reached.
Note that the outcome (b) is never chosen in equilibrium.
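A small sketch (ours; the hypothetical valuations satisfy l_s < h_s < l_b < h_b, with both fixed prices inside (h_s, l_b)) traces the equilibrium play of the trade mechanism in each state:

def trade_mechanism_spe(state, ls, hs, lb, hb, eps=0.01):
    """Equilibrium play: the seller reports the true state; in state H the
    buyer trades at p_h rather than take the punishing option (b)."""
    pl, ph = (ls + lb) / 2.0, (hs + hb) / 2.0   # the two fixed prices
    delta = lb - ph + eps                        # fine paid by the seller in (b)
    if state == "L":
        return ("trade at p_l", pl)              # seller reports L; STOP
    # State H: the buyer prefers to trade, since h_b - p_h > delta.
    return ("trade at p_h", ph) if hb - ph > delta else ("no trade", -delta)

print(trade_mechanism_spe("L", ls=3, hs=4, lb=6, hb=7))   # ('trade at p_l', 4.5)
print(trade_mechanism_spe("H", ls=3, hs=4, lb=6, hb=7))   # ('trade at p_h', 5.5)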
2.2.1 Trade Mechanism for Malicious Agents
The buyer might maliciously punish the seller by choosing outcome (b) when the true state is H. The following notion quantifies the consideration that a player is not indifferent to the private surpluses of others.
Definition 2. A player is q-malicious if his payoff equals (1 − q) · (his own surplus) − q · (the sum of the other players' surpluses), for q ∈ [0, 1].
This definition appeared independently in [7] in a different context. We shall see that the traders avoid such bad behavior if they are q-malicious with q < 0.5, that is, if the impact of their non-indifference is bounded by 0.5. Equilibrium outcomes remain unchanged, and so cooperation is achieved as in the original case of non-malicious players. Consider the trade mechanism with p_l = (1 − q) · h_s + q · l_b, p_h = q · h_s + (1 − q) · l_b, and Δ = (1 − q) · (h_b − l_b − ε). Note that p_l < p_h for q < 0.5.
Claim 5. If q < 0.5, then the unique subgame perfect
equilibrium for q-malicious players remains unchanged.
Proof. By backward induction we consider two cases. In state H, the q-malicious buyer prefers to trade if (1 − q)(h_b − p_h) + q(h_s − p_h) > (1 − q)Δ + qΔ; indeed, (1 − q)h_b + q·h_s > Δ + p_h. Trivially, the seller prefers to trade at the higher price, as (1 − q)(p_l − h_s) + q(p_l − h_b) < (1 − q)(p_h − h_s) + q(p_h − h_b).
In state L the buyer prefers the no-trade outcome, as (1 − q)(l_b − p_h) + q(l_s − p_h) < Δ. The seller prefers to trade at the low price, as (1 − q)(p_l − l_s) + q(p_l − l_b) > 0 > −Δ.
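The four inequalities in the proof can be checked numerically; the sketch below (ours, with illustrative valuations and ε) does so for several q < 0.5:

ls, hs, lb, hb, eps = 3.0, 4.0, 6.0, 7.0, 0.01

for q in (0.0, 0.2, 0.4):
    pl = (1 - q) * hs + q * lb
    ph = q * hs + (1 - q) * lb
    delta = (1 - q) * (hb - lb - eps)
    assert pl < ph
    assert (1 - q) * (hb - ph) + q * (hs - ph) > delta        # H: buyer trades
    assert (1 - q) * (lb - ph) + q * (ls - ph) < delta        # L: buyer takes (b)
    assert (1 - q) * (pl - ls) + q * (pl - lb) > 0 > -delta   # L: seller trades at p_l
    print(f"q={q}: p_l={pl:.2f}, p_h={ph:.2f}, Delta={delta:.2f} -- all checks pass")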
2.2.2 Discussion
No mechanism can Nash-implement this trading goal if the only possible outcomes are trade at p_l and trade at p_h. To see this, it is enough to consider normal forms (as any extensive-form mechanism can be presented as a normal-form one). Consider a matrix representation in which the seller is the row player and the buyer is the column player, and every entry includes an outcome. Suppose there is an equilibrium entry for state L. Its column must be all p_l, otherwise the seller would have an incentive to deviate. Similarly, the row of the H equilibrium entry must be all p_h (otherwise the buyer would deviate): a contradiction.⁷ ⁸
The buyer prefers p_l and the seller p_h, and so the preferences are identical in both states. Hence reporting preferences over outcomes is not enough: players must supply additional information. This is captured by outcome (b) in the trade mechanism.
Intuitively, if a goal is not Nash-implementable we need to add more outcomes. The drawback is that some new additional equilibria must be ruled out. E.g., an additional Nash equilibrium of the trade mechanism is (trade at p_l, (b)): the seller chooses to trade at the low price in either state, and the buyer always chooses the no-trade option that fines the seller if the second stage is reached. Such a threat by the buyer is not credible: if the mechanism is played only once and Stage 2 is reached in state H, the buyer would strictly decrease his payoff by choosing (b). Clearly, this is not a subgame perfect equilibrium. Although every extensive game-form is strategically equivalent to a normal-form one, the extensive-form representation places more structure, and so it seems plausible that the subgame perfect equilibrium will be played.⁹
⁷ Formally, this goal is not Maskin monotonic, a necessary condition for Nash-implementability [24].
⁸ A similar argument applies to the Fair Assignment Problem.
⁹ Interestingly, it is straightforward to construct a sequential mechanism with a unique SPE and an additional NE with a strictly larger payoff for every player.
3. PEER-TO-PEER NETWORKS
In this section we describe a simplified Peer-to-Peer network for file sharing, without payments in equilibrium, using a certificate-based challenging method. In this challenging method, as opposed to [28], an agent that challenges cannot harm other agents unless he provides a valid certificate.
In general, if agent B copied a file f from agent A, then agent A knows that agent B holds a copy of the file. We denote such information as a certificate(B, f) (we omit the cryptographic details). Such a certificate can be recorded and distributed along the network, and so we can treat each agent holding the certificate as an informed agent.
Assumptions: We assume a homogeneous system with files of equal size. The benefit each agent gains by holding a copy of any file is V. The only cost each agent has is the uploading cost C (incurred while transferring a file to an immediate neighbor). All other costs are negligible (e.g., storing the certificates, forwarding messages, providing acknowledgements, digital signatures, etc.). Let up_A, down_A be the numbers of uploads and downloads agent A performs if he always cooperates. We assume that each agent A enters the system only if up_A · C < down_A · V.
Each agent has a quasilinear utility and only cares about his current bandwidth usage. In particular, he ignores future scenarios (e.g., whether forwarding or dropping a packet might affect future demand).
3.1 Basic Mechanism
We start with a mechanism for a network with three p-informed agents: B, A_1, A_2. We assume that B is directly connected to A_1 and A_2.
If B has the certificate(A_1, f), then he can apply directly to A_1 and request the file (if A_1 refuses, B can go to court). The following basic sequential mechanism is applicable whenever agent B is not informed and still would like to download the file if it exists in the network. Note that this goal cannot be implemented in dominant strategies without payments (similarly to Claim 1, where the type of each agent here is the set of files he holds). Define t_{A,B} to be the monetary amount that agent A should transfer to B.
• Stage 1: Agent B requests the file f from A_1.
- If A_1 replies yes, then B downloads the file from A_1. STOP.
- Otherwise, agent B sends A_1's no reply to agent A_2.
∗ If A_2 declares agree, then goto the next stage.
∗ Else, A_2 sends a certificate(A_1, f) to agent B.
· If the certificate is correct, then t_{A_1,A_2} = β_p. STOP.
· Else t_{A_2,A_1} = |C| + ε. STOP.
• Stage 2: Agent B requests the file f from A_2, switching the roles of the agents A_1, A_2.
Claim 6. The basic mechanism is budget-balanced
(transfers always sum to zero) and decentralized.
Theorem 1. Let β_p = |C|/p + ε, p ∈ (0, 1]. A strategy that survives iterative elimination of weakly dominated strategies is to reply yes if A_i holds the file, and to challenge only with a valid certificate. As a result, in equilibrium B downloads the file if some agent holds it, and there are no payments or transfers.
Proof. Clearly, if the mechanism ends without challenging, then −C ≤ u(A_i) ≤ 0. And so, challenging with an invalid certificate is always a dominated strategy. Now, when Stage 2 is reached, A_2 is the last to report whether he has the file. If A_2 has the file, misreporting is a weakly dominated strategy, whether A_1 is informed or not:
A_2's expected payoff gained by misreporting no ≤ p · (−β_p) + (1 − p) · 0 < −C ≤ A_2's payoff if she reports yes.
The same reasoning applies to Stage 1, when A_1 reports whether he has the file: A_1 knows that A_2 will report yes in the next stage if and only if she has the file, and so the maximum payoff he can gain is at most zero, since he cannot expect to get a bonus.
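The key quantitative step is that the fine β_p = |C|/p + ε makes a false no reply unprofitable in expectation. A small numeric check (ours; C, ε and the sampled values of p are illustrative):

C, eps = 2.0, 0.1
for p in (0.1, 0.5, 1.0):
    beta_p = C / p + eps
    expected = p * (-beta_p)        # caught with probability p; payoff 0 otherwise
    assert expected < -C            # strictly worse than honestly uploading at cost C
    print(f"p={p}: beta_p={beta_p:.2f}, E[misreport]={expected:.2f} < {-C}")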
3.2 Chain Networks
In a chain network, agent B is directly connected to A_1, and each agent A_i is directly connected to A_{i+1}. Assume that we have an acknowledgment protocol to confirm the receipt of a particular message. To avoid message dropping, we add a fine of (β_p + 2ε) to be paid by any agent who has not properly forwarded a message. The chain mechanism follows:
• Stage i: Agent B forwards a request for the file f to A_i (through {A_k}_{k≤i}).
- If A_i reports yes, then B downloads f from A_i. STOP.
- Otherwise, A_i reports no. If some A_j sends a certificate(A_k, f) to B (j, k ≤ i), then:
· If certificate(A_k, f) is correct, then t_{A_k,A_j} = β_p. STOP.
· Else, t_{A_j,A_k} = C + ε. STOP.
If A_i reports that he has no copy of the file, then any agent in between might challenge. Using digital signatures and acknowledgements, observe that every agent must forward each message, even one containing a certificate showing that he himself has misreported.
We use the same fine, β_p, as in the basic mechanism, because the protocol might end at Stage 1 (clearly, the former analysis still applies, since the effective p increases with the number of players).
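The following toy simulation (ours; the data structures, agent indices and parameter values are hypothetical) traces the chain mechanism's punishing branch, which in equilibrium is never reached because no file holder misreports:

def chain_mechanism(holders, liars, certificates, n, beta_p):
    """holders: indices of agents with a copy of f; liars: holders replying
    'no'; certificates: dict agent j -> tuple of k's for which agent j has a
    recorded certificate(A_k, f)."""
    for i in range(1, n + 1):                 # Stage i: B's request reaches A_i
        if i in holders and i not in liars:
            return f"B downloads f from A_{i}"
        # A_i replied 'no'; any A_j (j <= i) holding a certificate(A_k, f)
        # with k <= i can challenge and collect the fine beta_p from A_k.
        for j in range(1, i + 1):
            for k in certificates.get(j, ()):
                if k <= i and k in holders:
                    return f"A_{k} fined beta_p={beta_p}, paid to A_{j}"
    return "no copy found; no transfers"

# Off-equilibrium trace: A_2 holds f but lies; A_1 recorded certificate(A_2, f).
print(chain_mechanism(holders={2}, liars={2}, certificates={1: (2,)},
                      n=4, beta_p=20.1))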
3.3 Network Mechanism
In this subsection we consider general network structures.
We need the assumption that there is a ping protocol that checks whether a neighboring agent is on-line (that is, an on-line agent cannot hide himself). To limit the amount of information to be recorded, we assume that an agent is committed to keeping any downloaded file for at least one hour, and so certificates are valid for a limited amount of time. We
assume that each agent has a digitally signed listing of his
current immediate neighbors. As in real P2P file sharing
applications, we restrict each request for a file to be forwarded
at most r times (that is, downloads are possible only inside
a neighborhood of radius r).
The network mechanism utilizes the chain mechanism in the following way: when agent B requests a file from agent A (at most r − 1 hops away), A sends B the list of his neighbors and the output of the ping protocol applied to all of these neighbors. As a result, B can explore the network.
Remark: In this mechanism we assumed that the environment is p-informed. An important design issue that is not addressed here is the incentives for the information-propagation phase.
4. WEB CACHE
Web caches are a widely used tool for improving overall system efficiency by allowing fast local access. They were listed in [12] as a challenging application of Distributed Algorithmic Mechanism Design.
Nisan [30] considered a single cache shared by strategic agents. In this problem, agent i gains the value v^T_i if a particular item is loaded into the local shared cache. The efficient goal is to load the item if and only if Σ_i v^T_i ≥ C, where C is the loading cost. This goal reduces to the public project problem analyzed by Clarke [10]. However, it is well known that this mechanism is not budget-balanced (e.g., if the valuation of each player is C, then everyone pays zero).
In this section we suggest informational and environmental assumptions under which we describe a decentralized budget-balanced efficient mechanism. We consider environments in which the future demand of each agent depends on past demand. The underlying informational and environmental requirements are as follows.
1. An agent can read the content of a message only if he
is the target node (even if he has to forward the
message as an intermediate node of some routing path).
An agent cannot initiate a message on behalf of other
agents.
2. An acknowledgement protocol is available, so that
every agent can provide a certificate indicating that he
handled a certain message properly.
3. Negligible costs: we assume p-informed agents, where p is such that the agent's induced cost for keeping records of information is negligible. We also assume that the cost incurred by sending and forwarding messages is negligible.
4. Let q_i(t) denote the number of loading requests agent i initiated for the item during time slot t. We assume that v^T_i(t), the value for caching the item at the beginning of slot t, depends only on the most recent slot; formally, v^T_i(t) = max{V_i(q_i(t − 1)), C}, where V_i(·) is a non-decreasing real function. In addition, V_i(·) is common knowledge among the players.
5. The network is homogeneous in the sense that if agent j happens to handle k requests initiated by agent i during time slot t, then q_i(t) = k·α, where α depends on the routing protocol and the environment (α might be smaller than 1 if each request is flooded several times). We assume that the only way agent i can affect the true q_i(t) is by superficially increasing his demand for the cached item, but not the other way around (that is, the loss an agent incurs by giving up a necessary request for the item is not negligible).
The first requirement is to avoid free riding, and also to avoid the case that an agent superficially increases the demand of others and as a result decreases his own demand. The second requirement is to avoid the case that an agent who gets a routing request for the item records it and then drops it. The third is to ensure that the environment stays well informed. In addition, if the forwarding cost is negligible, each agent cooperates and forwards messages, as he would not like to decrease the future demand (which monotonically depends on the current time slot, as assumed in the fourth requirement) of some other agent. Given that the payments increase with the declared values, the fourth and fifth requirements ensure that an agent will not superficially increase his demand, and so q_i(t) is the true demand.
The following Web-Cache Mechanism implements the efficient goal and shares the cost proportionally. For simplicity it is described for two players, and w.l.o.g. v^T_i(t) equals the number of requests initiated by i and observed by any informed j (that is, α = 1 and V_i(q_i(t − 1)) = q_i(t − 1)).
• Stage 1: (Elicitation of v^T_A(t))
Alice announces v_A.
Bob announces v'_A ≥ v_A. If v'_A = v_A, goto the next stage. Otherwise (Bob challenges):
- If Bob provides v'_A valid records, then Alice pays C to finance the loading of the item into the cache. She also pays β_p to Bob. STOP.
- Otherwise, Bob finances the loading of the item into the cache. STOP.
• Stage 2: The elicitation of v^T_B(t) is done analogously.
• Stage 3: If v_A + v_B < C, then STOP. Otherwise, load the item into the cache; Alice pays p_A = (v_A/(v_A + v_B)) · C, and Bob pays p_B = (v_B/(v_A + v_B)) · C.
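A sketch (ours, with hypothetical reported values) of the Stage 3 outcome, assuming both elicitation stages passed without a challenge; note that the two cost shares always sum to exactly C, so the mechanism is budget-balanced on this path:

def web_cache_outcome(v_a, v_b, C):
    """Stage 3 of the mechanism: proportional cost sharing if the item loads."""
    if v_a + v_b < C:
        return "item not loaded", 0.0, 0.0
    p_a = v_a / (v_a + v_b) * C
    p_b = v_b / (v_a + v_b) * C
    return "item loaded", p_a, p_b

print(web_cache_outcome(v_a=3.0, v_b=1.0, C=2.0))   # ('item loaded', 1.5, 0.5)
print(web_cache_outcome(v_a=0.5, v_b=1.0, C=2.0))   # ('item not loaded', 0.0, 0.0)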
Claim 7. It is a dominated strategy to overreport the true value.
Proof. Let v^T_A < v_A. There are two cases to consider:
• If v^T_A + v_B < C and v_A + v_B ≥ C: we need to show that if the mechanism stops normally, Alice pays more than v^T_A, that is, (v_A/(v_A + v_B)) · C > v^T_A. Indeed, v_A · C > v_A · (v^T_A + v_B) > v^T_A · (v_A + v_B).
• If v^T_A + v_B ≥ C, then clearly v_A/(v_A + v_B) > v^T_A/(v^T_A + v_B).
Theorem 2. Let β_p = max{0, ((1 − 2p)/p) · C} + ε, p ∈ (0, 1]. A strategy that survives iterative elimination of weakly dominated strategies is to report the truth and to challenge only when informed. The mechanism is efficient, budget-balanced, and exhibits consumer sovereignty, no positive transfers, and individual rationality.¹⁰
Proof. Challenging without being informed (that is, without providing enough valid records) is always a dominated strategy in this mechanism. Now assume w.l.o.g. that Alice is the last to report her value. Alice's expected payoff gained by underreporting ≤ p · (−C − β_p) + (1 − p) · C < p · 0 + (1 − p) · 0 ≤ Alice's expected payoff if she honestly reports. The right hand side equals zero, as the participation costs are negligible. Reasoning backward, Bob cannot expect to get the bonus, and so misreporting is a dominated strategy for him.
¹⁰ See [29] or [12] for exact definitions.
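A quick numeric check (ours; the values of C, ε and p are illustrative) of the underreporting inequality used in the proof:

eps = 0.1
for p, C in [(0.2, 4.0), (0.5, 4.0), (0.9, 4.0)]:
    beta_p = max(0.0, (1 - 2 * p) / p * C) + eps
    payoff = p * (-C - beta_p) + (1 - p) * C   # challenged w.p. p; item loads otherwise
    assert payoff < 0                           # worse than truthful reporting (payoff 0)
    print(f"p={p}: beta_p={beta_p:.2f}, E[underreport]={payoff:.2f} < 0")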
5. CONCLUDING REMARKS
In this paper we have seen a new partial informational
assumption, and we have demonstrated its suitability to
networks in which computational agents can easily collect
and distribute information. We then described some
mechanisms using the concept of iterative elimination of weakly
dominated strategies. Some issues for future work include:
• As we have seen, the implementation issue in p-informed environments is straightforward: it is easy to construct incentive compatible mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable.
• Mechanisms for information propagation and maintenance. In our examples we chose p such that the maintenance cost over time is negligible. However, the dynamics of the general case are delicate: an agent can use the recorded information to eliminate data that is not likely to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P file-sharing networks).
• It seems that some social choice goals cannot be
implemented if each player is at least 1/n-malicious (where
n is the number of players). It would be interesting to
identify these cases.
Acknowledgements
We thank Meitav Ackerman, Moshe Babaioff, Liad Blumrosen, Michal Feldman, Daniel Lehmann, Noam Nisan, Motty Perry and Eyal Winter for helpful discussions.
6. REFERENCES
[1] A. Archer and E. Tardos. Truthful mechanisms for
one-parameter agents. In IEEE Symposium on
Foundations of Computer Science, pages 482-491,
2001.
[2] Aaron Archer, Christos Papadimitriou, Kunal Talwar,
and Eva Tardos. An approximate truthful mechanism
for combinatorial auctions with single parameter
agent. In SODA, 2003.
[3] Moshe Babaioff, Ron Lavi, and Elan Pavlov.
Single-parameter domains and implementation in
undominated strategies, 2004. Working paper.
[4] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive
compatible multi-unit combinatorial auctions, 2003.
TARK-03.
[5] Sushil Bikhchandani, Shurojit Chatterji, and Arunava
Sen. Incentive compatibility in multi-unit auctions,
2003. Working paper.
[6] Liad Blumrosen, Noam Nisan, and Ilya Segal.
Auctions with severely bounded communication, 2004.
Working paper.
[7] F. Brandt, T. Sandholm, and Y. Shoham. Spiteful
bidding in sealed-bid auctions, 2005.
[8] Patrick Briest, Piotr Krysta, and Berthold Voecking.
Approximation techniques for utilitarian mechanism
design. In STOC, 2005.
[9] Chiranjeeb Buragohain, Divy Agrawal, and Subhash
Suri. A game-theoretic framework for incentives in
p2p systems. In IEEE P2P, 2003.
[10] E. H. Clarke. Multipart pricing of public goods. Public
Choice, 11:17-33, 1971.
[11] Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63(1), 2001.
[12] Joan Feigenbaum and Scott Shenker. Distributed
algorithmic mechanism design: Recent results and
future directions. In Proceedings of the 6th
International Workshop on Discrete Algorithms and
Methods for Mobile Computing and Communications,
pages 1-13. ACM Press, New York, 2002.
[13] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust
incentive techniques for peer-to-peer networks. In EC,
2004.
[14] A. Goldberg, J. Hartline, A. Karlin, and A. Wright.
Competitive auctions, 2004. Working paper.
[15] Philippe Golle, Kevin Leyton-Brown, Ilya Mironov,
and Mark Lillibridge. Incentives for sharing in
peer-to-peer networks. In EC, 2001.
[16] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and
Moshe Tennenholtz. Bundling equilibrium in
combinatorial auctions. Games and Economic
Behavior, 47:104-123, 2004.
[17] Ron Holzman and Dov Monderer. Characterization of ex post equilibrium in the VCG combinatorial auctions. Games and Economic Behavior, 47:87-103, 2004.
[18] Matthew O. Jackson. A crash course in implementation theory, 1997. Mimeo, California Institute of Technology.
[19] A. Kothari, D. Parkes, and S. Suri.
Approximately-strategyproof and tractable multi-unit
auctions. In EC, 2003.
[20] Ron Lavi, Ahuva Mu'alem, and Noam Nisan. Towards
a characterization of truthful combinatorial auctions.
In FOCS, 2003.
[21] Ron Lavi and Noam Nisan. Online ascending auctions
for gradually expiring goods. In SODA, 2005.
[22] Daniel Lehmann, Liadan O'Callaghan, and Yoav
Shoham. Truth revelation in approximately efficient
combinatorial auctions. Journal of the ACM,
49(5):577-602, 2002.
[23] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[24] Eric Maskin. Nash equilibrium and welfare optimality.
Review of Economic Studies, 66:23-38, 1999.
[25] Eric Maskin and Tomas Sjöström. Implementation
theory, 2002.
[26] Aranyak Mehta and Vijay Vazirani. Randomized
truthful auctions of digital goods are randomizations
over truthful auctions. In EC, 2004.
[27] John Moore. Implementation, contract and
renegotiation in environments with complete
information, 1992.
[28] John Moore and Rafael Repullo. Subgame perfect
implementation. Econometrica, 56(5):1191-1220, 1988.
[29] H. Moulin and S. Shenker. Strategyproof sharing of
submodular costs: Budget balance versus efficiency.
Economic Theory, 18(3):511-533, 2001.
[30] Noam Nisan. Algorithms for selfish agents. In STACS,
1999.
[31] Noam Nisan and Amir Ronen. Computationally feasible VCG mechanisms. In EC, 2000.
[32] Noam Nisan and Amir Ronen. Algorithmic mechanism
design. Games and Economic Behavior, 35:166-196,
2001.
[33] M. J. Osborne and A. Rubinstein. A Course in Game
Theory. MIT press, 1994.
[34] Christos H. Papadimitriou. Algorithms, games, and
the internet. In STOC, 2001.
[35] Kevin Roberts. The characterization of implementable
choice rules. In Jean-Jacques Laffont, editor,
Aggregation and Revelation of Preferences. Papers
presented at the 1st European Summer Workshop of
the Econometric Society, pages 321-349.
North-Holland, 1979.
[36] Irit Rozenshtrom. Dominant strategy implementation
with quasi-linear preferences, 1999. Master's thesis,
Dept. of Economics, The Hebrew University,
Jerusalem, Israel.
[37] Rakesh Vohra and Rudolf Muller. On dominant
strategy mechanisms, 2003. Working paper.
[38] Shmuel Zamir. Rationality and emotions in ultimatum
bargaining. Annales d'Économie et de Statistique,
61, 2001.
| iterative elimination of weakly dominated strategy;cooperation;distribute algorithmic mechanism design;p-informed environment;partially informed environment;weakly dominated strategy iterative elimination;agent;dominant strategy implementation;decentralized incentive compatible mechanism;distributed algorithmic mechanism design;peer-to-peer;vickrey-clarke-grove;distributed environment;computational entity
train_J-61 | ICE: An Iterative Combinatorial Exchange | We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealed-preference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase. | 1. INTRODUCTION
Combinatorial exchanges combine and generalize two
different mechanisms: double auctions and combinatorial
auctions. In a double auction (DA), multiple buyers and sellers
trade units of an identical good [20]. In a combinatorial
auction (CA), a single seller has multiple heterogeneous items
up for sale [11]. Buyers may have complementarities or
substitutabilities between goods, and are provided with an
expressive bidding language. A common goal in both market
designs is to determine the efficient allocation, which is the
allocation that maximizes total value.
A combinatorial exchange (CE) [24] is a combinatorial
double auction that brings together multiple buyers and
sellers to trade multiple heterogeneous goods. For example, in
an exchange for wireless spectrum, a bidder may declare that
she is willing to pay $1 million for a trade where she obtains
licenses for New York City, Boston, and Philadelphia, and
loses her license for Washington DC. Thus, unlike a DA, a
CE allows all participants to express complex valuations via
expressive bids. Unlike a CA, a CE allows for fragmented
ownership, with multiple buyers and sellers and agents that
are both buying and selling.
CEs have received recent attention both in the context of
wireless spectrum allocation [18] and for airport takeoff and
landing slot allocation [3]. In both of these domains there
are incumbents with property rights, and it is important
to facilitate a complex multi-way reallocation of resources.
Another potential application domain for CEs is to resource
allocation in shared distributed systems, such as PlanetLab
[13]. The instantiation of our general purpose design to
specific domains is a compelling next step in our research.
This paper presents the first design for an iterative
combinatorial exchange (ICE). The genesis of this project was a
class, CS 286r Topics at the Interface between Economics
and Computer Science, taught at Harvard University in
Spring 2004.¹
The entire class was dedicated to the design
and prototyping of an iterative CE.
The ICE design problem is multi-faceted and quite hard.
The main innovation in our design is an expressive yet
concise tree-based bidding language (which generalizes known
languages such as XOR/OR [23]), and the tight coupling
of this language with efficient algorithms for price-feedback
to guide bidding, winner-determination to determine trades,
and revealed-preference activity rules to ensure progress
across rounds. The exchange is iterative: bidders express
upper and lower valuations on trades by annotating their
bid-tree, and then tighten these bounds in response to price
feedback in each round. The Threshold payment rule,
introduced by Parkes et al. [24], is used to determine final
payments.
The exchange has a number of interesting theoretical
properties. For instance, when there exist linear prices we
establish soundness and completeness: for straightforward
bidders that adjust their bounds to meet activity rules while
keeping their true value within the bounds, the exchange
will terminate with the efficient allocation. In addition, the
¹ http://www.eecs.harvard.edu/~parkes/cs286r/ice.html
[Figure 1: ICE System Flow of Control. The figure depicts the round structure: agents' bid bounds pass through the activity rule, winner determination (WD), and the pricing stages (accuracy, fairness, balance), followed by the closing rule and the Vickrey/Threshold payment stage.]
efficient allocation can often be determined without bidders
revealing, or even knowing, their exact value for all trades.
This is essential in complex domains where the valuation
problem can itself be very challenging for a participant [28].
While we cannot claim that straightforward bidding is an
equilibrium of the exchange (and indeed should not expect it to be, by the Myerson-Satterthwaite impossibility theorem [22]),
the Threshold payment rule minimizes the ex post incentive
to manipulate across all budget-balanced payment rules.
The exchange is implemented in Java and is currently in
validation. In describing the exchange we will first provide
an overview of the main components and introduce several
working examples. Then, we introduce the basic
components for a simple one-shot variation in which bidders state
their exact values for trades in a single round. We then
describe the full iterative exchange, with upper and lower
values, price-feedback, activity rules, and termination
conditions. We state some theoretical properties of the exchange,
and end with a discussion to motivate our main design
decisions, and suggest some next steps.
2. AN OVERVIEW OF THE ICE DESIGN
The design has four main components, which we will
introduce in order through the rest of the paper:
• Expressive and concise tree-based bidding language.
The language describes values for trades, such as my value
for selling AB and buying C is $100, or my value for selling
ABC is -$50, with negative values indicating that a bidder
must receive a payment for the trade to be acceptable. The
language allows bidders to express upper and lower bounds
on value, which can be tightened across rounds.
• Winner Determination. Winner-determination (WD)
is formulated as a mixed-integer program (MIP), with the
structure of the bid-trees captured explicitly in the
formulation. Comparing the solution at upper and lower values
allows for a determination to be made about termination,
with progress in intermediate rounds driven by an
intermediate valuation and the lower values adopted on termination.
• Payments. Payments are computed using the Threshold
payment rule [24], with the intermediate valuations adopted
in early rounds and lower values adopted on termination.
• Price feedback. An approximate price is computed
for each item in the exchange in each round, in terms of
the intermediate valuations and the provisional trade. The
prices are optimized to approximate competitive equilibrium
prices, and further optimized to best approximate the
current Threshold payments with remaining ties broken to favor
prices that are balanced across different items. In computing
the prices, we adopt the methods of constraint-generation to
exploit the structure of the bidding language and avoid
enumerating all feasible trades. The subproblem to generate
new constraints is a variation of the WD problem.
• Activity rule. A revealed-preference activity rule [1]
ensures progress across rounds. In order to remain active, a
bidder must tighten bounds so that there is enough
information to define a trade that maximizes surplus at the current
prices. Another variation on the WD problem is formulated,
both to verify that the activity rule is met and also to
provide feedback to a bidder to explain how to meet the rule.
An outline of the ICE system flow of control is provided
in Figure 1. We will return to this example later in the
paper. For now, just observe in this two-agent example that
the agents state lower and upper bounds that are checked in
the activity rule, and then passed to winner-determination
(WD), and then through three stages of pricing (accuracy,
fairness, balance). On passing the closing rule (in which the parameters α^eff and α^thresh are checked for convergence of the trade and payments), the exchange goes to a last-and-final round. At the end of this round, the trade and payments are finally determined, based on the lower valuations.
2.1 Related Work
Many ascending-price one-sided CAs are known in the
literature [10, 25, 29]. Direct elicitation approaches have also
been proposed for one-sided CAs in which agents respond to
explicit queries about their valuations [8, 14, 19]. A number
of ascending CAs are designed to work with simple prices
on items [12, 17]. The price generation methods that we use
in ICE generalize the methods in these earlier papers.
Parkes et al. [24] studied sealed-bid combinatorial
exchanges and introduced the Threshold payment rule.
Subsequently, Krych [16] demonstrated experimentally that the
Threshold rule promotes efficient allocations. We are not
aware of any previous studies of iterative CEs. Dominant
strategy DAs are known for unit demand [20] and also for
single-minded agents [2]. No dominant strategy mechanisms
are known for the general CE problem.
ICE is a hybrid auction design, in that it couples
simple item prices to drive bidding in early rounds with
combinatorial WD and payments, a feature it shares with the
clock-proxy design of Ausubel et al. [1] for one-sided CAs.
We adopt a variation on the clock-proxy auction's revealed-preference activity rule.
The bidding language shares some structural elements
with the LGB language of Boutilier and Hoos [7], but has
very different semantics. Rothkopf et al. [27] also describe a
restricted tree-based bidding language. In LGB, the
semantics are those of propositional logic, with the same items
in an allocation able to satisfy a tree in multiple places.
Although this can make LGB especially concise in some
settings, the semantics that we propose appear to provide
useful locality, so that the value of one component in a tree
can be understood independently from the rest of the tree.
The idea of capturing the structure of our bidding language
explicitly within a mixed-integer programming formulation
follows the developments in Boutilier [6].
3. PRELIMINARIES
In our model, we consider a set of goods, indexed {1, . . . , m}, and a set of bidders, indexed {1, . . . , n}. The initial allocation of goods is denoted x^0 = (x^0_1, . . . , x^0_n), with x^0_i = (x^0_{i1}, . . . , x^0_{im}) and x^0_{ij} ≥ 0 indicating the number of units of good j held by bidder i. A trade λ = (λ_1, . . . , λ_n) denotes the change in allocation, with λ_i = (λ_{i1}, . . . , λ_{im}), where λ_{ij} ∈ ℤ is the change in the number of units of item j to bidder i. So, the final allocation is x^1 = x^0 + λ.
Each bidder has a value v_i(λ_i) ∈ ℝ for a trade λ_i. This value can be positive or negative, and represents the change in value between the final allocation x^0_i + λ_i and the initial allocation x^0_i. Utility is quasilinear, with u_i(λ_i, p) = v_i(λ_i) − p for trade λ_i and payment p ∈ ℝ. Price p can be negative, indicating that the bidder receives a payment for the trade. We use the term payoff interchangeably with utility.
Our goal in the ICE design is to implement the efficient trade. The efficient trade, λ*, maximizes the total increase in value across bidders.

Definition 1 (Efficient trade). The efficient trade λ* solves

max_{(λ_1,...,λ_n)} Σ_i v_i(λ_i)
s.t. λ_{ij} + x^0_{ij} ≥ 0, ∀i, ∀j (1)
Σ_i λ_{ij} ≤ 0, ∀j (2)
λ_{ij} ∈ ℤ (3)

Constraints (1) ensure that no agent sells more items than it has in its initial allocation. Constraints (2) provide free disposal, and allow feasible trades to sell more items than are purchased (but not vice versa).
Later, we adopt Feas(x^0) to denote the set of feasible trades, given these constraints and an initial allocation x^0 = (x^0_1, . . . , x^0_n).
3.1 Working Examples
In this section, we provide three simple examples of
instances that we will use to illustrate various components of
the exchange. All three examples have only one seller, but
this is purely illustrative.
Example 1. One seller and one buyer, two goods {A, B},
with the seller having an initial allocation of AB. Changes
in values for trades:
seller: AND(−A, −B), value −10; buyer: AND(+A, +B), value +20.
The AND indicates that both the buyer and the seller
are only interested in trading both goods as a bundle. Here,
the efficient (value-maximizing) trade is for the seller to sell
AB to the buyer, denoted λ∗
= ([−1, −1], [+1, +1]).
Example 2. One seller and four buyers, four goods {A, B,
C, D}, with the seller having an initial allocation of ABCD.
Changes in values for trades:
seller: OR(−A, −B, −C, −D), value 0; buyer 1: AND(+A, +B), value +6; buyer 2: XOR(+A, +B), value +4; buyer 3: AND(+C, +D), value +3; buyer 4: XOR(+C, +D), value +2.
The OR indicates that the seller is willing to sell any
number of goods. The XOR indicates that buyers 2 and
4 are willing to buy at most one of the two goods in which
they are interested. The efficient trade is for bundle AB
to go to buyer 1 and bundle CD to buyer 3, denoted λ∗
=
([−1, −1, −1, −1], [+1, +1, 0, 0], [0, 0, 0, 0], [0, 0, +1, +1],
[0, 0, 0, 0]).
[Figure 2: Example Bid Trees — the annotated bid trees for the sellers and buyers of Examples 1, 2, and 3.]
Example 3. One seller and two buyers, four goods {A, B,
C, D}, with the seller having an initial allocation of ABCD.
Changes in values for trades:
seller: AND(−A, −B, −C, −D), value −18; buyer 1: AND(+A, +B), value +11; buyer 2: AND(+C, +D), value +8.
The efficient trade is for bundle AB to go to buyer 1 and
bundle CD to go to buyer 2, denoted λ∗
= ([−1, −1, −1, −1],
[+1, +1, 0, 0], [0, 0, +1, +1]).
4. A ONE-SHOT EXCHANGE DESIGN
The description of ICE is broken down into two sections:
one-shot (sealed-bid) and iterative. In this section we
abstract away the iterative aspect and introduce a
specialization of the tree-based language that supports only exact
values on nodes.
4.1 Tree-Based Bidding Language
The bidding language is designed to be expressive and
concise, entirely symmetric with respect to buyers and
sellers, and to extend to capture bids from mixed buyers and
sellers, ranging from simple swaps to highly complex trades.
Bids are expressed as annotated bid trees, and define a
bidder"s value for all possible trades.
The language defines changes in values on trades, with
leaves annotated with traded items and nodes annotated
with changes in values (either positive or negative). The
main feature is that it has a general interval-choose
logical operator on internal nodes, and that it defines careful
semantics for propagating values within the tree. We
illustrate the language on each of Examples 1-3 in Figure 2.
The language has a tree structure, with trades on items
defined on leaves and values annotated on nodes and leaves.
The nodes have zero values where no value is indicated.
Internal nodes are also labeled with interval-choose (IC)
ranges. Given a trade, the semantics of the language define
which nodes in the tree can be satisfied, or switched-on.
First, if a child is on then its parent must be on. Second, if
a parent node is on, then the number of children that are on
must be within the IC range on the parent node. Finally,
leaves in which the bidder is buying items can only be on if
the items are provided in the trade.
For instance, in Example 2 we can consider the efficient
trade, and observe that in this trade all nodes in the trees of
buyers 1 and 3 (and also the seller), but none of the nodes in
the trees of buyers 2 and 4, can be on. On the other hand, in
the trade in which A goes to buyer 2 and D to buyer 4, the root and appropriate leaf nodes can be on for buyers 2 and 4, but no nodes can be on for buyers 1 and 3. Given a
trade there is often a number of ways to choose the set of
satisfied nodes. The semantics of the language require that
the nodes that maximize the summed value across satisfied
nodes be activated.
Consider bid tree T_i from bidder i. This defines nodes β ∈ T_i, of which some are leaves, Leaf(i) ⊆ T_i. Let Child(β) ⊆ T_i denote the children of a node β (that is not itself a leaf). All nodes except leaves are labeled with the interval-choose operator [IC^x_i(β), IC^y_i(β)]. Every node is also labeled with a value, v_{iβ} ∈ ℝ. Each leaf β is labeled with a trade, q_{iβ} ∈ ℤ^m (i.e., leaves can define a bundled trade on more than one type of item).
Given a trade λ_i to bidder i, the interval-choose operators and trades on leaves define which nodes can be satisfied. There will often be a choice; ties are broken to maximize value. Let sat_{iβ} ∈ {0, 1} denote whether node β is satisfied. Solution sat_i is valid given tree T_i and trade λ_i, written sat_i ∈ valid(T_i, λ_i), if and only if:

Σ_{β∈Leaf(i)} q_{iβj} · sat_{iβ} ≤ λ_{ij}, ∀i, ∀j (4)

IC^x_i(β) · sat_{iβ} ≤ Σ_{β'∈Child(β)} sat_{iβ'} ≤ IC^y_i(β) · sat_{iβ}, ∀β ∉ Leaf(i) (5)
In words, a set of leaves can only be considered satisfied given trade λ_i if the total increase in quantity summed across all such leaves is covered by the trade, for all goods (Eq. 4). This works for sellers as well as buyers: for sellers a trade is negative, and this requires that the total number of items indicated sold in the tree is at least the total number sold as defined in the trade. We also need upwards propagation: any time a node other than the root is satisfied, its parent must be satisfied (by Σ_{β'∈Child(β)} sat_{iβ'} ≤ IC^y_i(β) · sat_{iβ} in Eq. 5). Finally, we need downwards propagation: any time an internal node is satisfied, the appropriate number of children must also be satisfied (Eq. 5). The total value of trade λ_i, given bid-tree T_i, is defined as:

v_i(T_i, λ_i) = max_{sat∈valid(T_i,λ_i)} Σ_{β∈T_i} v_β · sat_β (6)
The tree-based language generalizes existing languages.
For instance: IC(2, 2) on a node with 2 children is equivalent
to an AND operator; IC(1, 3) on a node with 3 children is
equivalent to an OR operator; and IC(1, 1) on a node with
2 children is equivalent to an XOR operator. Similarly, the
XOR/OR bidding languages can be directly expressed as bid trees in our language.²
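To make the semantics of Eqs. (4)-(6) concrete, here is a brute-force reference evaluator (ours, written in Python for illustration rather than the exchange's Java implementation; the Node class and all names are our own). It enumerates every 0/1 labeling of the nodes, keeps the valid ones, and returns the maximum summed value; it is exponential in the tree size and is meant only to pin down the semantics:

from itertools import product

class Node:
    """Bid-tree node: internal nodes carry an interval-choose range [lo, hi];
    leaves carry a trade, given as a dict good -> signed quantity (+ = buy)."""
    def __init__(self, value=0.0, lo=None, hi=None, children=(), trade=None):
        self.value, self.lo, self.hi = value, lo, hi
        self.children, self.trade = list(children), trade

def tree_value(root, trade):
    """Brute-force evaluation of Eq. (6): maximize the summed value over all
    node labelings satisfying the coverage (Eq. 4) and IC (Eq. 5) constraints."""
    nodes, stack = [], [root]
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    leaves = [n for n in nodes if not n.children]
    goods = set(trade) | {g for n in leaves if n.trade for g in n.trade}
    best = None
    for bits in product((0, 1), repeat=len(nodes)):
        sat = dict(zip(map(id, nodes), bits))
        ok = all(n.lo * sat[id(n)]
                 <= sum(sat[id(c)] for c in n.children)
                 <= n.hi * sat[id(n)]
                 for n in nodes if n.children)          # IC propagation (Eq. 5)
        ok = ok and all(
            sum((n.trade or {}).get(g, 0) * sat[id(n)] for n in leaves)
            <= trade.get(g, 0)
            for g in goods)                             # coverage (Eq. 4)
        if ok:
            v = sum(n.value * sat[id(n)] for n in nodes)
            best = v if best is None else max(best, v)
    return best

# Example 1's buyer: IC(2,2), i.e. AND, over leaves +A (+10) and +B (+10).
buyer = Node(lo=2, hi=2,
             children=[Node(10, trade={"A": +1}), Node(10, trade={"B": +1})])
print(tree_value(buyer, {"A": +1, "B": +1}))   # 20.0: both leaves satisfied
print(tree_value(buyer, {"A": +1}))            # 0.0: the AND cannot be met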
4.2 Winner Determination
This section defines the winner determination problem,
which is formulated as a MIP and solved in our
implementation with a commercial solver.³ The solver uses branch-and-bound search with dynamic cut generation and branching heuristics to solve large MIPs in economically feasible run times.
² The OR* language is the OR language with dummy items to provide additional structure. OR* is known to be expressive and concise. However, it is not known whether OR* dominates XOR/OR in terms of conciseness [23].
³ CPLEX, www.ilog.com
In defining the MIP representation we are careful to avoid
an XOR-based enumeration of all bundles. A variation on
the WD problem is reused many times within the exchange,
e.g. for column generation in pricing and for checking
revealed preference.
Given bid trees T = (T_1, . . . , T_n) and initial allocation x^0, the mixed-integer formulation for WD is:

WD(T, x^0): max_{λ,sat} Σ_i Σ_{β∈T_i} v_{iβ} · sat_{iβ}
s.t. (1), (2), sat_{iβ} ∈ {0, 1}, λ_{ij} ∈ ℤ, sat_i ∈ valid(T_i, λ_i), ∀i
Some goods may go unassigned because free disposal is allowed within the clearing rules of winner determination. These items can be allocated back to agents that sold them, i.e. those for which λ_{ij} < 0.
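For intuition, the following sketch (ours, not the MIP) solves winner determination for Example 3 by brute-force enumeration, exploiting the fact that every bid there is a single all-or-nothing AND bundle:

from itertools import product

goods = {"A", "B", "C", "D"}
seller_value = -18                        # AND(-A,-B,-C,-D): all-or-nothing
buyers = [({"A", "B"}, 11), ({"C", "D"}, 8)]

best_value, best_flags = 0, None          # the null trade has value 0
for flags in product((0, 1), repeat=len(buyers)):
    chosen = [bundle for f, (bundle, _) in zip(flags, buyers) if f]
    demanded = set().union(*chosen) if chosen else set()
    # The seller's AND bid permits only selling all four goods together.
    if demanded == goods:
        value = seller_value + sum(v for f, (_, v) in zip(flags, buyers) if f)
        if value > best_value:
            best_value, best_flags = value, flags
print(best_value, best_flags)             # 1 (1, 1): total surplus -18+11+8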
4.3 Computing Threshold Payments
The Threshold payment rule is based on the payments
in the Vickrey-Clarke-Groves (VCG) mechanism [15], which
itself is truthful and efficient but does not satisfy budget
balance. Budget-balance requires that the total payments
to the exchange are equal to the total payments made by
the exchange. In VCG, the payment paid by agent i is
p_{vcg,i} = v̂_i(λ*_i) − (V* − V_{−i}) (7)

where λ* is the efficient trade, V* is the reported value of this trade, and V_{−i} is the reported value of the efficient trade that would be implemented without bidder i. We call Δ_{vcg,i} = V* − V_{−i} the VCG discount. For instance, in Example 1, p_{vcg,seller} = −10 − (+10 − 0) = −20 and p_{vcg,buyer} = +20 − (+10 − 0) = +10, and the exchange would run at a budget deficit of −20 + 10 = −10.
The Threshold payment rule [24] determines
budgetbalanced payments to minimize the maximal error across all
agents to the VCG outcome.
Definition 2. The Threshold payment scheme implements the efficient trade λ* given bids, and sets payments p_{thresh,i} = v̂_i(λ*_i) − Δ_i, where Δ = (Δ_1, . . . , Δ_n) is set to minimize max_i(Δ_{vcg,i} − Δ_i) subject to Δ_i ≤ Δ_{vcg,i} and Σ_i Δ_i ≤ V* (this gives budget balance).
Example 4. In Example 2, the VCG discounts are (9, 2,
0, 1, 0) to the seller and four buyers respectively, VCG
payments are (−9, 4, 0, 2, 0) and the exchange runs at a deficit
of -3. In Threshold, the discounts are (8, 1, 0, 0, 0) and the
payments are (−8, 5, 0, 3, 0). This minimizes the worst-case
error to VCG discounts across all budget-balanced payment
schemes.
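As a concrete illustration, the optimization in Definition 2 can be solved by clipping every VCG discount with a single uniform offset. The following Java sketch is our own, not the exchange's pricing engine; it also assumes, as the numbers in Example 4 imply, that discounts are constrained to be nonnegative, and it uses V* = 9 for Example 2, which is implied by the budget-balanced payments there.

```java
import java.util.Arrays;

final class ThresholdSketch {
    // Minimal sketch: with Delta_i = max(0, dVcg_i - c), the smallest offset
    // c >= 0 satisfying sum_i Delta_i <= vStar minimizes max_i (dVcg_i - Delta_i).
    static double[] thresholdDiscounts(double[] dVcg, double vStar) {
        double lo = 0, hi = 0;
        for (double d : dVcg) hi = Math.max(hi, d);
        for (int it = 0; it < 100; it++) {   // bisection on the offset c
            double c = (lo + hi) / 2, sum = 0;
            for (double d : dVcg) sum += Math.max(0, d - c);
            if (sum > vStar) lo = c; else hi = c;
        }
        double[] delta = new double[dVcg.length];
        for (int i = 0; i < dVcg.length; i++) delta[i] = Math.max(0, dVcg[i] - hi);
        return delta;
    }

    public static void main(String[] args) {
        // VCG discounts (9, 2, 0, 1, 0) from Example 2, with V* = 9:
        // prints approximately [8.0, 1.0, 0.0, 0.0, 0.0], matching Example 4.
        System.out.println(Arrays.toString(
                thresholdDiscounts(new double[]{9, 2, 0, 1, 0}, 9.0)));
    }
}
```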
Threshold payments are designed to minimize the
maximal ex post incentive to manipulate. Krych [16] confirmed
that Threshold promotes allocative efficiency in restricted
and approximate Bayes-Nash equilibrium.
5. THE ICE DESIGN
We are now ready to introduce the iterative
combinatorial exchange (ICE) design. Several new components are
introduced, relative to the design for the one-shot exchange.
Rather than provide precise valuations, bidders can provide
lower and upper valuations and revise this bid information
across rounds. The exchange provides price-based feedback
to guide bidders in this process, and terminates with an
efficient (or approximately-efficient) trade with respect to
reported valuations.
In each round $t \in \{0, 1, \ldots\}$ the current lower and upper bounds, $\underline{v}^t$ and $\overline{v}^t$, are used to define a provisional valuation profile $v^\alpha$ (the α-valuation), together with a provisional trade $\lambda^t$ and provisional prices $p^t = (p^t_1, \ldots, p^t_m)$ on items. The α-valuation is a linear combination of the current upper and lower valuations, with $\alpha^{eff} \in [0, 1]$ chosen endogenously based on the closeness of the optimistic trade (at $\overline{v}$) and the pessimistic trade (at $\underline{v}$). Prices $p^t$ are used to inform an activity rule, and drive progress towards an efficient trade.
5.1 Upper and Lower Valuations
The bidding language is extended to allow a bidder i to report a lower and upper value $(\underline{v}_{i\beta}, \overline{v}_{i\beta})$ on each node. These take the place of the exact value $v_{i\beta}$ defined in Section 4.1. Based on these labels, we can define the valuation functions $\underline{v}_i(T_i, \lambda_i)$ and $\overline{v}_i(T_i, \lambda_i)$, using the exact same semantics as in Eq. (6). We say that such a bid-tree is well-formed if $\underline{v}_{i\beta} \le \overline{v}_{i\beta}$ for all nodes. The following lemma is useful:

Lemma 1. Given a well-formed tree $T_i$, $\underline{v}_i(T_i, \lambda_i) \le \overline{v}_i(T_i, \lambda_i)$ for all trades.
Proof. Suppose there is some $\lambda_i$ for which $\underline{v}_i(T_i, \lambda_i) > \overline{v}_i(T_i, \lambda_i)$. Then $\max_{sat \in valid(T_i,\lambda_i)} \sum_{\beta \in T_i} \underline{v}_{i\beta} \cdot sat_\beta > \max_{sat \in valid(T_i,\lambda_i)} \sum_{\beta \in T_i} \overline{v}_{i\beta} \cdot sat_\beta$. But this is a contradiction, because the satisfaction assignment that defines $\underline{v}_i(T_i, \lambda_i)$ is still feasible with upper bounds $\overline{v}_i$, and $\overline{v}_{i\beta} \ge \underline{v}_{i\beta}$ for all nodes $\beta$ in a well-formed tree.
5.2 Price Feedback
In each round, approximate competitive-equilibrium (CE) prices, $p^t = (p^t_1, \ldots, p^t_m)$, are determined. Given these provisional prices, the price on trade $\lambda_i$ for bidder i is $p^t(\lambda_i) = \sum_{j \le m} p^t_j \cdot \lambda_{ij}$.
Definition 3 (CE prices). Prices $p^*$ are competitive equilibrium prices if the efficient trade $\lambda^*$ is supported at prices $p^*$, so that for each bidder:
$$\lambda^*_i \in \arg\max_{\lambda \in Feas(x^0)} \{v_i(\lambda_i) - p^*(\lambda_i)\} \qquad (8)$$
CE prices will not always exist and we will often need to
compute approximate prices [5]. We extend ideas due to
Rassenti et al. [26], Kwasnica et al. [17] and Dunford et al.
[12], and select approximate prices as follows:
I: Accuracy. First, we compute prices that minimize the
maximal error in the best-response constraints across
all bidders.
II: Fairness. Second, we break ties to prefer prices that
minimize the maximal deviation from Threshold
payments across all bidders.
III: Balance. Third, we break ties to prefer prices that
minimize the maximal price across all items.
Taken together, these steps are designed to promote the
informativeness of the prices in driving progress across rounds.
In computing prices, we show how to compute approximate (or exact) prices for structured bidding languages without enumerating all possible trades. For this, we adopt constraint generation to efficiently handle an exponential number of constraints. Each step is described in detail below.
I: Accuracy. We adopt a definition of price accuracy that
generalizes the notions adopted in previous papers for
unstructured bidding languages. Let $\lambda^t$ denote the current provisional trade and suppose the provisional valuation is $v^\alpha$. To compute accurate CE prices, we consider:
$$\min_{p,\,\delta} \ \delta \qquad (9)$$
$$\text{s.t. } v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta, \quad \forall i, \ \forall \lambda \qquad (10)$$
$$\delta \ge 0, \quad p_j \ge 0, \ \forall j.$$
This linear program (LP) is designed to find prices that
minimize the worst-case error across all agents.
From the definition of CE prices, it follows that CE prices would have δ = 0 as a solution to (9), at which point trade $\lambda^t_i$ would be in the best-response set of every agent (with $\lambda^t_i = \emptyset$, i.e. no trade, for all agents with no surplus for trade at the prices).
Example 5. We can illustrate formulation (9) on Example 2, assuming for simplicity that $v^\alpha = v$ (i.e. truth).
The efficient trade allocates AB to buyer 1 and CD to buyer
3. Accuracy will seek prices p(A), p(B), p(C) and p(D) to
minimize the δ ≥ 0 required to satisfy constraints:
p(A) + p(B) + p(C) + p(D) ≥ 0 (seller)
p(A) + p(B) ≤ 6 + δ (buyer 1)
p(A) + δ ≥ 4, p(B) + δ ≥ 4 (buyer 2)
p(C) + p(D) ≤ 3 (buyer 3)
p(C) + δ ≥ 2, p(D) + δ ≥ 2 (buyer 4)
An optimal solution requires $p(A) = p(B) = 10/3$, with $\delta = 2/3$, and with p(C) and p(D) taking values such as $p(C) = p(D) = 3/2$.
But (9) has an exponential number of constraints (Eq. 10). Rather than solve it explicitly, we use constraint generation [4] and dynamically generate a sufficient subset of constraints. Let $\Lambda_i$ denote a manageable subset of all possible feasible trades for bidder i. Then a relaxed version of (9) (written ACC) is formulated by substituting (10) with
$$v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta, \quad \forall i, \ \forall \lambda \in \Lambda_i, \qquad (11)$$
where $\Lambda_i$ is a set of trades that are feasible for bidder i given the other bids. Fixing the prices $p^*$, we then solve n subproblems (one for each bidder),
$$\max_\lambda \ v^\alpha_i(\lambda_i) - p^*(\lambda_i) \qquad [\text{R-WD}(i)]$$
$$\text{s.t. } \lambda \in Feas(x^0), \qquad (12)$$
to check whether solution $(p^*, \delta^*)$ to ACC is feasible in problem (9). In R-WD(i) the objective is to determine a most preferred trade for each bidder at these prices. Let $\hat\lambda_i$ denote the solution to R-WD(i). Check the condition:
$$v^\alpha_i(\hat\lambda_i) - p^*(\hat\lambda_i) \le v^\alpha_i(\lambda^t_i) - p^*(\lambda^t_i) + \delta^*, \qquad (13)$$
and if this condition holds for all bidders i, then solution $(p^*, \delta^*)$ is optimal for problem (9). Otherwise, trade $\hat\lambda_i$ is added to $\Lambda_i$ for all bidders i for which this constraint is
violated and we re-solve the LP with the new set of
constraints.4
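Schematically, the overall loop can be organized as follows. This is a Java sketch in which every type (AccLp, RwdOracle, Trade) is a hypothetical stand-in for the exchange's engines and builders, not the actual ICE or JOpt API; the tolerance handling follows the conservative policy discussed in Comment 2 below.

```java
// Sketch of the accuracy constraint-generation loop; all types here are
// illustrative stand-ins, not the real exchange interfaces.
interface Trade {}

final class PriceSolution {
    final double[] prices;
    final double delta;
    PriceSolution(double[] prices, double delta) { this.prices = prices; this.delta = delta; }
}

interface AccLp {                                // relaxed ACC over the sets Lambda_i
    PriceSolution solve();
    boolean addConstraint(int bidder, Trade t);  // false if t is already in Lambda_i
}

interface RwdOracle {                            // solves R-WD(i) as a concise MIP
    Trade bestResponse(int bidder, double[] prices);
    double payoff(int bidder, Trade t, double[] prices);  // v^alpha_i(t) - p(t)
    Trade provisionalTrade(int bidder);
}

final class AccuracyPricing {
    static PriceSolution accuratePrices(AccLp acc, RwdOracle wd, int n, double eps) {
        while (true) {
            PriceSolution s = acc.solve();
            boolean addedNew = false;
            for (int i = 0; i < n; i++) {
                Trade best = wd.bestResponse(i, s.prices);
                double lhs = wd.payoff(i, best, s.prices);
                double rhs = wd.payoff(i, wd.provisionalTrade(i), s.prices) + s.delta;
                if (lhs > rhs - eps) {           // conservative check (13): when in
                    addedNew |= acc.addConstraint(i, best);  // doubt, add the trade
                }
            }
            if (!addedNew) return s;             // (p*, delta*) is optimal for (9)
        }
    }
}
```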
II: Fairness. Second, we break remaining ties to prefer fair
prices: choosing prices that minimize the worst-case error
with respect to Threshold payoffs (i.e. utility to bidders with
Threshold payments), but without choosing prices that are
less accurate.5
Example 6. For example, accuracy in Example 1 (depicted in Figure 1) requires $12 \le p_A + p_B \le 16$ (for $v^\alpha = v$). At these valuations the Threshold payoffs would be 2 to both the seller and the buyer. This can be exactly achieved in pricing with $p_A + p_B = 14$.
The fairness tie-breaking method is formulated as the following LP:
$$\min_{p,\,\pi} \ \pi \qquad [\text{FAIR}]$$
$$\text{s.t. } v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall i, \ \forall \lambda \in \Lambda_i \qquad (14)$$
$$\pi \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i)), \quad \forall i \qquad (15)$$
$$\pi \ge 0, \quad p_j \ge 0, \ \forall j,$$
where $\delta^*$ represents the error in the optimal solution to ACC. The objective here is the same as in the Threshold payment rule (see Section 4.3): minimize the maximal error between bidder payoff (at $v^\alpha$) for the provisional trade and the VCG payoff (at $v^\alpha$). Problem FAIR is also solved through constraint generation, using R-WD(i) to add additional violated constraints as necessary.
III: Balance. Third, we break remaining ties to prefer
balanced prices: choosing prices that minimize the maximal
price across all items. Returning again to Example 1,
depicted in Figure 1, we see that accuracy and fairness require
p(A) + p(B) = 14. Finally, balance sets p(A) = p(B) = 7.
Balance is justified when, all else being equal, items are
more likely to have similar than dissimilar values.6 The LP for balance is formulated as follows:
$$\min_{p,\,Y} \ Y \qquad [\text{BAL}]$$
$$\text{s.t. } v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall i, \ \forall \lambda \in \Lambda_i \qquad (16)$$
$$\pi^*_i \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i)), \quad \forall i, \qquad (17)$$
$$Y \ge p_j, \ \forall j \qquad (18)$$
$$Y \ge 0, \quad p_j \ge 0, \ \forall j,$$
where $\delta^*$ represents the error in the optimal solution from ACC and $\pi^*$ represents the error in the optimal solution from FAIR. Constraint generation is also used to solve BAL, generating new trades for $\Lambda_i$ as necessary.
4 Problem R-WD(i) is a specialization of the WD problem, in which the objective is to maximize the payoff of a single bidder rather than the total value across all bidders. It is solved as a MIP, by rewriting the objective in WD(T, x^0) as $\max \sum_{\beta \in T_i} v_{i\beta} \cdot sat_{i\beta} - \sum_j p^*_j \cdot \lambda_{ij}$ for agent i. Thus, the structure of the bid-tree language is exploited in generating new constraints, because this is solved as a concise MIP. The other bidders are kept around in the MIP (but do not appear in the objective), and are used to define the space of feasible trades.
5 The methods of Dunford et al. [12], which use a nucleolus approach, are also closely related.
6 The use of balance was advocated by Kwasnica et al. [17]. Dunford et al. [12] prefer to smooth prices across rounds.
Comment 1: Lexicographical Refinement. For all
three sub-problems we also perform lexicographical
refinement (with respect to bidders in ACC and FAIR, and with
respect to goods in BAL). For instance, in ACC we
successively minimize the maximal error across all bidders. Given
an initial solution we first pin down the error on all
bidders for whom a constraint (11) is binding. For such a bidder
i, the constraint is replaced with
$$v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall \lambda \in \Lambda_i, \qquad (19)$$
and the error to bidder i no longer appears explicitly in
the objective. ACC is then re-solved, and makes progress
by further minimizing the maximal error across all bidders
yet to be pinned down. This continues, pinning down any
new bidders for whom one of constraints (11) is binding,
until the error is lexicographically optimized for all
bidders.7
The exact same process is repeated for FAIR and BAL: bidders are pinned down and constraints (15) replaced with $\pi^*_i \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i))$, $\forall \lambda \in \Lambda_i$ (where $\pi^*_i$ is the current objective) in FAIR; and items are pinned down and constraints (18) replaced with $p^*_j \ge p_j$ (where $p^*_j$ represents the target for the maximal price on that item) in BAL.
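Schematically, the lexicographic pass looks as follows; again the LP hooks are hypothetical stand-ins rather than the actual implementation.

```java
// Sketch of lexicographic min-max refinement: repeatedly minimize the worst
// error over the bidders not yet pinned down, then pin the binding ones.
// (At an optimum at least one free bidder is binding, so the loop terminates.)
interface LexLp {
    boolean anyFree();                          // any bidder not yet pinned down?
    double minimizeMaxOverFree();               // re-solve; returns the optimal level
    java.util.List<Integer> bindingFree(double level); // free bidders binding at level
    void pin(int bidder, double level);         // replace (11) by the pinned form (19)
}

final class LexRefinement {
    static void refine(LexLp lp) {
        while (lp.anyFree()) {
            double level = lp.minimizeMaxOverFree();
            for (int i : lp.bindingFree(level)) lp.pin(i, level);
        }
    }
}
```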
Comment 2: Computation. All constraints in $\Lambda_i$ are retained, and this set grows across all stages and across all rounds of the exchange. Thus, the computational effort in constraint generation is re-used. In implementation we are careful to address a number of ε-issues that arise due to floating-point arithmetic. We prefer to err on the side of being conservative in determining whether or not to add another constraint in performing check (13). This avoids later infeasibility issues. In addition, when pinning down bidders for the purpose of lexicographical refinement we relax the associated bidder-constraints with a small ε > 0 on the right-hand side.
5.3 Revealed-Preference Activity Rules
The role of activity rules in the auction is to ensure both
consistency and progress across rounds [21]. Consistency in
our exchange requires that bidders tighten bounds as the
exchange progresses. Activity rules ensure that bidders are
active during early rounds, and promote useful elicitation
throughout the exchange.
We adopt a simple revealed-preference (RP) activity rule.
The idea is loosely based around the RP-rule in Ausubel et
al. [1], where it is used for one-sided CAs. The motivation
is to require more than simply consistency: we need bidders
to provide enough information for the system to be able to prove that an allocation is (approximately) efficient.
It is helpful to think about the bidders interacting with
proxy agents that will act on their behalf in responding
to provisional prices $p^{t-1}$ determined at the end of round t − 1. The only knowledge that such a proxy has of the
valuation of a bidder is through the bid-tree. Suppose a
proxy was queried by the exchange and asked which trade
the bidder was most interested in at the provisional prices.
The RP rule says the following: the proxy must have enough
7 For example, applying this to accuracy on Example 2 we solve once and find bidders 1 and 2 are binding, for error $\delta^* = 2/3$. We pin these down and then minimize the error to bidders 3 and 4. Finally, this gives p(A) = p(B) = 10/3 and p(C) = p(D) = 5/3, with accuracy 2/3 to bidders 1 and 2 and 1/3 to bidders 3 and 4.
information to be able to determine this surplus-maximizing
trade at current prices. Consider the following examples:
Example 7. A bidder has XOR(+A, +B) and a value of
+5 on the leaf +A and a value range of [5,10] on leaf +B.
Suppose prices are currently 3 for each of A and B. The RP
rule is satisfied because the proxy knows that however the
remaining value uncertainty on +B is resolved the bidder
will always (weakly) prefer +B to +A.
Example 8. A bidder has XOR(+A, +B) and value
bounds [5, 10] on the root node and a value of 1 on leaf +A.
Suppose prices are currently 3 for each of A and B. The RP
rule is satisfied because the bidder will always prefer +A to
+B at equal prices, whichever way the uncertain value on
the root node is ultimately resolved.
Overloading notation, let $v_i \in T_i$ denote a valuation that is consistent with the lower and upper valuations in bid tree $T_i$.

Definition 4. Bid tree $T_i$ satisfies RP at prices $p^{t-1}$ if and only if there exists some feasible trade $L^*$ for which
$$v_i(L^*_i) - p^{t-1}(L^*_i) \ge \max_{\lambda \in Feas(x^0)} v_i(\lambda_i) - p^{t-1}(\lambda_i), \quad \forall v_i \in T_i. \qquad (20)$$
To make this determination for bidder i we solve a sequence of problems, each of which is a variation on the WD problem. First, we construct a candidate lower-bound trade, which is a feasible trade that solves:
$$\max_\lambda \ \underline{v}_i(\lambda_i) - p^{t-1}(\lambda_i) \qquad [\text{RP1}(i)]$$
$$\text{s.t. } \lambda \in Feas(x^0). \qquad (21)$$
The solution $\pi^*_l$ to RP1(i) represents the maximal payoff that bidder i can achieve across all feasible trades, given its pessimistic valuation.
Second, we break ties to find a trade with maximal value uncertainty across all possible solutions to RP1(i):
$$\max_\lambda \ \overline{v}_i(\lambda_i) - \underline{v}_i(\lambda_i) \qquad [\text{RP2}(i)]$$
$$\text{s.t. } \lambda \in Feas(x^0) \qquad (22)$$
$$\underline{v}_i(\lambda_i) - p^{t-1}(\lambda_i) \ge \pi^*_l \qquad (23)$$
We adopt solution $L^*_i$ as our candidate for the trade that may satisfy RP. To understand the importance of this tie-breaking rule consider Example 7. The proxy can prove that +B, but not +A, is a best-response for all $v_i \in T_i$, and should choose +B as its candidate. Notice that +B is a counterexample to +A, but not the other way round.
Now, we construct a modified valuation $\tilde{v}_i$ by setting
$$\tilde{v}_{i\beta} = \begin{cases} \underline{v}_{i\beta}, & \text{if } \beta \in sat(L^*_i) \\ \overline{v}_{i\beta}, & \text{otherwise,} \end{cases} \qquad (24)$$
where $sat(L^*_i)$ is the set of nodes that are satisfied in the lower-bound tree for trade $L^*_i$. Given this modified valuation, we find $U^*$ to solve:
$$\max_\lambda \ \tilde{v}_i(\lambda_i) - p^{t-1}(\lambda_i) \qquad [\text{RP3}(i)]$$
$$\text{s.t. } \lambda \in Feas(x^0). \qquad (25)$$
Let $\pi^*_u$ denote the payoff from this optimal trade at modified values $\tilde{v}$. We call trade $U^*_i$ the witness trade. We show in Proposition 1 that the RP rule is satisfied if and only if $\pi^*_l \ge \pi^*_u$.
Constructing the modified valuation as $\tilde{v}_i$ recognizes that there is shared uncertainty across trades that satisfy the same nodes in a bid tree. Example 8 helps to illustrate this. Just using $\overline{v}_i$ in RP3(i), we would find $L^*_i$ is buy A with payoff $\pi^*_l = 3$ but then find $U^*_i$ is buy B with $\pi^*_u = 7$ and fail RP. We must recognize that however the uncertainty on the root node is resolved it will affect +A and +B in exactly the same way. For this reason, we set $\tilde{v}_{i\beta} = \underline{v}_{i\beta} = 5$ on the root node, which is exactly the same value that was adopted in determining $\pi^*_l$. Then, RP3(i) gives buy A as $U^*_i$ and the RP test is judged to be passed.
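Putting RP1-RP3 together, the proxy-side check can be organized as below. The solver hooks (RpSolver, RpTrade, RpValuation) are hypothetical names of our own; each of RP1-RP3 would be solved as a WD-style MIP as described above.

```java
// Sketch of the revealed-preference check; all types are illustrative stand-ins.
interface RpTrade {}
interface RpValuation {}

interface RpSolver {
    RpTrade solveRp1(double[] prices);                 // max payoff at lower values
    RpTrade solveRp2(double[] prices, double piL);     // tie-break: max uncertainty
    RpValuation lowerValuation();
    RpValuation modified(RpTrade lStar);               // eq. (24): lower on sat(L*), upper elsewhere
    RpTrade solveRp3(RpValuation vTilde, double[] prices);
    double payoff(RpValuation v, RpTrade t, double[] prices);
}

final class ActivityRule {
    static boolean satisfiesRp(RpSolver s, double[] prices) {
        RpTrade lStar = s.solveRp1(prices);
        double piL = s.payoff(s.lowerValuation(), lStar, prices);
        lStar = s.solveRp2(prices, piL);          // keep a maximally uncertain optimum
        RpValuation vTilde = s.modified(lStar);   // shared uncertainty is fixed
        RpTrade uStar = s.solveRp3(vTilde, prices);
        double piU = s.payoff(vTilde, uStar, prices);
        return piL >= piU;                        // the test of Proposition 1
    }
}
```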
Proposition 1. Bid tree $T_i$ satisfies RP given prices $p^{t-1}$ if and only if any lower-bound trade $L^*_i$ that solves RP1(i) and RP2(i) satisfies
$$\underline{v}_i(T_i, L^*_i) - p^{t-1}(L^*_i) \ge \tilde{v}_i(T_i, U^*_i) - p^{t-1}(U^*_i), \qquad (26)$$
where $\tilde{v}_i$ is the modified valuation in Eq. (24).
Proof. For sufficiency, notice that the difference in payoff between trade $L^*_i$ and another trade $\lambda_i$ is unaffected by the way uncertainty is resolved on any node that is satisfied in both $L^*_i$ and $\lambda_i$. Fixing the values in $\tilde{v}_i$ on nodes satisfied in $L^*_i$ has the effect of removing this consideration when a trade $U^*_i$ is selected that satisfies one of these nodes. On the other hand, fixing the values on these nodes has no effect on trades considered in RP3(i) that do not share a node with $L^*_i$. For the necessary direction, we first show that any trade that satisfies RP must solve RP1(i). Suppose otherwise, that some $\lambda_i$ with payoff greater than $\pi^*_l$ satisfies RP. But valuation $v_i \in T_i$ together with $L^*_i$ presents a counterexample to RP (Eq. 20). Now, suppose (for contradiction) that some $\lambda_i$ with maximal payoff $\pi^*_l$ but uncertainty less than $L^*_i$ satisfies RP. Proceed by case analysis. Case a): only one solution to RP1(i) has uncertain value, and so $\lambda_i$ has certain value. But this cannot satisfy RP because $L^*_i$ with uncertain value would be a counterexample to RP (Eq. 20). Case b): two or more solutions to RP1(i) have uncertain value. Here, we first argue that one of these trades must satisfy a (weak) superset of all the nodes with uncertain value that are satisfied by all other trades in this set; this follows from RP. Without this, then for any choice of trade that solves RP1(i), there is another trade with a disjoint set of uncertain but satisfied nodes that provides a counterexample to RP (Eq. 20). Now, consider the case that some trade contains a superset of all the uncertain satisfied nodes of the other trades. Clearly RP2(i) will choose this trade as $L^*_i$, and $\lambda_i$ must satisfy a subset of these nodes (by assumption). But we now see that $\lambda_i$ cannot satisfy RP because $L^*_i$ would be a counterexample to RP.
Failure to meet the activity rule must have some consequence. In the current rules, the default action we choose is to set the upper bounds in valuations down to the maximum of the provisional price on a node^8 and the lower-bound value on that node.^9 Such a bidder can remain active
8 The provisional price on a node is defined as the minimal total price across all feasible trades for which the subtree rooted at the node is satisfied.
9 This is entirely analogous to when a bidder in an ascending clock auction stops bidding at a price: she is not permitted to bid at a higher price again in future rounds.
within the exchange, but only with valuations that are
consistent with these new bounds.
5.4 Bidder Feedback
In each round, our default design provides every bidder with the provisional trade and also with the current provisional prices. See Section 7 for additional discussion. We also provide guidance to help a bidder meet the RP rule. Let $sat(L^*_i)$ and $sat(U^*_i)$ denote the nodes that are satisfied in trades $L^*_i$ and $U^*_i$, as computed in RP1-RP3.
Lemma 2. When RP fails, a bidder must increase a lower bound on at least one node in $sat(L^*_i) \setminus sat(U^*_i)$ or decrease an upper bound on at least one node in $sat(U^*_i) \setminus sat(L^*_i)$ in order to meet the activity rule.
Proof. Changing the upper or lower values on nodes that are not satisfied by either trade does not change $L^*_i$ or $U^*_i$, and does not change the payoff from these trades; thus the RP condition will continue to fail. Similarly, changing the bounds on nodes that are satisfied in both trades has no effect on revealed preference: a change to a lower bound on a shared node affects both $L^*_i$ and $U^*_i$ identically because of the use of the modified valuation to determine $U^*_i$, and a change to an upper bound on a shared node has no effect in determining either $L^*_i$ or $U^*_i$.
Note that when $sat(U^*_i) = sat(L^*_i)$, condition (26) is trivially satisfied, and so the guidance in the lemma is always well-defined when RP fails. This is an elegant feedback mechanism because it is adaptive. Once a bidder makes some changes on some subset of these nodes, the bidder can query the exchange. The exchange can then respond yes, or revise the sets $sat(L^*_i)$ and $sat(U^*_i)$ as necessary.
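The feedback sets themselves are simple to compute once $sat(L^*_i)$ and $sat(U^*_i)$ are known; a small sketch follows (node identifiers and the report format are illustrative assumptions):

```java
import java.util.HashSet;
import java.util.Set;

final class RpFeedback {
    // Per Lemma 2: raise a lower bound on a node in sat(L*) \ sat(U*),
    // or decrease an upper bound on a node in sat(U*) \ sat(L*).
    static String guidance(Set<Integer> satL, Set<Integer> satU) {
        Set<Integer> raiseLower = new HashSet<>(satL);
        raiseLower.removeAll(satU);
        Set<Integer> cutUpper = new HashSet<>(satU);
        cutUpper.removeAll(satL);
        return "raise a lower bound on one of " + raiseLower
             + ", or decrease an upper bound on one of " + cutUpper;
    }
}
```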
5.5 Termination Conditions
Once each bidder has committed its new bids (and either
met the RP rule or suffered the penalty) then round t closes.
At this point, the task is to determine the new α-valuation,
and in turn the provisional allocation $\lambda^t$ and provisional prices $p^t$. A termination condition is also checked, to
determine whether to move the exchange to a last-and-final
round. To define the α-valuation we compute the following
two quantities:
Pessimistic at Pessimistic (PP) Determine an efficient trade, $\lambda^*_l$, at pessimistic values, i.e. solve $\max_\lambda \sum_i \underline{v}_i(\lambda_i)$, and set $PP = \sum_i \underline{v}_i(\lambda^*_{li})$.

Pessimistic at Optimistic (PO) Determine an efficient trade, $\lambda^*_u$, at optimistic values, i.e. solve $\max_\lambda \sum_i \overline{v}_i(\lambda_i)$, and set $PO = \sum_i \underline{v}_i(\lambda^*_{ui})$.
First, note that PP ≥ PO and PP ≥ 0 by definition, for all bid-trees, although PO can be negative (because the right trade at $\overline{v}$ is not currently a useful trade at $\underline{v}$). Recognizing this, define
$$\gamma^{eff}(PP, PO) = 1 + \frac{PP - PO}{PP}, \qquad (27)$$
when PP > 0, and observe that $\gamma^{eff}(PP, PO) \ge 1$ when this is defined, and that $\gamma^{eff}(PP, PO)$ starts large and then trends towards 1 as the optimistic allocation converges towards the pessimistic allocation. In each round, we define $\alpha^{eff} \in [0, 1]$ as:
$$\alpha^{eff} = \begin{cases} 0 & \text{when } PP = 0 \\ 1/\gamma^{eff} & \text{otherwise,} \end{cases} \qquad (28)$$
which is 0 while PP is 0 and then trends towards 1 once PP > 0 in some round. This is used to define the α-valuation
$$v^\alpha_i = \alpha^{eff}\, \underline{v}_i + (1 - \alpha^{eff})\, \overline{v}_i, \quad \forall i, \qquad (29)$$
which is used to define the provisional allocation and provisional prices. The effect is to endogenously define a schedule for moving from optimistic to pessimistic values across rounds, based on how close the trades are to one another.
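For concreteness, the schedule of Eqs. (27)-(29) amounts to the following small computation. This is a sketch of our own; PP and PO come from the two winner-determination solves defined above, and the lower/upper weighting follows the stated movement from optimistic to pessimistic values.

```java
final class AlphaSchedule {
    // Eqs. (27)-(28): alpha^eff is 0 while PP = 0, then 1/gamma^eff.
    static double alphaEff(double pp, double po) {
        if (pp <= 0) return 0;
        double gammaEff = 1 + (pp - po) / pp;   // >= 1, trends to 1 as PO -> PP
        return 1 / gammaEff;
    }

    // Eq. (29): node-wise alpha-valuation, moving from optimistic (alpha = 0)
    // to pessimistic (alpha = 1) values across rounds.
    static double alphaValue(double alphaEff, double lower, double upper) {
        return alphaEff * lower + (1 - alphaEff) * upper;
    }
}
```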
Termination Condition. In moving to the last-and-final round, and finally closing, we also care about the convergence of payments, in addition to the convergence towards an efficient trade. For this we introduce another parameter, $\alpha^{thresh} \in [0, 1]$, that trends from 0 to 1 as the Threshold payments at lower and upper valuations converge. Consider the following parameter:
$$\gamma^{thresh} = 1 + \frac{\| p_{thresh}(\underline{v}) - p_{thresh}(\overline{v}) \|_2}{PP / N_{active}}, \qquad (30)$$
which is defined for PP > 0, where $p_{thresh}(v)$ denotes the Threshold payments at valuation profile v, $N_{active}$ is the number of bidders that are actively engaged in trade in the PP trade, and $\| \cdot \|_2$ is the L2-norm. Note that $\gamma^{thresh}$ is defined for payments and not payoffs. This is appropriate
because it is the accuracy of the outcome of the exchange
that matters: i.e. the trade and the payments. Given this,
we define
$$\alpha^{thresh} = \begin{cases} 0 & \text{when } PP = 0 \\ 1/\gamma^{thresh} & \text{otherwise,} \end{cases} \qquad (31)$$
which is 0 while PP is 0 and then trends towards 1 as progress is made.
Definition 5 (termination). ICE transitions to a last-and-final round when one of the following holds:
1. $\alpha^{eff} \ge \text{CUTOFF}^{eff}$ and $\alpha^{thresh} \ge \text{CUTOFF}^{thresh}$,
2. there is no trade at the optimistic values,
where $\text{CUTOFF}^{eff}, \text{CUTOFF}^{thresh} \in (0, 1]$ determine the accuracy required for termination.

At the end of the last-and-final round $v^\alpha = \underline{v}$ is used to define the final trade and the final Threshold payments.
Example 9. Consider again Example 1, and consider the upper and lower bounds as depicted in Figure 1. First, if the seller's bounds were [−20, −4] then there is an optimistic trade but no pessimistic trade, with PO = −4 and PP = 0, and $\alpha^{eff} = 0$. At the bounds depicted, both the optimistic and the pessimistic trades occur and PO = PP = 4 and $\alpha^{eff} = 1$. However, we can see the Threshold payments are
(17, −17) at $\underline{v}$ but (14, −14) at $\overline{v}$. Evaluating $\gamma^{thresh}$, we have
$$\gamma^{thresh} = 1 + \frac{\sqrt{\tfrac{1}{2}(3^2 + 3^2)}}{(4/2)} = \frac{5}{2}, \qquad \alpha^{thresh} = \frac{2}{5}.$$
For $\text{CUTOFF}^{thresh} > 2/5$ the exchange would remain open. On the other hand, if the buyer's value for +AB were between [18, 24] and the seller's value for −AB were between [−12, −6], the Threshold payments would be (15, −15) at both upper and lower bounds, and $\alpha^{thresh} = 1$.
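The convergence measure of Eqs. (30)-(31) can be reproduced directly from the Example 9 numbers. The sketch below is our own and follows the example's arithmetic, which normalizes the L2 distance between the two Threshold payment vectors by the number of active bidders.

```java
final class ThresholdConvergence {
    // Eqs. (30)-(31) as evaluated in Example 9. pLow/pHigh are the Threshold
    // payments at the lower and upper valuations, restricted to active bidders.
    static double alphaThresh(double[] pLow, double[] pHigh, double pp, int nActive) {
        if (pp <= 0) return 0;
        double ss = 0;
        for (int i = 0; i < pLow.length; i++) {
            double d = pLow[i] - pHigh[i];
            ss += d * d;
        }
        double dist = Math.sqrt(ss / nActive);      // sqrt((3^2 + 3^2) / 2) = 3
        double gamma = 1 + dist / (pp / nActive);   // 1 + 3/2 = 5/2
        return 1 / gamma;                           // 2/5, as in Example 9
    }

    public static void main(String[] args) {
        // Prints 0.4; per Definition 5 the exchange would stay open for
        // CUTOFF^thresh above this value.
        System.out.println(alphaThresh(
                new double[]{17, -17}, new double[]{14, -14}, 4, 2));
    }
}
```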
Component | Purpose | Lines
Agent | Captures strategic behavior and information revelation decisions | 762
Model Support | Provides XML support to load goods and valuations into world | 200
World | Keeps track of all agent, good, and valuation details | 998
Exchange Driver & Communication | Controls exchange, and coordinates remote agent behavior | 585
Bidding Language | Implements the tree-based bidding language | 1119
Activity Rule Engine | Implements the revealed preference rule with range support | 203
Closing Rule Engine | Checks if auction termination condition reached | 137
WD Engine | Provides WD-related logic | 377
Pricing Engine | Provides Pricing-related logic | 460
MIP Builders | Translates logic used by engines into our general optimizer formulation | 346
Pricing Builders | Used by three pricing stages | 256
Winner Determination Builders | Used by WD, activity rule, closing rule, and pricing constraint generation | 365
Framework | Support code; eases modular replacement of above components | 510

Table 1: Exchange Component and Code Breakdown.
6. SYSTEMS INFRASTRUCTURE
ICE is approximately 6502 lines of Java code, broken up
into the functional packages described in Table 1.10
The prototype is modular so that researchers may easily
replace components for experimentation. In addition to the
core exchange discussed in this paper, we have developed
an agent component that allows a user to simulate the
behavior and knowledge of other players in the system, better
allowing a user to formulate their strategy in advance of
actual play. A user specifies a valuation model in an
XMLinterpretation of our bidding language, which is revealed to
the exchange via the agent"s strategy.
Major exchange tasks are handled by engines that
dictate the non-optimizer specific logic. These engines drive
the appropriate MIP/LP builders. We realized that all of
our optimization formulations boil down to two classes of
optimization problem. The first, used by winner
determination, activity rule, closing rule, and constraint generation
in pricing, is a MIP that finds trades that maximize value,
holding prices and slacks constant. The second, used by the
three pricing stages, is an LP that holds trades constant,
seeking to minimize slack, profit, or prices. We take
advantage of the commonality of these problems by using common
LP/MIP builders that differ only by a few functional hooks
to provide the correct variables for optimization.
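As a rough illustration of this hook pattern (our own sketch; the names below are not taken from the codebase), a single trade-side builder can serve WD, R-WD(i), the activity rule, and constraint generation by swapping the objective coefficients:

```java
// Sketch of the shared-builder idea: one generic trade MIP builder,
// specialized by a small functional hook per engine.
interface ObjectiveHook {
    double valueCoefficient(int bidder, int node);   // weight on sat_{i,beta}
    double priceCoefficient(int bidder, int item);   // weight on lambda_{i,j}
}

final class TradeMipBuilder {
    private final ObjectiveHook hook;
    TradeMipBuilder(ObjectiveHook hook) { this.hook = hook; }
    // A buildModel() method would emit variables sat_{i,beta} and lambda_{i,j}
    // plus the bid-tree constraints, then apply the hook to the objective:
    // WD sums every bidder's value, while R-WD(i) keeps only bidder i's
    // value minus its price terms, with the others present for feasibility.
}
```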
We have generalized our back-end optimization solver
interface11
(we currently support CPLEX and the LGPL-
licensed LPSolve), and can take advantage of the load-balancing
and parallel MIP/LP solving capability that this library
provides.
7. DISCUSSION
The bidding language was defined to allow for perfect
symmetry between buyers and sellers and provide expressiveness
in an exchange domain, for instance for mixed bidders
interested in executing trades such as swaps. This proved
especially challenging. The breakthrough came when we
focused on changes in value for trades rather than providing
absolute values for allocations. For simplicity, we require the
same tree structure for both the upper and lower valuations.
10 Code size is measured in physical source lines of code (SLOC), as generated using David A. Wheeler's SLOCCount. The total of 6502 includes 184 for instrumentation (not shown in the table). The JOpt solver interface is another 1964 lines, and Castor automatically generates around 5200 lines of code for XML file manipulation.
11 http://econcs.eecs.harvard.edu/jopt
This allows the language itself to ensure consistency (with
the upper value at least the lower value on all trades) and
enforce monotonic tightening of these bounds for all trades
across rounds. It also provides for an efficient method to
check the RP activity rule, because it makes it simple to
reason about shared uncertainty between trades.
The decision to adopt a direct and proxied approach
in which bidders express their upper and lower values to a
trusted proxy agent that interacts with the exchange was
made early in the design process. In many ways this is
the clearest and most immediate way to generalize the
design in Parkes et al. [24] and make it iterative. In addition,
this removes much opportunity for strategic manipulation:
bidders are restricted to making (incremental) statements
about their valuations. Another advantage is that it makes
the activity rule easy to explain: bidders can always meet
the activity rule by tightening bounds such that their true
value remains in the support.12
Perhaps most importantly,
having explicit information on upper and lower values
permits progress in early rounds, even while there is no efficient
trade at pessimistic values.
Upper and lower bound information also provides
guidance about when to terminate. Note that taken by itself,
PP = PO does not imply that the current provisional trade
is efficient with respect to all values consistent with current
value information. The difference in values between
different trades, aggregated across all bidders, could be similar at
lower and upper bounds but quite different at intermediate
values (including truth). Nevertheless, we conjecture that
PP = PO will prove an excellent indicator of efficiency in
practical settings where the shape of the upper and lower
valuations does convey useful information. This is worthy of
experimental investigation. Moreover, the use of price and
RP activity provides additional guarantees.
We adopted linear prices (prices on individual items) rather
than non-linear prices (with prices on a trade not equal to
the sum of the prices on the component items) early in the
design process. The conciseness of this price representation
is very important for computational tractability within the
exchange and also to promote simplicity and transparency
for bidders. The RP activity rule was adopted later, and is
a good choice because of its excellent theoretical properties
when coupled with CE prices. The following can be easily
established: given exact CE prices $p^{t-1}$ for provisional trade $\lambda^{t-1}$ at valuations $v^\alpha$,
12 This is in contrast to indirect price-based approaches, such as clock-proxy [1], in which bidders must be able to reason about the RP-constraints implied by bids in each round.
then if the upper and lower values at
the start of round t already satisfy the RP rule (and without
the need for any tie-breaking), the provisional trade is
efficient for all valuations consistent with the current bid trees.
When linear CE prices exist, this provides for a soundness
and completeness statement: if PP = PO, linear CE prices
exist, and the RP rule is satisfied, the provisional trade is
efficient (soundness); if prices are exact CE prices for the
provisional trade at vα
, but the trade is inefficient with
respect to some valuation profile consistent with the current
bid trees, then at least one bidder must fail RP with her
current bid tree and progress will be made (completeness).
Future work must study convergence experimentally, and
extend this theory to allow for approximate prices.
Some strategic aspects of our ICE design deserve
comment, and further study. First, we do not claim that
truthfully responding to the RP rule is an ex post equilibrium.13
However, the exchange is designed to mimic the Threshold
rule in its payment scheme, which is known to have
useful incentive properties [16]. We must be careful, though.
For instance, we do not suggest providing $\alpha^{eff}$ to bidders, because as $\alpha^{eff}$ approaches 1 it would inform bidders that bid values are becoming irrelevant to determining the trade and merely used to determine payments (and bidders would
become increasingly reluctant to increase their lower
valuations). Also, no consideration has been given in this work
to collusion by bidders. This is an issue that deserves some
attention in future work.
8. CONCLUSIONS
In this work we designed and prototyped a scalable and
highly-expressive iterative combinatorial exchange. The
design includes many interesting features, including: a new
bid-tree language for exchanges, a new method to construct
approximate linear prices from expressive languages, and a
proxied elicitation method with optimistic and pessimistic
valuations with a new method to evaluate a revealed-
preference activity rule. The exchange is fully implemented in
Java and is in a validation phase.
The next steps for our work are to allow bidders to refine
the structure of the bid tree in addition to values on the
tree. We intend to study the elicitation properties of the
exchange and we have put together a test suite of exchange
problem instances. In addition, we are beginning to engage
in collaborations to apply the design to airline takeoff and
landing slot scheduling and to resource allocation in
wide-area network distributed computational systems.
Acknowledgments
We would like to dedicate this paper to all of the participants
in CS 286r at Harvard University in Spring 2004. This work
is supported in part by NSF grant IIS-0238147.
9. REFERENCES
[1] L. Ausubel, P. Cramton, and P. Milgrom. The clock-proxy
auction: A practical combinatorial auction design. In Cramton
et al. [9], chapter 5.
[2] M. Babaioff, N. Nisan, and E. Pavlov. Mechanisms for a
spatially distributed market. In Proc. 5th ACM Conf. on
Electronic Commerce, pages 9-20. ACM Press, 2001.
13 Given the Myerson-Satterthwaite impossibility theorem [22] and the method by which we determine the trade, we should not expect this.
[3] M. Ball, G. Donohue, and K. Hoffman. Auctions for the safe, efficient, and equitable allocation of airspace system resources. In Cramton et al. [9]. Forthcoming.
[4] D. Bertsimas and J. Tsitsiklis. Introduction to Linear
Optimization. Athena Scientific, 1997.
[5] S. Bikhchandani and J. M. Ostroy. The package assignment
model. Journal of Economic Theory, 107(2):377-406, 2002.
[6] C. Boutilier. A pomdp formulation of preference elicitation
problems. In Proc. 18th National Conference on Artificial
Intelligence (AAAI-02), 2002.
[7] C. Boutilier and H. Hoos. Bidding languages for combinatorial
auctions. In Proc. 17th International Joint Conference on
Artificial Intelligence (IJCAI-01), 2001.
[8] W. Conen and T. Sandholm. Preference elicitation in
combinatorial auctions. In Proc. 3rd ACM Conf. on
Electronic Commerce (EC-01), pages 256-259. ACM Press,
New York, 2001.
[9] P. Cramton, Y. Shoham, and R. Steinberg, editors.
Combinatorial Auctions. MIT Press, 2004.
[10] S. de Vries, J. Schummer, and R. V. Vohra. On ascending
Vickrey auctions for heterogeneous objects. Technical report,
MEDS, Kellogg School, Northwestern University, 2003.
[11] S. de Vries and R. V. Vohra. Combinatorial auctions: A
survey. Informs Journal on Computing, 15(3):284-309, 2003.
[12] M. Dunford, K. Hoffman, D. Menon, R. Sultana, and
T. Wilson. Testing linear pricing algorithms for use in
ascending combinatorial auctions. Technical report, SEOR,
George Mason University, 2003.
[13] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat. Sharp:
an architecture for secure resource peering. In Proceedings of
the nineteenth ACM symposium on Operating systems
principles, pages 133-148. ACM Press, 2003.
[14] B. Hudson and T. Sandholm. Effectiveness of query types and
policies for preference elicitation in combinatorial auctions. In
Proc. 3rd Int. Joint. Conf. on Autonomous Agents and Multi
Agent Systems, pages 386-393, 2004.
[15] V. Krishna. Auction Theory. Academic Press, 2002.
[16] D. Krych. Calculation and analysis of Nash equilibria of Vickrey-based payment rules for combinatorial exchanges. Harvard College, April 2003.
[17] A. M. Kwasnica, J. O. Ledyard, D. Porter, and C. DeMartini.
A new and improved design for multi-object iterative auctions.
Management Science, 2004. To appear.
[18] E. Kwerel and J. Williams. A proposal for a rapid transition to
market allocation of spectrum. Technical report, FCC Office of
Plans and Policy, Nov 2002.
[19] S. M. Lahaie and D. C. Parkes. Applying learning algorithms
to preference elicitation. In Proc. ACM Conf. on Electronic
Commerce, pages 180-188, 2004.
[20] R. P. McAfee. A dominant strategy double auction. J. of
Economic Theory, 56:434-450, 1992.
[21] P. Milgrom. Putting auction theory to work: The simultaneous
ascending auction. J.Pol. Econ., 108:245-272, 2000.
[22] R. B. Myerson and M. A. Satterthwaite. Efficient mechanisms
for bilateral trading. Journal of Economic Theory,
28:265-281, 1983.
[23] N. Nisan. Bidding and allocation in combinatorial auctions. In
Proc. 2nd ACM Conf. on Electronic Commerce (EC-00),
pages 1-12, 2000.
[24] D. C. Parkes, J. R. Kalagnanam, and M. Eso. Achieving
budget-balance with Vickrey-based payment schemes in
exchanges. In Proc. 17th International Joint Conference on
Artificial Intelligence (IJCAI-01), pages 1161-1168, 2001.
[25] D. C. Parkes and L. H. Ungar. Iterative combinatorial
auctions: Theory and practice. In Proc. 17th National
Conference on Artificial Intelligence (AAAI-00), pages
74-81, July 2000.
[26] S. J. Rassenti, V. L. Smith, and R. L. Bulfin. A combinatorial
mechanism for airport time slot allocation. Bell Journal of
Economics, 13:402-417, 1982.
[27] M. H. Rothkopf, A. Pekeč, and R. M. Harstad.
Computationally manageable combinatorial auctions.
Management Science, 44(8):1131-1147, 1998.
[28] T. Sandholm and C. Boutilier. Preference elicitation in
combinatorial auctions. In Cramton et al. [9], chapter 10.
[29] P. R. Wurman and M. P. Wellman. AkBA: A progressive,
anonymous-price combinatorial auction. In Second ACM
Conference on Electronic Commerce, pages 21-29, 2000.
Weak Monotonicity Suffices for Truthfulness on Convex Domains

Abstract: Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.

1. INTRODUCTION
Social choice theory centers around the general problem of
selecting a single outcome out of a set A of alternative
outcomes based on the individual preferences of a set P of
players. A method for aggregating player preferences to select
one outcome is called a social choice function. In this paper
we assume that the range A is finite and that each player's
preference is expressed by a valuation function which
assigns to each possible outcome a real number representing
the benefit the player derives from that outcome. The
ensemble of player valuation functions is viewed as a
valuation matrix with rows indexed by players and columns by
outcomes.
A major difficulty connected with social choice functions
is that players can not be required to tell the truth about
their preferences. Since each player seeks to maximize his
own benefit, he may find it in his interest to misrepresent
his valuation function. An important approach for dealing
with this problem is to augment a given social choice
function with a payment function, which assigns to each player
a (positive or negative) payment as a function of all of the
individual preferences. By carefully choosing the payment
function, one can hope to entice each player to tell the truth.
A social choice function augmented with a payment function is called a mechanism^1 and the mechanism is said to implement the social choice function. A mechanism is truthful (also said to be strategyproof, or to have a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences. A
social choice function is truthfully implementable, or truthful
if it has a truthful implementation. (The property of
truthful implementability is sometimes called dominant strategy
incentive compatibility). This framework leads naturally to
the question: which social choice functions are truthful?
This question is of the following general type: given a
class of functions (here, social choice functions) and a
property that holds for some of them (here, truthfulness),
characterize the property. The definition of the property itself
provides a characterization, so what more is needed? Here
are some useful notions of characterization:
• Recognition algorithm. Give an algorithm which, given
an appropriate representation of a function in the class,
determines whether the function has the property.
• Parametric representation. Give an explicit parametrized
family of functions and show that each function in the
1 The usual definition of mechanism is more general than this (see [8] Chapter 23.C or [9]); the mechanisms we consider here are usually called direct revelation mechanisms.
family has the property, and that every function with
the property is in the family.
A third notion applies in the case of hereditary properties
of functions. A function g is a subfunction of function f, or
f contains g, if g is obtained by restricting the domain of
f. A property P of functions is hereditary if it is preserved
under taking subfunctions. Truthfulness is easily seen to be
hereditary.
• Sets of obstructions. For a hereditary property P, a
function g that does not have the property is an
obstruction to the property in the sense that any function
containing g doesn't have the property. An obstruction
is minimal if every proper subfunction has the
property. A set of obstructions is complete if every function
that does not have the property contains one of them
as a subfunction. The set of all functions that don't
satisfy P is a complete (but trivial and uninteresting)
set of obstructions; one seeks a set of small (ideally,
minimal) obstructions.
We are not aware of any work on recognition algorithms
for the property of truthfulness, but there are significant
results concerning parametric representations and obstruction
characterizations of truthfulness. It turns out that the
domain of the function, i.e., the set of allowed valuation
matrices, is crucial. For functions with unrestricted domain, i.e.,
whose domain is the set of all real matrices, there are very
good characterizations of truthfulness. For general domains,
however, the picture is far from complete. Typically, the
domains of social choice functions are specified by a system of
constraints. For example, an order constraint requires that
one specified entry in some row be larger than another in
the same row, a range constraint places an upper or lower
bound on an entry, and a zero constraint forces an entry to
be 0. These are all examples of linear inequality constraints
on the matrix entries.
Building on work of Roberts [10], Lavi, Mu'alem and
Nisan [6] defined a condition called weak monotonicity
(WMON). (Independently, in the context of multi-unit
auctions, Bikhchandani, Chatterji and Sen [3] identified the
same condition and called it nondecreasing in marginal
utilities (NDMU).) The definition of W-MON can be formulated
in terms of obstructions: for some specified simple set F of
functions each having domains of size 2, a function satisfies
W-MON if it contains no function from F. The functions
in F are not truthful, and therefore W-MON is a
necessary condition for truthfulness. Lavi, Mu'alem and Nisan
[6] showed that W-MON is also sufficient for truthfulness
for social choice functions whose domain is order-based, i.e.,
defined by order constraints and zero constraints, and Gui,
Muller and Vohra [5] extended this to other domains. The
domain constraints considered in both papers are special
cases of linear inequality constraints, and it is natural to
ask whether W-MON is sufficient for any domain defined by
such constraints. Lavi, Mu"alem and Nisan [6] conjectured
that W-MON suffices for convex domains. The main result
of this paper is an affirmative answer to this conjecture:
Theorem 1. For any social choice function having
convex domain and finite range, weak monotonicity is necessary
and sufficient for truthfulness.
Using the interpretation of weak monotonicity in terms
of obstructions each having domain size 2, this provides a
complete set of minimal obstructions for truthfulness within
the class of social choice functions with convex domains.
The two hypotheses on the social choice function, that
the domain is convex and that the range is finite, can not
be omitted as is shown by the examples given in section 7.
1.1 Related Work
There is a simple and natural parametrized set of
truthful social choice functions called affine maximizers. Roberts
[10] showed that for functions with unrestricted domain,
every truthful function is an affine maximizer, thus providing
a parametrized representation for truthful functions with
unrestricted domain. There are many known examples of
truthful functions over restricted domains that are not affine
maximizers (see [1], [2], [4], [6] and [7]). Each of these
examples has a special structure and it seems plausible that
there might be some mild restrictions on the class of all
social choice functions such that all truthful functions obeying
these restrictions are affine maximizers. Lavi, Mu'alem and
Nisan [6] obtained a result in this direction by showing that
for order-based domains, under certain technical
assumptions, every truthful social choice function is almost an
affine maximizer.
There are a number of results about truthfulness that
can be viewed as providing obstruction characterizations,
although the notion of obstruction is not explicitly discussed.
For a player i, a set of valuation matrices is said to be
i-local if all of the matrices in the set are identical except for
row i. Call a social choice function i-local if its domain is
i-local and call it local if it is i-local for some i. The following
easily proved fact is used extensively in the literature:
Proposition 2. The social choice function f is truthful
if and only if every local subfunction of f is truthful.
This implies that the set of all local non-truthful functions
comprises a complete set of obstructions for truthfulness.
This set is much smaller than the set of all non-truthful
functions, but is still far from a minimal set of obstructions.
Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra
[5] identified a necessary and sufficient condition for
truthfulness (see lemma 3 below) called the nonnegative cycle
property. This condition can be viewed as providing a
minimal complete set of non-truthful functions. As is required
by proposition 2, each function in the set is local.
Furthermore it is one-to-one. In particular its domain has size at
most the number of possible outcomes |A|.
As this complete set of obstructions consists of minimal
non-truthful functions, this provides the optimal obstruction
characterization of non-truthful functions within the class of
all social choice functions. But by restricting attention to
interesting subclasses of social choice functions, one may hope
to get simpler sets of obstructions for truthfulness within
that class.
The condition of weak monotonicity mentioned earlier can
be defined by a set of obstructions, each of which is a local
function of domain size exactly 2. Thus the results of Lavi,
Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5]
give a very simple set of obstructions for truthfulness within
certain subclasses of social choice functions. Theorem 1
extends these results to a much larger subclass of functions.
287
1.2 Weak Monotonicity and the Nonnegative
Cycle Property
By proposition 2, a function is truthful if and only if each
of its local subfunctions is truthful. Therefore, to get a set
of obstructions for truthfulness, it suffices to obtain such a
set for local functions.
The domain of an i-local function consists of matrices that
are fixed on all rows but row i. Fix such a function f and
let $D \subseteq R^A$ be the set of allowed choices for row i. Since
f depends only on row i and row i is chosen from D, we
can view f as a function from D to A. Therefore, f is a
social choice function having one player; we refer to such a
function as a single player function.
Associated to any single player function f with domain D
we define an edge-weighted directed graph Hf whose vertex
set is the image of f. For convenience, we assume that f
is surjective and so this image is A. For each a, b ∈ A and $x \in f^{-1}(a)$ there is an edge $e_x(a, b)$ from a to b with weight x(a) − x(b). The weight of a set of edges is just the sum of
the weights of the edges. We say that f satisfies:
• the nonnegative cycle property if every directed cycle
has nonnegative weight.
• the nonnegative two-cycle property if every directed
cycle between two vertices has nonnegative weight.
We say a local function g satisfies nonnegative cycle
property/nonnegative two-cycle property if its associated single
player function f does.
The graph Hf has a possibly infinite number of edges
between any two vertices. We define $G_f$ to be the edge-weighted directed graph with exactly one edge from a to b, whose weight $\delta_{ab}$ is the infimum (possibly −∞) of all of the edge weights $e_x(a, b)$ for $x \in f^{-1}(a)$. It is easy to see that $H_f$
has the nonnegative cycle property/nonnegative two-cycle
property if and only if Gf does. Gf is called the outcome
graph of f.
The weak monotonicity property mentioned earlier can
be defined for arbitrary social choice functions by the
condition that every local subfunction satisfies the nonnegative
two-cycle property. The following result was obtained by
Rochet [11] in a slightly different form and rediscovered by
Rozenshtrom [12] and Gui, Muller and Vohra [5]:
Lemma 3. A local social choice function is truthful if and
only if it has the nonnegative cycle property. Thus a social
choice function is truthful if and only if every local
subfunction satisfies the nonnegative cycle property.
In light of this, theorem 1 follows from:
Theorem 4. For any surjective single player function f :
$D \to A$, where D is a convex subset of $R^A$ and A is finite,
the nonnegative two-cycle property implies the nonnegative
cycle property.
This is the result we will prove.
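Although the proof itself is indirect, the conclusion is easy to check computationally when the outcome graph is given explicitly: for finite A and finite weights $\delta_{ab}$, the nonnegative cycle property is exactly the absence of a negative cycle, which Bellman-Ford detects. A minimal sketch of our own follows (it assumes the weights are provided as a matrix; an infimum of −∞ is itself an immediate violation).

```java
final class OutcomeGraphCheck {
    // Bellman-Ford negative-cycle detection on the outcome graph G_f.
    // Starting all distances at 0 is equivalent to adding a super-source
    // with 0-weight edges to every vertex, so any negative cycle is found.
    static boolean hasNegativeCycle(double[][] delta) {
        int n = delta.length;
        double[] dist = new double[n];
        for (int pass = 0; pass < n; pass++) {
            boolean changed = false;
            for (int a = 0; a < n; a++) {
                for (int b = 0; b < n; b++) {
                    if (dist[a] + delta[a][b] < dist[b] - 1e-12) {
                        dist[b] = dist[a] + delta[a][b];
                        changed = true;
                        if (pass == n - 1) return true;  // relaxed after n-1 passes
                    }
                }
            }
            if (!changed) return false;
        }
        return false;
    }
}
```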
1.3 Overview of the Proof of Theorem 4
Let $D \subseteq R^A$ be convex and let $f : D \to A$ be a single player function such that $G_f$ has no negative two-cycles. We want to conclude that $G_f$ has no negative cycles. For two vertices a, b, let $\delta^*_{ab}$ denote the minimum weight of any path from a to b. Clearly $\delta^*_{ab} \le \delta_{ab}$. Our proof shows that the $\delta^*$-weight of every cycle is exactly 0, from which theorem 4 follows.

There seems to be no direct way to compute $\delta^*$ and so we
proceed indirectly. Based on geometric considerations, we
identify a subset of paths in Gf called admissible paths and
a subset of admissible paths called straight paths. We prove
that for any two outcomes a, b, there is a straight path from
a to b (lemma 8 and corollary 10), and all straight paths
from a to b have the same weight, which we denote $\rho_{ab}$ (theorem 12). We show that $\rho_{ab} \le \delta_{ab}$ (lemma 14) and that the ρ-weight of every cycle is 0. The key step of this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17).

It turns out that ρ is equal to $\delta^*$ (corollary 20), although this equality is not needed in the proof of theorem 4.
To expand on the above summary, we give the definitions
of an admissible path and a straight path. These are
somewhat technical and rely on the geometry of f. We first
observe that, without loss of generality, we can assume that
D is (topologically) closed (section 2). In section 3, for each
a ∈ A, we enlarge the set f−1
(a) to a closed convex set
Da ⊆ D in such a way that for a, b ∈ A with a = b, Da and
Db have disjoint interiors. We define an admissible path to
be a sequence of outcomes (a1, . . . , ak) such that each of the
sets Ij = Daj ∩ Daj+1 is nonempty (section 4). An
admissible path is straight if there is a straight line that meets one
point from each of the sets I1, . . . , Ik−1 in order (section 5).
Finally, we mention how the hypotheses of convex domain
and finite range are used in the proof. Both hypotheses are
needed to show: (1) the existence of a straight path from a
to b for all a, b (lemma 8). (2) that the ρ-weight of a directed
triangle is 0 (lemma 17). The convex domain hypothesis is
also needed for the convexity of the sets Da (section 3). The
finite range hypothesis is also needed to reduce theorem 4 to
the case that D is closed (section 2) and to prove that every
straight path from a to b has the same δ-weight (theorem
12).
2. REDUCTION TO CLOSED DOMAIN
We first reduce the theorem to the case that D is closed.
Write $D^C$ for the closure of D. Since A is finite, $D^C = \bigcup_{a \in A} (f^{-1}(a))^C$. Thus for each $v \in D^C - D$, there is an $a = a(v) \in A$ such that $v \in (f^{-1}(a))^C$. Extend f to the function g on $D^C$ by defining g(v) = a(v) for $v \in D^C - D$ and g(v) = f(v) for v ∈ D. It is easy to check that $\delta_{ab}(g) = \delta_{ab}(f)$ for all a, b ∈ A and therefore it suffices to show that the nonnegative two-cycle property for g implies the nonnegative cycle property for g.
Henceforth we assume D is convex and closed.
3. A DISSECTION OF THE DOMAIN
In this section, we construct a family of closed convex sets $\{D_a : a \in A\}$ with disjoint interiors whose union is D, satisfying $f^{-1}(a) \subseteq D_a$ for each a ∈ A.

Let $R_a = \{v : \forall b \in A, \ v(a) - v(b) \ge \delta_{ab}\}$. $R_a$ is a closed polyhedron containing $f^{-1}(a)$. The next proposition implies that any two of these polyhedra intersect only on their boundary.
Proposition 5. Let a, b ∈ A. If v ∈ Ra ∩Rb then v(a)−
v(b) = δab = −δba.
[Figure 1: A 2-dimensional domain with 5 outcomes, showing the regions $D_a, D_b, D_c, D_d, D_e$ and the labeled points v, w, x, y, z, u, p referenced in the examples below.]
Proof. v ∈ Ra implies v(a) − v(b) ≥ δab and v ∈ Rb
implies v(b)−v(a) ≥ δba which, by the nonnegative two-cycle
property, implies v(a) − v(b) ≤ δab. Thus v(a) − v(b) = δab
and by symmetry v(b) − v(a) = δba.
Finally, we restrict the collection of sets {Ra : a ∈ A}
to the domain D by defining Da = Ra ∩ D for each a ∈
A. Clearly, $D_a$ is closed and convex, and contains $f^{-1}(a)$. Therefore $\bigcup_{a \in A} D_a = D$. Also, by proposition 5, any point v in $D_a \cap D_b$ satisfies $v(a) - v(b) = \delta_{ab} = -\delta_{ba}$.
4. PATHS AND D-SEQUENCES
A path of size k is a sequence $\vec a = (a_1, \ldots, a_k)$ with each $a_i \in A$ (possibly with repetition). We call $\vec a$ an $(a_1, a_k)$-path. For a path $\vec a$, we write $|\vec a|$ for the size of $\vec a$; $\vec a$ is simple if the $a_i$'s are distinct.

For b, c ∈ A we write $P_{bc}$ for the set of (b, c)-paths and $SP_{bc}$ for the set of simple (b, c)-paths. The δ-weight of path $\vec a$ is defined by
$$\delta(\vec a) = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}}.$$
A D-sequence of order k is a sequence $\vec u = (u_0, \ldots, u_k)$ with each $u_i \in D$ (possibly with repetition). We call $\vec u$ a $(u_0, u_k)$-sequence. For a D-sequence $\vec u$, we write $ord(\vec u)$ for the order of $\vec u$. For v, w ∈ D we write $S^{vw}$ for the set of (v, w)-sequences.
A compatible pair is a pair $(\vec a, \vec u)$ where $\vec a$ is a path and $\vec u$ is a D-sequence satisfying $ord(\vec u) = |\vec a|$ and, for each $i \in [k]$, both $u_{i-1}$ and $u_i$ belong to $D_{a_i}$.

We write $C(\vec a)$ for the set of D-sequences $\vec u$ that are compatible with $\vec a$. We say that $\vec a$ is admissible if $C(\vec a)$ is nonempty. For $\vec u \in C(\vec a)$ we define
$$\Delta_{\vec a}(\vec u) = \sum_{i=1}^{|\vec a| - 1} (u_i(a_i) - u_i(a_{i+1})).$$
For v, w ∈ D and b, c ∈ A, we define $C^{vw}_{bc}$ to be the set of compatible pairs $(\vec a, \vec u)$ such that $\vec a \in P_{bc}$ and $\vec u \in S^{vw}$.
To illustrate these definitions, figure 1 gives the
dissection of a domain, a 2-dimensional plane, into five regions
Da, Db, Dc, Dd, De. D-sequence (v, w, x, y, z) is compatible
with both path (a, b, c, e) and path (a, b, d, e); D-sequence
(v, w, u, y, z) is compatible with a unique path (a, b, d, e).
D-sequence (x, w, p, y, z) is compatible with a unique path
(b, a, d, e). Hence (a, b, c, e), (a, b, d, e) and (b, a, d, e) are
admissible paths. However, neither path (a, c, d) nor path (b, e) is admissible.
Proposition 6. For any compatible pair $(\vec a, \vec u)$, $\Delta_{\vec a}(\vec u) = \delta(\vec a)$.

Proof. Let $k = ord(\vec u) = |\vec a|$. By the definition of a compatible pair, $u_i \in D_{a_i} \cap D_{a_{i+1}}$ for $i \in [k-1]$, and $u_i(a_i) - u_i(a_{i+1}) = \delta_{a_i a_{i+1}}$ by proposition 5. Therefore,
$$\Delta_{\vec a}(\vec u) = \sum_{i=1}^{k-1} (u_i(a_i) - u_i(a_{i+1})) = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}} = \delta(\vec a).$$
Lemma 7. Let b, c ∈ A and let $\vec a, \vec a' \in P_{bc}$. If $C(\vec a) \cap C(\vec a') \ne \emptyset$ then $\delta(\vec a) = \delta(\vec a')$.

Proof. Let $\vec u$ be a D-sequence in $C(\vec a) \cap C(\vec a')$. Since $\delta(\vec a) = \Delta_{\vec a}(\vec u)$ and $\delta(\vec a') = \Delta_{\vec a'}(\vec u)$ by proposition 6, it suffices to show $\Delta_{\vec a}(\vec u) = \Delta_{\vec a'}(\vec u)$.

Let $k = ord(\vec u) = |\vec a| = |\vec a'|$. Since
$$\Delta_{\vec a}(\vec u) = \sum_{i=1}^{k-1} (u_i(a_i) - u_i(a_{i+1})) = u_1(a_1) + \sum_{i=2}^{k-1} (u_i(a_i) - u_{i-1}(a_i)) - u_{k-1}(a_k) = u_1(b) + \sum_{i=2}^{k-1} (u_i(a_i) - u_{i-1}(a_i)) - u_{k-1}(c),$$
we have
$$\Delta_{\vec a}(\vec u) - \Delta_{\vec a'}(\vec u) = \sum_{i=2}^{k-1} \big((u_i(a_i) - u_{i-1}(a_i)) - (u_i(a'_i) - u_{i-1}(a'_i))\big) = \sum_{i=2}^{k-1} \big((u_i(a_i) - u_i(a'_i)) - (u_{i-1}(a_i) - u_{i-1}(a'_i))\big).$$
Noticing that both $u_{i-1}$ and $u_i$ belong to $D_{a_i} \cap D_{a'_i}$, we have by proposition 5
$$u_{i-1}(a_i) - u_{i-1}(a'_i) = \delta_{a_i a'_i} = u_i(a_i) - u_i(a'_i).$$
Hence $\Delta_{\vec a}(\vec u) - \Delta_{\vec a'}(\vec u) = 0$.
5. LINEAR D-SEQUENCES AND STRAIGHT
PATHS
For v, w ∈ D we write vw for the (closed) line segment
joining v and w.
A D-sequence −→u of order k is linear provided that there
is a sequence of real numbers 0 = λ0 ≤ λ1 ≤ . . . ≤ λk = 1
such that ui = (1 − λi)u0 + λiuk. In particular, each ui
belongs to u0uk. For v, w ∈ D we write Lvw
for the set of
linear (v, w)-sequences.
For b, c ∈ A and v, w ∈ D we write LCvw
bc for the set of
compatible pairs (−→a , −→u ) such that −→a ∈ Pbc and −→u ∈ Lvw
.
For a path −→a , we write L(−→a ) for the set of linear
sequences compatible with −→a . We say that −→a is straight if
L(−→a ) = ∅.
For example, in figure 1, D-sequence (v, w, x, y, z) is
linear while (v, w, u, y, z), (x, w, p, y, z), and (x, v, w, y, z) are
not. Hence path (a, b, c, e) and (a, b, d, e) are both straight.
However, path (b, a, d, e) is not straight.
289
Lemma 8. Let b, c ∈ A and v ∈ Db, w ∈ Dc. There is
a simple path −→a and D-sequence −→u such that (−→a , −→u ) ∈
LCvw
bc . Furthermore, for any such path −→a , δ(−→a ) ≤ v(b) −
v(c).
Proof. By the convexity of D, any sequence of points on
vw is a D-sequence.
If b = c, singleton path −→a = (b) and D-sequence −→u =
(v, w) are obviously compatible. δ(−→a ) = 0 = v(b) − v(c).
So assume b = c. If Db ∩Dc ∩vw = ∅, we pick an arbitrary
x from this set and let −→a = (b, c) ∈ SPbc, −→u = (v, x, w) ∈
Lvw
. Again it is easy to check the compatibility of (−→a , −→u ).
Since v ∈ Db, v(b) − v(c) ≥ δbc = δ(−→a ).
For the remaining case b = c and Db ∩Dc ∩vw = ∅, notice
v = w otherwise v = w ∈ Db ∩ Dc ∩ vw. So we can define
λx for every point x on vw to be the unique number in [0, 1]
such that x = (1 − λx)v + λxw. For convenience, we write
x ≤ y for λx ≤ λy.
Let Ia = Da ∩ vw for each a ∈ A. Since D = ∪a∈ADa, we
have vw = ∪a∈AIa. Moreover, by the convexity of Da and
vw, Ia is a (possibly trivial) closed interval.
We begin by considering the case that Ib and Ic are each
a single point, that is, Ib = {v} and Ic = {w}.
Let S be a minimal subset of A satisfying ∪s∈SIs = vw.
For each s ∈ S, Is is maximal, i.e., not contained in any
other It, for t ∈ S. In particular, the intervals {Is : s ∈
S} have all left endpoints distinct and all right endpoints
distinct and the order of the left endpoints is the same as
that of the right endpoints. Let k = |S| + 2 and index S
as a2, . . . , ak−1 in the order defined by the right endpoints.
Denote the interval Iai by [li, ri]. Thus l2 < l3 < . . . < lk−1,
r2 < r3 < . . . < rk−1 and the fact that these intervals cover
vw implies l2 = v, rk−1 = w and for all 2 ≤ i ≤ k − 2,
li+1 ≤ ri which further implies li < ri. Now we define
the path −→a = (a1, a2, . . . , ak−1, ak) with a1 = b, ak = c
and a2, a3, . . . , ak−1 as above. Define the linear D-sequence
−→u = (u0, u1, . . . , uk) by u0 = u1 = v, uk = w and for
2 ≤ i ≤ k−1, ui = ri. It follows immediately that (−→a , −→u ) ∈
LCvw
bc . Neither b nor c is in S since lb = rb and lc = rc. Thus
−→a is simple.
Finally to show δ(−→a ) ≤ v(b) − v(c), we note
v(b) − v(c) = v(a1) − v(ak) =
k−1X
i=1
(v(ai) − v(ai+1))
and
δ(−→a ) = ∆−→a (−→u ) =
k−1X
i=1
(ui(ai) − ui(ai+1))
= v(a1) − v(a2) +
k−1X
i=2
(ri(ai) − ri(ai+1)).
For two outcomes d, e ∈ A, let us define fde(z) = z(d)−z(e)
for all z ∈ D. It suffices to show faiai+1 (ri) ≤ faiai+1 (v) for
2 ≤ i ≤ k − 1.
Fact 9. For d, e ∈ A, fde(z) is a linear function of z.
Furthermore, if x ∈ Dd and y ∈ De with x = y, then
fde(x) = x(d) − x(e) ≥ δde ≥ −δed ≥ −(y(e) − y(d)) =
fde(y). Therefore fde(z) is monotonically nonincreasing along
the line
←→
xy as z moves in the direction from x to y.
Applying this fact with d = ai, e = ai+1, x = li and y = ri
gives the desired conclusion. This completes the proof for
the case that Ib = {v} and Ic = {w}.
For general Ib, Ic, rb < lc otherwise Db ∩ Dc ∩ vw = Ib ∩
Ic = ∅. Let v = rb and w = lc. Clearly we can apply the
above conclusion to v ∈ Db, w ∈ Dc and get a compatible
pair (−→a , −→u ) ∈ LCv w
bc with −→a simple and δ(−→a ) ≤ v (b) −
v (c). Define the linear D-sequence −→u by u0 = v, uk = w
and ui = ui for i ∈ [k − 1]. (−→a , −→u ) ∈ LCvw
bc is evident.
Moreover, applying the above fact with d = b, e = c, x = v
and y = w, we get v(b) − v(c) ≥ v (b) − v (c) ≥ δ(−→a ).
Corollary 10. For any b, c ∈ A there is a straight (b,
c)path.
The main result of this section (theorem 12) says that for
any b, c ∈ A, every straight (b, c)-path has the same δ-weight.
To prove this, we first fix v ∈ Db and w ∈ Dc and show
(lemma 11) that every straight (b, c)-path compatible with
some linear (v, w)-sequence has the same δ-weight ρbc(v, w).
We then show in theorem 12 that ρbc(v, w) is the same for
all choices of v ∈ Db and w ∈ Dc.
Lemma 11. For b, c ∈ A, there is a function ρbc : Db ×
Dc −→ R satisfying that for any (−→a , −→u ) ∈ LCvw
bc , δ(−→a ) =
ρbc(v, w).
Proof. Let (−→a , −→u ), (−→a , −→u ) ∈ LCvw
bc . It suffices to
show δ(−→a ) = δ(−→a ). To do this we construct a linear
(v, w)-sequence −→u and paths −→a ∗
, −→a ∗∗
∈ Pbc, both
compatible with −→u , satisfying δ(−→a ∗
) = δ(−→a ) and δ(−→a ∗∗
) = δ(−→a ).
Lemma 7 implies δ(−→a ∗
) = δ(−→a ∗∗
), which will complete the
proof.
Let |−→a | = ord(−→u ) = k and |−→a | = ord(−→u ) = l. We
select −→u to be any linear (v, w)-sequence (u0, u1, . . . , ut) such
that −→u and −→u are both subsequences of −→u , i.e., there
are indices 0 = i0 < i1 < · · · < ik = t and 0 = j0 <
j1 < · · · < jl = t such that −→u = (ui0 , ui1 , . . . , uik ) and
−→u = (uj0 , uj1 , . . . , ujl ). We now construct a (b, c)-path
−→a ∗
compatible with −→u such that δ(−→a ∗
) = δ(−→a ). (An
analogous construction gives −→a ∗∗
compatible with −→u such
that δ(−→a ∗∗
) = δ(−→a ).) This will complete the proof.
−→a ∗
is defined as follows: for 1 ≤ j ≤ t, a∗
j = ar where
r is the unique index satisfying ir−1 < j ≤ ir. Since both
uir−1 = ur−1 and uir = ur belong to Dar
, uj ∈ Dar
for
ir−1 ≤ j ≤ ir by the convexity of Dar
. The compatibility of
(−→a ∗
, −→u ) follows immediately. Clearly, a∗
1 = a1 = b and a∗
t =
ak = c, so −→a ∗
∈ Pbc. Furthermore, as δa∗
j a∗
j+1
= δarar
= 0
for each r ∈ [k], ir−1 < j < ir,
δ(−→a ∗
) =
k−1X
r=1
δa∗
ir
a∗
ir+1
=
k−1X
r=1
δarar+1
= δ(−→a ).
We are now ready for the main theorem of the section:
Theorem 12. ρbc is a constant map on Db × Dc. Thus
for any b, c ∈ A, every straight (b, c)-path has the same
δweight.
Proof. For a path −→a , (v, w) is compatible with −→a if
there is a linear (v, w)-sequence compatible with −→a . We
write CP(−→a ) for the set of pairs (v, w) compatible with
−→a . ρbc is constant on CP(−→a ) because for each (v, w) ∈
CP(−→a ), ρbc(v, w) = δ(−→a ). By lemma 8, we also haveS
−→a ∈SPbc
CP(−→a ) = Db ×Dc. Since A is finite, SPbc, the set
of simple paths from b to c, is finite as well.
290
Next we prove that for any path −→a , CP(−→a ) is closed.
Let ((vn
, wn
) : n ∈ N) be a convergent sequence in CP(−→a )
and let (v, w) be the limit. We want to show that (v, w) ∈
CP(−→a ). For each n ∈ N, since (vn
, wn
) ∈ CP(−→a ), there is
a linear (vn
, wn
)-sequence un
compatible with −→a , i.e., there
are 0 = λn
0 ≤ λn
1 ≤ . . . ≤ λn
k = 1 (k = |−→a |) such that
un
j = (1 − λn
j )vn
+ λn
j wn
(j = 0, 1, . . . , k). Since for each
n, λn
= (λn
0 , λn
1 , . . . , λn
k ) belongs to the closed bounded set
[0, 1]k+1
we can choose an infinite subset I ⊆ N such that the
sequence (λn
: n ∈ I) converges. Let λ = (λ0, λ1, . . . , λk) be
the limit. Clearly 0 = λ0 ≤ λ1 ≤ · · · ≤ λk = 1.
Define the linear (v, w)-sequence −→u by uj = (1 − λj )v +
λj w (j = 0, 1, . . . , k). Then for each j ∈ {0, . . . , k}, uj is
the limit of the sequence (un
j : n ∈ I). For j > 0, each un
j
belongs to the closed set Daj , so uj ∈ Daj . Similarly, for j <
k each un
j belongs to the closed set Daj+1 , so uj ∈ Daj+1 .
Hence (−→a , −→u ) is compatible, implying (v, w) ∈ CP(−→a ).
Now we have Db × Dc covered by finitely many closed
subsets on each of them ρbc is a constant.
Suppose for contradiction that there are (v, w), (v , w ) ∈
Db × Dc such that ρbc(v, w) = ρbc(v , w ).
L = {((1 − λ)v + λv , (1 − λ)w + λw ) : λ ∈ [0, 1]}
is a line segment in Db ×Dc by the convexity of Db, Dc. Let
L1 = {(x, y) ∈ L : ρbc(x, y) = ρbc(v, w)}
and L2 = L − L1. Clearly (v, w) ∈ L1, (v , w ) ∈ L2. Let
P = {−→a ∈ SPbc : δ(−→a ) = ρbc(v, w)}.
L1 =
`S
−→a ∈P CP(−→a )
´
∩ L, L2 =
S
−→a ∈SPbc−P CP(−→a )
∩ L
are closed by the finiteness of P. This is a contradiction,
since it is well known (and easy to prove) that a line segment
can not be expressed as the disjoint union of two nonempty
closed sets.
Summarizing corollary 10, lemma 11 and theorem 12, we
have
Corollary 13. For any b, c ∈ A, there is a real number
ρbc with the property that (1) There is at least one straight
(b, c)-path of δ-weight ρbc and (2) Every straight (b, c)-path
has δ-weight ρbc.
6. PROOF OF THEOREM 4
Lemma 14. ρbc ≤ δbc for all b, c ∈ A.
Proof. For contradiction, suppose ρbc − δbc = > 0.
By the definition of δbc, there exists v ∈ f−1
(b) ⊆ Db with
v(b) − v(c) < δbc + = ρbc. Pick an arbitrary w ∈ Dc.
By lemma 8, there is a compatible pair (−→a , −→u ) ∈ LCvw
bc
with δ(−→a ) ≤ v(b) − v(c). Since −→a is a straight (b, c)-path,
ρbc = δ(−→a ) ≤ v(b) − v(c), leading to a contradiction.
Define another edge-weighted complete directed graph Gf
on vertex set A where the weight of arc (a, b) is ρab.
Immediately from lemma 14, the weight of every directed cycle in
Gf is bounded below by its weight in Gf . To prove theorem
4, it suffices to show the zero cycle property of Gf , i.e.,
every directed cycle has weight zero. We begin by considering
two-cycles.
Lemma 15. ρbc + ρcb = 0 for all b, c ∈ A.
Proof. Let −→a be a straight (b, c)-path compatible with
linear sequence −→u . let −→a be the reverse of −→a and −→u the
reverse of −→u . Obviously, (−→a , −→u ) is compatible as well and
thus −→a is a straight (c, b)-path. Therefore,
ρbc + ρcb = δ(−→a ) + δ(−→a ) =
k−1X
i=1
δaiai+1 +
k−1X
i=1
δai+1ai
=
k−1X
i=1
(δaiai+1 + δai+1ai ) = 0,
where the final equality uses proposition 5.
Next, for three cycles, we first consider those compatible
with linear triples.
Lemma 16. If there are collinear points u ∈ Da, v ∈ Db,
w ∈ Dc (a, b, c ∈ A), ρab + ρbc + ρca = 0.
Proof. First, we prove for the case where v is between u
and w. From lemma 8, there are compatible pairs (−→a , −→u ) ∈
LCuv
ab , (−→a , −→u ) ∈ LCvw
bc . Let |−→a | = ord(−→u ) = k and
|−→a | = ord(−→u ) = l. We paste −→a and −→a together as
−→a = (a = a1, a2, . . . , ak−1, ak, a1 , . . . , al = c),
−→u and −→u as
−→u = (u = u0, u1, . . . , uk = v = u0 , u1 , . . . , ul = w).
Clearly (−→a , −→u ) ∈ LCuw
ac and
δ(−→a ) =
k−1X
i=1
δaiai+1
+ δak
a1
+
l−1X
i=1
δai ai+1
= δ(−→a ) + δbb + δ(−→a )
= δ(−→a ) + δ(−→a ).
Therefore, ρac = δ(−→a ) = δ(−→a ) + δ(−→a ) = ρab + ρbc.
Moreover, ρac = −ρca by lemma 15, so we get ρab + ρbc +
ρca = 0.
Now suppose w is between u and v. By the above
argument, we have ρac + ρcb + ρba = 0 and by lemma 15,
ρab + ρbc + ρca = −ρba − ρcb − ρac = 0.
The case that u is between v and w is similar.
Now we are ready for the zero three-cycle property:
Lemma 17. ρab + ρbc + ρca = 0 for all a, b, c ∈ A.
Proof. Let S = {(a, b, c) : ρab + ρbc + ρca = 0} and
for contradiction, suppose S = ∅. S is finite. For each
a ∈ A, choose va ∈ Da arbitrarily and let T be the convex
hull of {va : a ∈ A}. For each (a, b, c) ∈ S, let Rabc =
Da × Db × Dc ∩ T3
. Clearly, each Rabc is nonempty and
compact. Moreover, by lemma 16, no (u, v, w) ∈ Rabc is
collinear.
Define f : D3
→ R by f(u, v, w) = |v−u|+|w−v|+|u−w|.
For (a, b, c) ∈ S, the restriction of f to the compact set Rabc
attains a minimum m(a, b, c) at some point (u, v, w) ∈ Rabc
by the continuity of f, i.e., there exists a triangle ∆uvw of
minimum perimeter within T with u ∈ Da, v ∈ Db, w ∈ Dc.
Choose (a∗
, b∗
, c∗
) ∈ S so that m(a∗
, b∗
, c∗
) is minimum
and let (u∗
, v∗
, w∗
) ∈ Ra∗b∗c∗ be a triple achieving it. Pick
an arbitrary point p in the interior of ∆u∗
v∗
w∗
. By the
convexity of domain D, there is d ∈ A such that p ∈ Dd.
291
Consider triangles ∆u∗
pw∗
, ∆w∗
pv∗
and ∆v∗
pu∗
. Since
each of them has perimeter less than that of ∆u∗
v∗
w∗
and
all three triangles are contained in T, by the minimality of
∆u∗
v∗
w∗
, (a∗
, d, c∗
), (c∗
, d, b∗
), (b∗
, d, a∗
) ∈ S. Thus
ρa∗d + ρdc∗ + ρc∗a∗ = 0,
ρc∗d + ρdb∗ + ρb∗c∗ = 0,
ρb∗d + ρda∗ + ρa∗b∗ = 0.
Summing up the three equalities,
(ρa∗d + ρdc∗ + ρc∗d + ρdb∗ + ρb∗d + ρda∗ )
+(ρc∗a∗ + ρb∗c∗ + ρa∗b∗ ) = 0,
which yields a contradiction
ρa∗b∗ + ρb∗c∗ + ρc∗a∗ = 0.
With the zero two-cycle and three-cycle properties, the
zero cycle property of Gf is immediate. As noted earlier,
this completes the proof of theorem 4.
Theorem 18. Every directed cycle of Gf has weight zero.
Proof. Clearly, zero two-cycle and three-cycle properties
imply triangle equality ρab +ρbc = ρac for all a, b, c ∈ A. For
a directed cycle C = a1a2 . . . aka1, by inductively applying
triangle equality, we have
Pk−1
i=1 ρaiai+1 = ρa1ak . Therefore,
the weight of C is
k−1X
i=1
ρaiai+1 + ρaka1 = ρa1ak + ρaka1 = 0.
As final remarks, we note that our result implies the
following strengthenings of theorem 12:
Corollary 19. For any b, c ∈ A, every admissible (b,
c)path has the same δ-weight ρbc.
Proof. First notice that for any b, c ∈ A, if Db ∩Dc = ∅,
δbc = ρbc. To see this, pick v ∈ Db ∩ Dc arbitrarily.
Obviously, path −→a = (b, c) is compatible with linear sequence
−→u = (v, v, v) and is thus a straight (b, c)-path. Hence
ρbc = δ(−→a ) = δbc.
Now for any b, c ∈ A and any (b, c)-path −→a with C(−→a ) =
∅, let −→u ∈ C(−→a ). Since ui ∈ Dai ∩ Dai+1 for i ∈ [|−→a | − 1],
δ(−→a ) =
|−→a |−1
X
i=1
δaiai+1 =
|−→a |−1
X
i=1
ρaiai+1 ,
which by theorem 18, = −ρa|−→a |a1 = ρa1a|−→a |
= ρbc.
Corollary 20. For any b, c ∈ A, ρbc is equal to δ∗
bc, the
minimum δ-weight over all (b, c)-paths.
Proof. Clearly ρbc ≥ δ∗
bc by corollary 13. On the other
hand, for every (b, c)-path −→a = (b = a1, a2, . . . , ak = c), by
lemma 14,
δ(−→a ) =
k−1X
i=1
δaiai+1 ≥
k−1X
i=1
ρaiai+1 ,
which by theorem 18, = −ρaka1 = ρa1ak = ρbc. Hence ρbc ≤
δ∗
bc, which completes the proof.
7. COUNTEREXAMPLES TO STRONGER
FORMS OF THEOREM 4
Theorem 4 applies to social choice functions with convex
domain and finite range. We now show that neither of these
hypotheses can be omitted. Our examples are single player
functions.
The first example illustrates that convexity can not be
omitted. We present an untruthful single player social choice
function with three outcomes a, b, c satisfying W-MON on a
path-connected but non-convex domain. The domain is the
boundary of a triangle whose vertices are x = (0, 1, −1), y =
(−1, 0, 1) and z = (1, −1, 0). x and the open line segment
zx is assigned outcome a, y and the open line segment xy
is assigned outcome b, and z and the open line segment
yz is assigned outcome c. Clearly, δab = −δba = δbc =
−δcb = δca = −δac = −1, W-MON (the nonnegative
twocycle property) holds. Since there is a negative cycle δab +
δbc + δca = −3, by lemma 3, this is not a truthful choice
function.
We now show that the hypothesis of finite range can not
be omitted. We construct a family of single player social
choice functions each having a convex domain and an infinite
number of outcomes, and satisfying weak monotonicity but
not truthfulness.
Our examples will be specified by a positive integer n and
an n × n matrix M satisfying the following properties: (1)
M is non-singular. (2) M is positive semidefinite. (3) There
are distinct i1, i2, . . . , ik ∈ [n] satisfying
k−1X
j=1
(M(ij, ij) − M(ij , ij+1)) + (M(ik, ik) − M(ik, i1)) < 0.
Here is an example matrix with n = 3 and (i1, i2, i3) =
(1, 2, 3):
0
@
0 1 −1
−1 0 1
1 −1 0
1
A
Let e1, e2, . . . , en denote the standard basis of Rn
. Let
Sn denote the convex hull of {e1, e2 . . . , en}, which is the
set of vectors in Rn
with nonnegative coordinates that sum
to 1. The range of our social choice function will be the
set Sn and the domain D will be indexed by Sn, that is
D = {yλ : λ ∈ Sn}, where yλ is defined below. The function
f maps yλ to λ.
Next we specify yλ. By definition, D must be a set of
functions from Sn to R. For λ ∈ Sn, the domain element
yλ : Sn −→ R is defined by yλ(α) = λT
Mα. The
nonsingularity of M guarantees that yλ = yµ for λ = µ ∈ Sn.
It is easy to see that D is a convex subset of the set of all
functions from Sn to R.
The outcome graph Gf is an infinite graph whose vertex
set is the outcome set A = Sn. For outcomes λ, µ ∈ A, the
edge weight δλµ is equal to
δλµ = inf{v(λ) − v(µ) : f(v) = λ}
= yλ(λ) − yλ(µ) = λT
Mλ − λT
Mµ = λT
M(λ − µ).
We claim that Gf satisfies the nonnegative two-cycle
property (W-MON) but has a negative cycle (and hence is not
truthful).
For outcomes λ, µ ∈ A,
δλµ +δµλ = λT
M(λ−µ)+µT
M(µ−λ) = (λ−µ)T
M(λ−µ),
292
which is nonnegative since M is positive semidefinite. Hence
the nonnegative two-cycle property holds. Next we show
that Gf has a negative cycle. Let i1, i2, . . . , ik be a
sequence of indices satisfying property 3 of M. We claim
ei1 ei2 . . . eik ei1 is a negative cycle. Since
δeiej = eT
i M(ei − ej) = M(i, i) − M(i, j)
for any i, j ∈ [k], the weight of the cycle
k−1X
j=1
δeij
eij+1
+ δeik
ei1
=
k−1X
j=1
(M(ij , ij ) − M(ij, ij+1)) + (M(ik, ik) − M(ik, i1)) < 0,
which completes the proof.
Finally, we point out that the third property imposed on
the matrix M has the following interpretation. Let R(M) =
{r1, r2, . . . , rn} be the set of row vectors of M and let hM be
the single player social choice function with domain R(M)
and range {1, 2, . . . , n} mapping ri to i. Property 3 is
equivalent to the condition that the outcome graph GhM has a
negative cycle. By lemma 3, this is equivalent to the
condition that hM is untruthful.
8. FUTURE WORK
As stated in the introduction, the goal underlying the
work in this paper is to obtain useful and general
characterizations of truthfulness.
Let us say that a set D of P × A real valuation matrices
is a WM-domain if any social choice function on D
satisfying weak monotonicity is truthful. In this paper, we showed
that for finite A, any convex D is a WM-domain. Typically,
the domains of social choice functions considered in
mechanism design are convex, but there are interesting examples
with non-convex domains, e.g., combinatorial auctions with
unknown single-minded bidders. It is intriguing to find the
most general conditions under which a set D of real
matrices is a WM-domain. We believe that convexity is the main
part of the story, i.e., a WM-domain is, after excluding some
exceptional cases, essentially a convex set.
Turning to parametric representations, let us say a set
D of P × A matrices is an AM-domain if any truthful
social choice function with domain D is an affine maximizer.
Roberts" theorem says that the unrestricted domain is an
AM-domain. What are the most general conditions under
which a set D of real matrices is an AM-domain?
Acknowledgments
We thank Ron Lavi for helpful discussions and the two
anonymous referees for helpful comments.
9. REFERENCES
[1] A. Archer and E. Tardos. Truthful mechanisms for
one-parameter agents. In IEEE Symposium on
Foundations of Computer Science, pages 482-491, 2001.
[2] Y. Bartal, R. Gonen, and N. Nisan. Incentive
compatible multi unit combinatorial auctions. In
TARK "03: Proceedings of the 9th conference on
Theoretical aspects of rationality and knowledge, pages
72-87. ACM Press, 2003.
[3] S. Bikhchandani, S. Chatterjee, and A. Sen. Incentive
compatibility in multi-unit auctions. Technical report,
UCLA Department of Economics, Dec. 2004.
[4] A. Goldberg, J. Hartline, A. Karlin, M. Saks and
A. Wright. Competitive Auctions, 2004.
[5] H. Gui, R. Muller, and R. Vohra. Dominant strategy
mechanisms with multidimensional types. Technical
Report 047, Maastricht: METEOR, Maastricht
Research School of Economics of Technology and
Organization, 2004. available at
http://ideas.repec.org/p/dgr/umamet/2004047.html.
[6] R. Lavi, A. Mu"alem, and N. Nisan. Towards a
characterization of truthful combinatorial auctions. In
FOCS "03: Proceedings of the 44th Annual IEEE
Symposium on Foundations of Computer Science, page
574. IEEE Computer Society, 2003.
[7] D. Lehmann, L. O"Callaghan, and Y. Shoham. Truth
revelation in approximately efficient combinatorial
auctions. J. ACM, 49(5):577-602, 2002.
[8] A. Mas-Colell, M. Whinston, and J. Green.
Microeconomic Theory. Oxford University Press, 1995.
[9] N. Nisan. Algorithms for selfish agents. Lecture Notes
in Computer Science, 1563:1-15, 1999.
[10] K. Roberts. The characterization of implementable
choice rules. Aggregation and Revelation of Preferences,
J-J. Laffont (ed.), North Holland Publishing Company.
[11] J.-C. Rochet. A necessary and sufficient condition for
rationalizability in a quasi-linear context. Journal of
Mathematical Economics, 16:191-200, 1987.
[12] I. Rozenshtrom. Dominant strategy implementation
with quasi-linear preferences. Master"s thesis, Dept. of
Economics, The Hebrew University, Jerusalem, Israel,
1999.
293 | truthful implementation;weak monotonicity;non-truthful function;strategyproof;individual preference;social choice function;truthful;truthfulness;dominant strategy;recognition algorithm;affine maximizer;convex domain;nonnegative cycle property;mechanism design |
train_J-63 | Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets | This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite"s impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation. | 1. INTRODUCTION
In this paper we introduce the concept of negotiation
based mechanisms in the context of the theory of efficient
truthful markets. A market consists of multiple buyers and
sellers who wish to exchange goods. The market"s main
objective is to produce an allocation of sellers" goods to buyers
as to maximize the total gain from trade.
A commonly studied model of participant behavior is taken
from the field of economic mechanism design [3, 4, 11]. In
this model each player has a private valuation function that
assigns real values to each possible allocation. The
algorithm motivates players to participate truthfully by handing
payments to them.
The mechanism in an exchange collects buyer bids and
seller bids and clears the exchange by computing:(i) a set of
trades, and (ii) the payments made and received by players.
In designing a mechanism to compute trades and payments
we must consider the bidding strategies of self-interested
players, i.e. rational players that follow expected-utility
maximizing strategies. We set allocative efficiency as our
primary goal. That is the mechanism must compute a set
of trades that maximizes gain from trade. In addition we
require individual rationality (IR) so that all players have
positive expected utility to participate, budget balance (BB)
so that the exchange does not run at a loss, and incentive
compatibility (IC) so that reporting the truth is a dominant
strategy for each player.
Unfortunately, Myerson and Satterthwaite"s (1983) well
known result demonstrates that in bilateral trade it is
impossible to simultaneously achieve perfect efficiency, BB, and
IR using an IC mechanism [10]. A unique approach to
overcome Myerson and Satterthwaite"s impossibility result was
attempted by Parkes, Kalagnanam and Eso [12]. This result
designs both a regular and a combinatorial bilateral trade
mechanism (which imposes BB and IR) that approximates
truth revelation and allocation efficiency.
In this paper we circumvent Myerson and Satterthwaite"s
impossibility result by introducing a new model of
negotiationrange markets. A negotiation-range mechanism does not
produce payment prices for the market participants. Rather,
is assigns each buyer-seller pair a price range, called Zone
Of Possible Agreements (ZOPA). The buyer is provided with
the high end of the range and the seller with the low end
of the range. This allows the trading parties to engage in
negotiation over the final price with guarantee that the deal
is beneficial for both of them. The negotiation process is not
considered part of the mechanism but left up to the
interested parties, or to some external mechanism to perform. In
effect a negotiation-range mechanism operates as a mediator
between the market participants, offering them the grounds
to be able to finalize the terms of the trade by themselves.
This concept is natural to many real-world market
environments such as the real estate market.
1
We focus on the single-unit heterogeneous setting: n
sellers offer one unique good each by placing sealed bids
specifying their willingness to sell, and m buyers, interested in
buying a single good each, placing sealed bids specifying
their willingness to pay for each good they may be
interested in.
Our main result is a single-unit heterogeneous bilateral
trade negotiation-range mechanism (ZOPAS) that is
efficient, individually rational, incentive compatible and budget
balanced.
Our result does not contradict Myerson and
Satterthwaite"s important theorem. Myerson-Satterthwaite"s proof
relies on a theorem assuring that in two different efficient
IC markets; if the sellers have the same upper bound utility
then they will receive the same prices in each market and
if the buyers have the same lower bound utility then they
will receive the same prices in each market. Our single-unit
heterogeneous mechanism bypasses Myerson and
Satterthwaite"s theorem by producing a price range, defined by a
seller"s floor and a buyer"s ceiling, for each pair of matched
players. In our market mechanism the seller"s upper bound
utility may be the same while the seller"s floor is different
and the buyer"s lower bound utility may be the same while
the buyer"s ceiling is different. Moreover, the final price is
not fixed by the mechanism at all. Instead, it is determined
by an independent negotiation between the buyer and seller.
More specifically, in a negotiation-range mechanism, the
range of prices each matched pair is given is resolved by a
negotiation stage where a final price is determined. This
negotiation stage is crucial for our mechanism to be IC.
Intuitively, a negotiation range mechanism is incentive
compatible if truth telling promises the best ZOPA from the
point of view of the player in question. That is, he would
tell the truth if this strategy maximizes the upper and lower
bounds on his utility as expressed by the ZOPA boundaries.
Yet, when carefully examined it turns out that it is
impossible (by [10]) that this goal will always hold. This is simply
because such a mechanism could be easily modified to
determine final prices for the players (e.g. by taking the average of
the range"s boundaries). Here, the negotiation stage come
into play. We show that if the above utility maximizing
condition does not hold then it is the case that the player
cannot influence the negotiation bound that is assigned to
his deal partner no matter what value he declares. This
means that the only thing that he may achieve by reporting
a false valuation is modifying his own negotiation bound,
something that he could alternatively achieve by reporting
his true valuation and incorporating the effect of the
modified negotiation bound into his negotiation strategy. This
eliminates the benefit of reporting false valuations and
allows our mechanism to compute the optimal gain from trade
according to the players" true values.
The problem of computing the optimal allocation which
maximizes gain from trade can be conceptualized as the
problem of finding the maximum weighted matching in a
weighted bipartite graph connecting buyers and sellers, where
each edge in the graph is assigned a weight equal to the
difference between the respective buyer bid and seller bid. It
is well known that this problem can be solved efficiently in
polynomial time.
VCG IC payment schemes [2, 7, 13] support efficient and
IR bilateral trade but not simultaneously BB. Our particular
approach adapts the VCG payment scheme to achieve
budget balance. The philosophy of the VCG payment schemes
in bilateral trade is that the buyer pays the seller"s
opportunity cost of not selling the good to another buyer and not
keeping the good to herself. The seller is paid in addition to
the buyer"s price a compensation for the damage the
mechanism did to the seller by not extracting the buyer"s full
bid. Our philosophy is a bit different: The seller is paid at
least her opportunity cost of not selling the good to another
buyer and not keeping the good for herself. The buyer pays
at most his alternate seller"s opportunity cost of not selling
the good to another buyer and not keeping the alternate
good for herself.
The rest of this paper is organized as follows. In
Section 2 we describe our model and definitions. In section 3
we present the single-unit heterogeneous negotiation-range
mechanism and show that it is efficient, IR, IC and BB.
Finally, we conclude with a discussion in Section 4.
2. NEGOTIATION MARKETS
PRELIMINARIES
Let Π denote the set of players, N the set of n
selling players, and M the set of m buying players, where
Π = N ∪ M. Let Ψ = {1, ..., t} denote the set of goods.
Let Ti ∈ {−1, 0, 1}t
denote an exchange vector for a trade,
such that player i buys goods {A ∈ Ψ|Ti (A) = 1} and sells
goods {A ∈ Ψ|Ti (A) = −1}. Let T = (T1, ..., T|Π|) denote
the complete trade between all players. We view T as
describing the allocation of goods by the mechanism to the
buyers and sellers.
In the single-unit heterogeneous setting every good
belongs to specific seller, and every buyer is interested in
buying one good. The buyer may bid for several or all goods.
At the end of the auction every good is either assigned to
one of the buyers who bid for it or kept unsold by the seller.
It is convenient to assume the sets of buyers and sellers are
disjoint (though it is not required), i.e. N ∩ M = ∅. Each
seller i is associated with exactly one good Ai, for which she
has true valuation ci which expresses the price at which it
is beneficial for her to sell the good. If the seller reports a
false valuation at an attempt to improve the auction results
for her, this valuation is denoted ˆci. A buyer has a
valuation vector describing his valuation for each of the goods
according to their owner. Specifically, vj(k) denotes buyer
j"s valuation for good Ak. Similarly, if he reports a false
valuation it is denoted ˆvj(k).
If buyer j is matched by the mechanism with seller i then
Ti(Ai) = −1 and Tj(Ai) = 1. Notice, that in our setting for
every k = i, Ti(Ak) = 0 and Tj(Ai) = 0 and also for every
z = j, Tz(Ai) = 0.
For a matched buyer j - seller i pair, the gain from trade
on the deal is defined as vj(i) − ci. Given and allocation T,
the gain from trade associated with T is
V =
j∈M,i∈N
(vj(i) − ci) · Tj(Ai).
Let T∗
denote the optimal allocation which maximizes the
gain from trade, computed according to the players" true
valuations. Let V ∗
denote the optimal gain from trade
associated with this allocation.
When players" report false valuations we use ˆT∗
and ˆV ∗
to
denote the optimal allocation and gain from trade,
respectively, when computed according to the reported valuations.
2
We are interested in the design of negotiation-range
mechanisms. In contrast to a standard auction mechanism where
the buyer and seller are provided with the prices they should
pay, the goal of a negotiation-range mechanism is to provide
the player"s with a range of prices within which they can
negotiate the final terms of the deal by themselves. The
mechanism would provide the buyer with the upper bound
on the range and the seller with the lower bound on the
range. This gives each of them a promise that it will be
beneficial for them to close the deal, but does not provide
information about the other player"s terms of negotiation.
Definition 1. Negotiation Range: Zone Of Possible
Agreements, ZOPA, between a matched buyer and seller. The
ZOPA is a range, (L, H), 0 ≤ L ≤ H, where H is an upper
bound (ceiling) price for the buyer and L is a lower bound
(floor) price for the seller.
Definition 2. Negotiation-Range Mechanism: A
mechanism that computes a ZOPA, (L, H), for each matched buyer
and seller in T∗
, and provides the buyer with the upper bound
H and the seller with the lower bound L.
The basic assumption is that participants in the auction
are self-interested players. That is their main goal is to
maximize their expected utility. The utility for a buyer who
does not participate in the trade is 0. If he does win some
good, his utility is the surplus between his valuation for that
good and the price he pays. For a seller, if she keeps the good
unsold, her utility is just her valuation of the good, and the
surplus is 0. If she gets to sell it, her utility is the price she
is paid for it, and the surplus is the difference between this
price and her valuation.
Since negotiation-range mechanisms assign bounds on the
range of prices rather than the final price, it is useful to
define the upper and lower bounds on the player"s utilities
defined by the range"s limits.
Definition 3. Consider a buyer j - seller i pair matched
by a negotiation-range mechanism and let (L, H) be their
associated negotiation range.
• The buyer"s top utility is: vj(i) − L, and the buyer"s
bottom utility is vj(i) − H.
• The seller"s top utility is H, with surplus H − ci, and
the seller"s bottom utility is L, with surplus L − ci.
3. THE SINGLE-UNIT HETEROGENEOUS
MECHANISM (ZOPAS)
3.1 Description of the Mechanism
ZOPAS is a negotiation-range mechanism, it finds the
optimal allocation T∗
and uses it to define a ZOPA for each
buyer-seller pair.
The first stage in applying the mechanism is for the buyers
and sellers to submit their sealed bids. The mechanism then
allocates buyers to sellers by computing the allocation T ∗
,
which results in the optimal gain from trade V ∗
, and defines
a ZOPA for each buyer-seller pair. Finally, buyers and sellers
use the ZOPA to negotiate a final price.
Computing T∗
involves solving the maximum weighted
bipartite matching problem for the complete bipartite graph
Kn,m constructed by placing the buyers on one side of the
Find the optimal allocation T ∗
Compute the maximum weighted bipartite
matching for the bipartite graph
of buyers and sellers, and edge weights
equal to the gain from trade.
Calculate Sellers" Floors
For every buyer j, allocated good Ai
Find the optimal allocation (T−j)∗
Li = vj(i) + (V−j)∗
− V ∗
Calculate Buyers" Ceilings
For every buyer j, allocated good Ai
Find the optimal allocation (T −i
)∗
Find the optimal allocation (T −i
−j )∗
Hj = vj(i) + (V −i
−j )∗
− (V −i
)∗
Negotiation Phase
For every buyer j
and every seller i of good Ai
Report to seller i her floor Li
and identify her matched buyer j
Report to buyer j his ceiling Hj
and identify his matched seller i
i, j negotiate the good"s final price
Figure 1: The ZOPAS mechanism
graph, the seller on another and giving the edge between
buyer j and seller i weight equal to vj(i) − ci. The
maximum weighted matching problem in solvable in polynomial
time (e.g., using the Hungarian Method). This results in
a matching between buyers and sellers that maximizes gain
from trade.
The next step is to compute for each buyer-seller pair a
seller"s floor, which provides the lower bound of the ZOPA
for this pair, and assigns it to the seller.
A seller"s floor is computed by calculating the difference
between the total gain from trade when the buyer is excluded
and the total gain from trade of the other participants when
the buyer is included (the VCG principle).
Let (T−j)∗
denote the gain from trade of the optimal
allocation when buyer j"s bids are discarded. Denote by (V−j)∗
the total gain from trade in the allocation (T−j)∗
.
Definition 4. Seller Floor: The lowest price the seller
should expect to receive, communicated to the seller by the
mechanism. The seller floor for player i who was matched
with buyer j on good Ai, i.e., Tj(Ai) = 1, is computed as:
Li = vj(i) + (V−j)∗
− V ∗
.
The seller is instructed not to accept less than this price from
her matched buyer.
Next, the mechanism computes for each buyer-seller pair a
buyer ceiling, which provides the upper bound of the ZOPA
for this pair, and assigns it to the buyer.
Each buyer"s ceiling is computed by removing the buyer"s
matched seller and calculating the difference between the
total gain from trade when the buyer is excluded and the
total gain from trade of the other participants when the
3
buyer is included. Let (T−i
)∗
denote the gain from trade
of the optimal allocation when seller i is removed from the
trade. Denote by (V −i
)∗
the total gain from trade in the
allocation (T−i
)∗
.
Let (T−i
−j )∗
denote the gain from trade of the optimal
allocation when seller i is removed from the trade and buyer j"s
bids are discarded. Denote by (V −i
−j )∗
the total gain from
trade in the allocation (T −i
−j )∗
.
Definition 5. Buyer Ceiling: The highest price the seller
should expect to pay, communicated to the buyer by the
mechanism. The buyer ceiling for player j who was matched with
seller i on good Ai, i.e., Tj(Ai) = 1, is computed as:
Hj = vj(i) + (V −i
−j )∗
− (V −i
)∗
.
The buyer is instructed not to pay more than this price to
his matched seller.
Once the negotiation range lower bound and upper bound
are computed for every matched pair, the mechanism reports
the lower bound price to the seller and the upper bound price
to the buyer. At this point each buyer-seller pair negotiates
on the final price and concludes the deal.
A schematic description the ZOPAS mechanism is given
in Figure 3.1.
3.2 Analysis of the Mechanism
In this section we analyze the properties of the ZOPAS
mechanism.
Theorem 1. The ZOPAS market negotiation-range
mechanism is an incentive-compatible bilateral trade
mechanism that is efficient, individually rational and budget
balanced.
Clearly ZOPAS is an efficient polynomial time mechanism.
Let us show it satisfies the rest of the properties in the
theorem.
Claim 1. ZOPAS is individually rational, i.e., the
mechanism maintains nonnegative utility surplus for all
participants.
Proof. If a participant does not trade in the optimal
allocation then his utility surplus is zero by definition.
Consider a pair of buyer j and seller i which are matched in the
optimal allocation T ∗
. Then the buyer"s utility is at least
vj(i) − Hj. Recall that Hj = vj(i) + (V −i
−j )∗
− (V −i
)∗
, so
that vj(i) − Hj = (V −i
)∗
− (V −i
−j )∗
. Since the optimal gain
from trade which includes j is higher than that which does
not, we have that the utility is nonnegative: vj(i) − Hj ≥ 0.
Now, consider the seller i. Her utility surplus is at least
Li − ci. Recall that Li = vj(i) + (V−j)∗
− V ∗
. If we
removed from the optimal allocation T ∗
the contribution of
the buyer j - seller i pair, we are left with an allocation
which excludes j, and has value V ∗
− (vj(i) − ci). This
implies that (V−j)∗
≥ V ∗
− vj(i) + ci, which implies that
Li − ci ≥ 0.
The fact that ZOPAS is a budget-balanced mechanism
follows from the following lemma which ensures the validity
of the negotiation range, i.e., that every seller"s floor is below
her matched buyer"s ceiling. This ensures that they can close
the deal at a final price which lies in this range.
Lemma 1. For every buyer j- seller i pair matched by the
mechanism: Li ≤ Hj.
Proof. Recall that Li = vj(i) + (V−j)∗
− V ∗
and Hj =
vj(i)+(V −i
−j )∗
−(V −i
)∗
. To prove that Li ≤ Hj it is enough
to show that
(V −i
)∗
+ (V−j)∗
≤ V ∗
+ (V −i
−j )∗
. (1)
The proof of (1) is based on a method which we apply
several times in our analysis. We start with the
allocations (T−i
)∗
and (T−j)∗
which together have value equal
to (V −i
)∗
+ (V−j)∗
. We now use them to create a pair of
new valid allocations, by using the same pairs that were
matched in the original allocations. This means that the
sum of values of the new allocations is the same as the
original pair of allocations. We also require that one of the new
allocations does not include buyer j or seller i. This means
that the sum of values of these new allocations is at most
V ∗
+ (V −i
−j )∗
, which proves (1).
Let G be the bipartite graph where the nodes on one side
of G represent the buyers and the nodes on the other side
represent the sellers, and edge weights represent the gain
from trade for the particular pair. The different allocations
represent bipartite matchings in G. It will be convenient for
the sake of our argument to think of the edges that belong
to each of the matchings as being colored with a specific
color representing this matching.
Assign color 1 to the edges in the matching (T −i
)∗
and
assign color 2 to the edges in the matching (T−j)∗
. We claim
that these edges can be recolored using colors 3 and 4 so that
the new coloring represents allocations T (represented by
color 3) and (T−i
−j ) (represented by color 4). This implies
the that inequality (1) holds. Figure 2 illustrates the graph
G and the colorings of the different matchings.
Define an alternating path P starting at j. Let S1 be
the seller matched to j in (T −i
)∗
(if none exists then P is
empty). Let B1 be the buyer matched to S1 in (T−j)∗
, S2 be
the seller matched to B1 in (T−i
)∗
, B2 be the buyer matched
to S2 in (T−j)∗
, and so on. This defines an alternating
path P, starting at j, whose edges" colors alternate between
colors 1 and 2 (starting with 1). This path ends either in a
seller who is not matched in (T−j)∗
or in a buyer who is not
matched in (T−i
)∗
. Since all sellers in this path are matched
in (T−i
)∗
, we have that seller i does not belong to P. This
ensures that edges in P may be colored by alternating colors
3 and 4 (starting with 3). Since except for the first edge, all
others do not involve i or j and thus may be colored 4 and
be part of an allocation (T −i
−j ) .
We are left to recolor the edges that do not belong to P.
Since none of these edges includes j we have that the edges
that were colored 1, which are part of (T −i
)∗
, may now be
colored 4, and be included in the allocation (T −i
−j ) . It is
also clear that the edges that were colored 2, which are part
of (T−j)∗
, may now be colored 3, and be included in the
allocation T . This completes the proof of the lemma.
3.3 Incentive Compatibility
The basic requirement in mechanism design is for an
exchange mechanism to be incentive compatible. This means
that its payment structure enforces that truth-telling is the
players" weakly dominant strategy, that is that the
strategy by which the player is stating his true valuation results
4
...
jS1
S2 B1
S3 B2
B3S4
S5 B4
S6 B5
S8 B7
S7 B6
...
Figure 2: Alternating path argument for Lemma 1
(Validity of the Negotiation Range) and Claim 2
(part of Buyer"s IC proof)
Colors
Bidders
1 32 4
UnmatchedMatched
Figure 3: Key to Figure 2
in bigger or equal utility as any other strategy. The
utility surplus is defined as the absolute difference between the
player"s bid and his price.
Negotiation-range mechanisms assign bounds on the range
of prices rather than the final price and therefore the player"s
valuation only influences the minimum and maximum bounds
on his utility. For a buyer the minimum (bottom) utility
would be based on the top of the negotiation range
(ceiling), and the maximum (top) utility would be based on the
bottom of the negotiation range (floor). For a seller it"s the
other way around. Therefore the basic natural requirement
from negotiation-range mechanisms would be that stating
the player"s true valuation results in both the higher
bottom utility and higher top utility for the player, compared
with other strategies. Unfortunately, this requirement is
still too strong and it is impossible (by [10]) that this will
always hold. Therefore we slightly relax it as follows: we
require this holds when the false valuation based strategy
changes the player"s allocation. When the allocation stays
unchanged we require instead that the player would not be
able to change his matched player"s bound (e.g. a buyer
cannot change the seller"s floor). This means that the only
thing he can influence is his own bound, something that he
can alternatively achieve through means of negotiation.
The following formally summarizes our incentive
compatibility requirements from the negotiation-range mechanism.
Buyer"s incentive compatibility:
• Let j be a buyer matched with seller i by the
mechanism according to valuation vj and the
negotiationrange assigned is (Li, Hj). Assume that when the
mechanism is applied according to valuation ˆvj, seller
k = i is matched with j and the negotiation-range
assigned is (ˆLk, ˆHj). Then
vj(i) − Hj ≥ vj(k) − ˆHj. (2)
vj(i) − Li ≥ vj(k) − ˆLk. (3)
• Let j be a buyer not matched by the mechanism
according to valuation vj. Assume that when the
mechanism is applied according to valuation ˆvj, seller k = i
is matched with j and the negotiation-range assigned
is (ˆLk, ˆHj). Then
vj(k) − ˆHj ≤ vj(k) − ˆLk ≤ 0. (4)
• Let j be a buyer matched with seller i by the
mechanism according to valuation vj and let the assigned
bottom of the negotiation range (seller"s floor) be Li.
Assume that when the mechanism is applied according
to valuation ˆvj, the matching between i and j remains
unchanged and let the assigned bottom of the
negotiation range (seller"s floor) be ˆLi. Then,
ˆLi = Li. (5)
Notice that the first inequality of (4) always holds for a valid
negotiation range mechanism (Lemma 1).
Seller"s incentive compatibility:
• Let i be a seller not matched by the mechanism
according to valuation ci. Assume that when the mechanism
5
is applied according to valuation ˆci, buyer z = j is
matched with i and the negotiation-range assigned is
(ˆLi, ˆHz). Then
ˆLi − ci ≤ ˆHz − ci ≤ 0. (6)
• Let i be a buyer matched with buyer j by the
mechanism according to valuation ci and let the assigned top
of the negotiation range (buyer"s ceiling) be Hj.
Assume that when the mechanism is applied according
to valuation ˆci, the matching between i and j remains
unchanged and let the assigned top of the negotiation
range (buyer"s ceiling) be ˆHj. Then,
ˆHj = Hj. (7)
Notice that the first inequality of (6) always holds for a valid
negotiation range mechanism (Lemma 1).
Observe that in the case of sellers in our setting, the case
expressed by requirement (6) is the only case in which the
seller may change the allocation to her benefit. In particular,
it is not possible for seller i who is matched in T ∗
to change
her buyer by reporting a false valuation. This fact simply
follows from the observation that reducing the seller"s
valuation increases the gain from trade for the current allocation
by at least as much than any other allocation, whereas
increasing the seller"s valuation decreases the gain from trade
for the current allocation by exactly the same amount as any
other allocation in which it is matched. Therefore, the only
case the optimal allocation may change is when in the new
allocation i is not matched in which case her utility surplus
is 0.
Theorem 2. ZOPAS is an incentive compatible
negotiationrange mechanism.
Proof. We begin with the incentive compatibility for
buyers.
Consider a buyer j who is matched with seller i according
to his true valuation v. Consider that j is reporting instead
a false valuation ˆv which results in a different allocation in
which j is matched with seller k = i. The following claim
shows that a buyer j which changed his allocation due to
a false declaration of his valuation cannot improve his top
utility.
Claim 2. Let j be a buyer matched to seller i in T ∗
, and
let k = i be the seller matched to j in ˆT∗
. Then,
vj(i) − Hj ≥ vj(k) − ˆHj. (8)
Proof. Recall that Hj = vj(i) + (V −i
−j )∗
− (V −i
)∗
and
ˆHj = ˆvj(k) + ( ˆV −k
−j )∗
− ( ˆV −k
)∗
. Therefore, vj(i) − Hj =
(V −i
)∗
− (V −i
−j )∗
and vj(k) − ˆHj = vj(k) − ˆvj(k) + ( ˆV −k
)∗
−
( ˆV −k
−j )∗
.
It follows that in order to prove (8) we need to show
( ˆV −k
)∗
+ (V −i
−j )∗
≤ (V −i
)∗
+ ( ˆV −k
−j )∗
+ ˆvj(k) − vj(k). (9)
Consider first the case were j is matched to i in ( ˆT−k
)∗
. If
we remove this pair and instead match j with k we obtain
a matching which excludes i, if the gain from trade on the
new pair is taken according to the true valuation then we
get
( ˆV −k
)∗
− (ˆvj(i) − ci) + (vj(k) − ck) ≤ (V −i
)∗
.
Now, since the optimal allocation ˆT∗
matches j with k rather
than with i we have that
(V −i
−j )∗
+ (ˆvj(i) − ci) ≤ ˆV ∗
= ( ˆV −k
−j )∗
+ (ˆvj(k) − ck),
where we have used that ( ˆV −i
−j )∗
= (V −i
−j )∗
since these
allocations exclude j. Adding up these two inequalities implies
(9) in this case.
It is left to prove (9) when j is not matched to i in ( ˆT−k
)∗
.
In fact, in this case we prove the stronger inequality
( ˆV −k
)∗
+ (V −i
−j )∗
≤ (V −i
)∗
+ ( ˆV −k
−j )∗
. (10)
It is easy to see that (10) indeed implies (9) since it follows
from the fact that k is assigned to j in ˆT∗
that ˆvj(k) ≥
vj(k). The proof of (10) works as follows. We start with the
allocations ( ˆT−k
)∗
and (T−i
−j )∗
which together have value
equal to ( ˆV −k
)∗
+ (V −i
−j )∗
. We now use them to create a
pair of new valid allocations, by using the same pairs that
were matched in the original allocations. This means that
the sum of values of the new allocations is the same as the
original pair of allocations. We also require that one of the
new allocations does not include seller i and is based on the
true valuation v, while the other allocation does not include
buyer j or seller k and is based on the false valuation ˆv. This
means that the sum of values of these new allocations is at
most (V −i
)∗
+ ( ˆV −k
−j )∗
, which proves (10).
Let G be the bipartite graph where the nodes on one side
of G represent the buyers and the nodes on the other side
represent the sellers, and edge weights represent the gain
from trade for the particular pair. The different allocations
represent bipartite matchings in G. It will be convenient for
the sake of our argument to think of the edges that belong
to each of the matchings as being colored with a specific
color representing this matching.
Assign color 1 to the edges in the matching ( ˆT−k
)∗
and
assign color 2 to the edges in the matching (T −i
−j )∗
. We claim
that these edges can be recolored using colors 3 and 4 so that
the new coloring represents allocations (T −i
) (represented
by color 3) and ( ˆT−k
−j ) (represented by color 4). This implies
the that inequality (10) holds. Figure 2 illustrates the graph
G and the colorings of the different matchings.
Define an alternating path P starting at j. Let S1 = i be
the seller matched to j in ( ˆT−k
)∗
(if none exists then P is
empty). Let B1 be the buyer matched to S1 in (T−i
−j )∗
, S2 be
the seller matched to B1 in ( ˆT−k
)∗
, B2 be the buyer matched
to S2 in (T−i
−j )∗
, and so on. This defines an alternating
path P, starting at j, whose edges" colors alternate between
colors 1 and 2 (starting with 1). This path ends either in
a seller who is not matched in (T −i
−j )∗
or in a buyer who
is not matched in ( ˆT−k
)∗
. Since all sellers in this path are
matched in ( ˆT−k
)∗
, we have that seller k does not belong to
P. Since in this case S1 = i and the rest of the sellers in P
are matched in (T−i
−j )∗
we have that seller i as well does not
belong to P. This ensures that edges in P may be colored
by alternating colors 3 and 4 (starting with 3). Since S1 = i,
we may use color 3 for the first edge and thus assign it to
the allocation (T−i
) . All other edges, do not involve i, j
or k and thus may be either colored 4 and be part of an
allocation ( ˆT−k
−j ) or colored 3 and be part of an allocation
(T−i
) , in an alternating fashion.
We are left to recolor the edges that do not belong to P.
Since none of these edges includes j we have that the edges
6
that were colored 1, which are part of ( ˆT−k
)∗
, may now be
colored 4, and be included in the allocation ( ˆT−k
−j ) . It is
also clear that the edges that were colored 2, which are part
of (T−i
−j )∗
, may now be colored 3, and be included in the
allocation (T−i
) . This completes the proof of (10) and the
claim.
The following claim shows that a buyer j which changed
his allocation due to a false declaration of his valuation
cannot improve his bottom utility. The proof is basically the
standard VCG argument.
Claim 3. Let j be a buyer matched to seller i in T ∗
, and
k = i be the seller matched to j in ˆT∗
. Then,
vj(i) − Li ≥ vj(k) − ˆLk. (11)
Proof. Recall that Li = vj(i) + (V−j)∗
− V ∗
, and ˆLk =
ˆvj(k) + ( ˆV−j)∗
− ˆV ∗
= ˆvj(k) + (V−j)∗
− ˆV ∗
. Therefore,
vj(i) − Li = V ∗
− (V−j)∗
and vj(k) − ˆLk = vj(k) − ˆvj(k) +
ˆV ∗
− (V−j)∗
.
It follows that in order to prove (11) we need to show
V ∗
≥ vj(k) − ˆvj(k) + ˆV ∗
. (12)
The scenario of this claim occurs when j understates his
value for Ai or overstated his value for Ak. Consider these
two cases:
• ˆvj(k) > vj(k): Since Ak was allocated to j in the
allocation ˆT∗
we have that using the allocation of ˆT∗
according to the true valuation gives an allocation of
value U satisfying ˆV ∗
− ˆvj(k) + vj(k) ≤ U ≤ V ∗
.
• ˆvj(k) = vj(k) and ˆvj(i) < vj(i): In this case (12)
reduces to V ∗
≥ ˆV ∗
. Since j is not allocated i in ˆT∗
we have that ˆT∗
is an allocation that uses only true
valuations. From the optimality of T ∗
we conclude
that V ∗
≥ ˆV ∗
.
Another case in which a buyer may try to improve his
utility is when he does not win any good by stating his true
valuation. He may give a false valuation under which he
wins some good. The following claim shows that doing this
is not beneficial to him.
Claim 4. Let j be a buyer not matched in T ∗
, and
assume seller k is matched to j in ˆT∗
. Then,
vj(k) − ˆLk ≤ 0.
Proof. The scenario of this claim occurs if j did not
buy in the truth-telling allocation and overstates his value
for Ak, ˆvj(k) > vj(k) in his false valuation. Recall that
ˆLk = ˆvj(k) + ( ˆV−j)∗
− ˆV ∗
. Thus we need to show that
0 ≥ vj(k) − ˆvj(k) + ˆV ∗
− (V−j)∗
. Since j is not allocated
in T∗
then (V−j)∗
= V ∗
. Since j is allocated Ak in ˆT∗
we have that using the allocation of ˆT∗
according to the
true valuation gives an allocation of value U satisfying ˆV ∗
−
ˆvj(k) + vj(k) ≤ U ≤ V ∗
. Thus we can conclude that 0 ≥
vj(k) − ˆvj(k) + ˆV ∗
− (V−j)∗
.
Finally, the following claim ensures that a buyer cannot
influence the floor bound of the ZOPA for the good he wins.
Claim 5. Let j be a buyer matched to seller i in T ∗
, and
assume that ˆT∗
= T∗
, then ˆLi = Li.
Proof. Recall that Li = vj(i) + (V−j)∗
− V ∗
, and ˆLi =
ˆvj(i) + ( ˆV−j)∗
− ˆV ∗
= ˆvj(i) + (V−j)∗
− ˆV ∗
. Therefore we
need to show that ˆV ∗
= V ∗
+ ˆvj(i) − vj(i).
Since j is allocated Ai in T∗
, we have that using the
allocation of T∗
according to the false valuation gives an allocation
of value U satisfying V ∗
− vj(i) + ˆvj(i) ≤ U ≤ ˆV ∗
.
Similarly since j is allocated Ai in ˆT∗
, we have that using the
allocation of ˆT∗
according to the true valuation gives an
allocation of value U satisfying ˆV ∗
− ˆvj(i)+vj(i) ≤ U ≤ V ∗
,
which together with the previous inequality completes the
proof.
This completes the analysis of the buyer"s incentive
compatibility. We now turn to prove the seller"s incentive
compatibility properties of our mechanism.
The following claim handles the case where a seller that
was not matched in T ∗
falsely understates her valuation such
that she gets matched n ˆT∗
.
Claim 6. Let i be a seller not matched in T ∗
, and assume
buyer z is matched to i in ˆT∗
. Then,
ˆHz − ci ≤ 0.
Proof. Recall that ˆHz = vz(i) + ( ˆV −i
−z )∗
− ( ˆV −i
)∗
.
Since i is not matched in T ∗
and ( ˆT−i
)∗
involves only true
valuations we have that ( ˆV −i
)∗
= V ∗
. Since i is matched
with z in ˆT∗
it can be obtained by adding the buyer z - seller
i pair to ( ˆT−i
−z)∗
. It follows that ˆV ∗
= ( ˆV −i
−z )∗
+ vz(i) − ˆci.
Thus, we have that ˆHz = ˆV ∗
+ ˆci − V ∗
. Now, since i is
matched in ˆT∗
, using this allocation according to the true
valuation gives an allocation of value U satisfying ˆV ∗
+ ˆci −
ci ≤ U ≤ V ∗
. Therefore ˆHz −ci = ˆV ∗
+ˆci −V ∗
−ci ≤ 0.
Finally, the following simple claim ensures that a seller
cannot influence the ceiling bound of the ZOPA for the good
she sells.
Claim 7. Let i be a seller matched to buyer j in T ∗
, and
assume that ˆT∗
= T∗
, then ˆHj = Hj.
Proof. Since ( ˆV −i
−j )∗
= (V −i
−j )∗
and ( ˆV −i
)∗
= (V −i
)∗
it
follows that
ˆHj = vj(i)+( ˆV −i
−j )∗
−( ˆV −i
)∗
= vj(i)+(V −i
−j )∗
−(V −i
)∗
= Hj.
4. CONCLUSIONS AND EXTENSIONS
In this paper we suggest a way to deal with the
impossibility of producing mechanisms which are efficient,
individually rational, incentive compatible and budget balanced.
To this aim we introduce the concept of negotiation-range
mechanisms which avoid the problem by leaving the final
determination of prices to a negotiation between the buyer
and seller. The goal of the mechanism is to provide the
initial range (ZOPA) for negotiation in a way that it will be
beneficial for the participants to close the proposed deals.
We present a negotiation-range mechanism that is efficient,
individually rational, incentive compatible and budget
balanced. The ZOPA produced by our mechanism is based
on a natural adaptation of the VCG payment scheme in a
way that promises valid negotiation ranges which permit a
budget balanced allocation.
The basic question that we aimed to tackle seems very
exciting: which properties can we expect a market
mechanism to achieve? Are there different market models and
requirements from the mechanisms that are more feasible
than classic mechanism design goals?
In the context of our negotiation-range model, it is
natural to further study negotiation-based mechanisms in more
general settings. A natural extension is that of a
combinatorial market. Unfortunately, finding the optimal allocation
in a combinatorial setting is NP-hard, and thus the problem
of maintaining BB is compounded by the problem of
maintaining IC when efficiency is approximated [1, 5, 6, 9, 11].
Applying the approach in this paper to develop
negotiation-range mechanisms for combinatorial markets, even in
restricted settings, seems a promising direction for research.
5. REFERENCES
[1] Y. Bartal, R. Gonen, and N. Nisan. Incentive
Compatible Multi-Unit Combinatorial Auctions.
Proceeding of 9th TARK 2003, pp. 72-87, June 2003.
[2] E. H. Clarke. Multipart Pricing of Public Goods. In
journal Public Choice 1971, volume 2, pages 17-33.
[3] J. Feigenbaum, C. Papadimitriou, and S. Shenker.
Sharing the Cost of Multicast Transmissions. Journal
of Computer and System Sciences, 63(1), 2001.
[4] A. Fiat, A. Goldberg, J. Hartline, and A. Karlin.
Competitive Generalized Auctions. Proceeding of 34th
ACM Symposium on Theory of Computing, 2002.
[5] R. Gonen, and D. Lehmann. Optimal Solutions for
Multi-Unit Combinatorial Auctions: Branch and
Bound Heuristics. Proceeding of ACM Conference on
Electronic Commerce EC '00, pages 13-20, October
2000.
[6] R. Gonen, and D. Lehmann. Linear Programming
helps solving Large Multi-unit Combinatorial
Auctions. In Proceeding of INFORMS 2001,
November, 2001.
[7] T. Groves. Incentives in teams. In journal
Econometrica 1973, volume 41, pages 617-631.
[8] R. Lavi, A. Mu'alem and N. Nisan. Towards a
Characterization of Truthful Combinatorial Auctions.
Proceeding of 44th Annual IEEE Symposium on
Foundations of Computer Science, 2003.
[9] D. Lehmann, L. I. O'Callaghan, and Y. Shoham.
Truth revelation in rapid, approximately efficient
combinatorial auctions. In Proceedings of the First
ACM Conference on Electronic Commerce, pages
96-102, November 1999.
[10] R. Myerson, M. Satterthwaite. Efficient Mechanisms
for Bilateral Trading. Journal of Economic Theory,
28, pages 265-81, 1983.
[11] N. Nisan and A. Ronen. Algorithmic Mechanism
Design. In Proceeding of 31st ACM Symposium on
Theory of Computing, 1999.
[12] D.C. Parkes, J. Kalagnanam, and M. Eso. Achieving
Budget-Balance with Vickrey-Based Payment Schemes
in Exchanges. Proceeding of 17th International Joint
Conference on Artificial Intelligence, pages 1161-1168,
2001.
[13] W. Vickrey. Counterspeculation, Auctions and
Competitive Sealed Tenders. In Journal of Finance
1961, volume 16, pages 8-37.
| negotiation-range mechanism;real-world market environment;efficient market;impossibility result;negotiation-range market;zone of possible agreement;negotiation based mechanism;incentive compatibility;utility;efficient truthful market;individual rationality;good exchange;possible agreement zone;buyer and seller;mechanism design |
train_J-65 | Privacy in Electronic Commerce and the Economics of Immediate Gratification | Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant. | 1. PRIVACY AND ELECTRONIC
COMMERCE
Privacy remains an important issue for electronic
commerce. A PriceWaterhouseCoopers study in 2000 showed
that nearly two thirds of the consumers surveyed would
shop more online if they knew retail sites would not do
anything with their personal information [15]. A Federal Trade
Commission study reported in 2000 that sixty-seven
percent of consumers were very concerned about the privacy
of the personal information provided on-line [11]. More
recently, a February 2002 Harris Interactive survey found that
the three biggest consumer concerns in the area of on-line
personal information security were: companies trading
personal data without permission, the consequences of insecure
transactions, and theft of personal data [19]. According to
a Jupiter Research study in 2002, $24.5 billion in on-line
sales will be lost by 2006 - up from $5.5 billion in 2001.
Online retail sales would be approximately twenty-four percent
higher in 2006 if consumers" fears about privacy and
security were addressed effectively [21]. Although the media
hype has somewhat diminished, risks and costs have
notas evidenced by the increasing volumes of electronic spam
and identity theft [16].
Surveys in this field, however, as well as experiments and
anecdotal evidence, have also painted a different picture.
[36, 10, 18, 21] have found evidence that even privacy
concerned individuals are willing to trade-off privacy for
convenience, or bargain the release of very personal information in
exchange for relatively small rewards. The failure of several
on-line services aimed at providing anonymity for Internet
users [6] offers additional indirect evidence of the reluctance
by most individuals to spend any effort in protecting their
personal information.
The dichotomy between privacy attitudes and behavior
has been highlighted in the literature. Preliminary
interpretations of this phenomenon have been provided [2, 38,
33, 40]. Still missing are: an explanation grounded in
economic or psychological theories; an empirical validation of
the proposed explanation; and, of course, the answer to the
most recurring question: should people bother at all about
privacy?
In this paper we focus on the first question: we formally
analyze the individual decision making process with respect
to privacy and its possible shortcomings. We focus on
individual (mis)conceptions about their handling of risks they
face when revealing private information. We do not address
the issue of whether people should actually protect
themselves. We will comment on that in Section 5, where we will
also discuss strategies to empirically validate our theory.
We apply lessons from behavioral economics. Traditional
economics postulates that people are forward-looking and
bayesian updaters: they take into account how current
behavior will influence their future well-being and preferences.
For example, [5] study rational models of addiction. This
approach can be compared to those who see in the decision
not to protect one's privacy a rational choice given the
(supposedly) low risks at stake. However, developments in the
area of behavioral economics have highlighted various forms
of psychological inconsistencies (self-control problems,
hyperbolic discounting, present-biases, etc.) that clash with
the fully rational view of the economic agent. In this
paper we draw from these developments to reach the following
conclusions:
• We show that it is unlikely that individuals can act
rationally in the economic sense when facing privacy
sensitive decisions.
• We show that alternative models of personal behavior
and time-inconsistent preferences are compatible with
the dichotomy between attitudes and behavior and can
better match current data. For example, they can
explain the results presented by [36] at the ACM EC
'01 conference. In their experiment, self-proclaimed
privacy advocates were found to be willing to reveal
varying amounts of personal information in exchange
for small rewards.
• In particular, we show that individuals may have a
tendency to under-protect themselves against the privacy
risks they perceive, and over-provide personal
information even when wary of (perceived) risks involved.
• We show that the magnitude of the perceived costs of
privacy under certain conditions will not act as
deterrent against behavior the individual admits is risky.
• We show, following similar studies in the economics of
immediate gratification [31], that even 'sophisticated'
individuals may under certain conditions become
'privacy myopic.'
Our conclusion is that simply providing more
information and awareness in a self-regulative environment is not
sufficient to protect individual privacy. Improved
technologies, by lowering costs of adoption and protection, certainly
can help. However, more fundamental human behavioral
responses must also be addressed if privacy ought to be
protected.
In the next section we propose a model of rational agents
facing privacy sensitive decisions. In Section 3 we show the
difficulties that hinder any model of privacy decision
making based on full rationality. In Section 4 we show how
behavioral models based on immediate gratification bias can
better explain the attitudes-behavior dichotomy and match
available data. In Section 5 we summarize and discuss our
conclusions.
2. A MODEL OF RATIONALITY IN
PRIVACY DECISION MAKING
Some have used the dichotomy between privacy attitudes
and behavior to claim that individuals are acting rationally
when it comes to privacy. Under this view, individuals may
accept small rewards for giving away information because
they expect future damages to be even smaller (when
discounted over time and with their probability of occurrence).
Here we want to investigate what underlying assumptions
about personal behavior would support the hypothesis of
full rationality in privacy decision making.
Since [28, 37, 29] economists have been interested in
privacy, but only recently formal models have started
appearing [3, 7, 39, 40]. While these studies focus on market
interactions between one agent and other parties, here we are
interested in formalizing the decision process of the single
individual. We want to see if individuals can be
economically rational (forward-lookers, bayesian updaters, utility
maximizers, and so on) when it comes to protecting their own
personal information.
The concept of privacy, once intended as the "right to be
left alone" [41], has transformed as our society has become
more information oriented. In an information society the self
is expressed, defined, and affected through and by
information and information technology. The boundaries between
private and public become blurred. Privacy has therefore
become more a class of multifaceted interests than a single,
unambiguous concept. Hence its value may be discussed (if
not ascertained) only once its context has also been
specified. This most often requires the study of a network of
relations between a subject, certain information (related to
the subject), other parties (that may have various linkages
of interest or association with that information or that
subject), and the context in which such linkages take place.
To understand how a rational agent could navigate through
those complex relations, in Equation 1 we abstract the
decision process of an idealized rational economic agent who
is facing privacy trade-offs when completing a certain
transaction.
max_d Ut = δ(vE(a), p^d(a)) + γ(vE(t), p^d(t)) − c^d_t    (1)
In Equation 1, δ and γ are unspecified functional forms
that describe weighted relations between expected payoffs
from a set of events v and the associated probabilities of
occurrence of those events p. More precisely, the utility U of
completing a transaction t (the transaction being any action
- not necessarily a monetary operation - possibly involving
exposure of personal information) is equal to some function
of the expected payoff vE(a) from maintaining (or not)
certain information private during that transaction, and
the probability of maintaining [or not maintaining] that
information private when using technology d, p^d(a)
[1 − p^d(a)]; plus some function of the expected payoff
vE(t) from completing (or not completing) the transaction
(possibly revealing personal information), and the
probability of completing [or not completing] that transaction
with a certain technology d, p^d(t) [1 − p^d(t)]; minus the
cost of using technology d: c^d_t.¹
The technology d may or may not be privacy enhancing.
Since the payoffs in Equation 1 can be either positive or
negative, Equation 1 embodies the duality implicit in privacy
issues: there are both costs and benefits gained from
revealing or from protecting personal information, and the costs
and benefits from completing a transaction, vE (t), might be
distinct from the costs and benefits from keeping the
associated information private, vE (a). For instance, revealing
one"s identity to an on-line bookstore may earn a discount.
Viceversa, it may also cost a larger bill, because of price
discrimination. Protecting one"s financial privacy by not
divulging credit card information on-line may protect against
future losses and hassles related to identity theft. But it may
¹ See also [1].
make one"s on-line shopping experience more cumbersome,
and therefore more expensive.
The functional parameters δ and γ embody the variable
weights and attitudes an individual may have towards
keeping her information private (for example, her privacy
sensitivity, or her belief that privacy is a right whose respect
should be enforced by the government) and completing
certain transactions. Note that vE and p could refer to sets of
payoffs and the associated probabilities of occurrence. The
payoffs are themselves only expected because, regardless of
the probability that the transaction is completed or the
information remains private, they may depend on other sets of
events and their associated probabilities. vE() and p^d(), in
other words, can be read as multi-variate parameters inside
which are hidden several other variables, expectations, and
functions because of the complexity of the privacy network
described above.
Over time, the probability of keeping certain information
private, for instance, will not only depend on the chosen
technology d but also on the efforts by other parties to
appropriate that information. These efforts may be a function,
among other things, of the expected value of that
information to those parties. The probability of keeping
information private will also depend on the environment in which
the transaction is taking place. Similarly, the expected
benefit from keeping information private will also be a
collection over time of probability distributions dependent on
several parameters. Imagine that the probability of keeping
your financial transactions private is very high when you
use a bank in Bermuda: still, the expected value from
keeping your financial information confidential will depend on a
number of other factors.
A rational agent would, in theory, choose the technology
d that maximizes her expected payoff in Equation 1. Maybe
she would choose to complete the transaction under the
protection of a privacy enhancing technology. Maybe she would
complete the transaction without protection. Maybe she
would not complete the transaction at all (d = 0). For
example, the agent may consider the costs and benefits of
sending an email through an anonymous MIX-net system
[8] and compare those to the costs and benefits of sending
that email through a conventional, non-anonymous
channel. The magnitudes of the parameters in Equation 1 will
change with the chosen technology. MIX-net systems may
decrease the expected losses from privacy intrusions.
Non-anonymous email systems may promise comparably higher
reliability and (possibly) reduced costs of operations.
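As a concrete illustration of the choice in Equation 1, the sketch below evaluates the MIX-net versus conventional-email decision just described. The additive functional forms standing in for δ and γ, and every number, are our assumptions - the paper deliberately leaves δ, γ, the payoffs and the probabilities unspecified.

```python
# Minimal sketch (assumed functional forms and numbers, ours): evaluate
# Equation 1 for two candidate technologies d and pick the argmax.
def utility(v_a, p_a, v_t, p_t, cost):
    # U_t = delta(vE(a), p^d(a)) + gamma(vE(t), p^d(t)) - c^d_t,
    # here with delta and gamma taken as simple expected values.
    return v_a * p_a + v_t * p_t - cost

technologies = {
    "MIX-net email":      utility(v_a=5.0, p_a=0.9, v_t=10.0, p_t=0.80, cost=2.0),
    "conventional email": utility(v_a=5.0, p_a=0.2, v_t=10.0, p_t=0.99, cost=0.0),
}
# The idealized rational agent picks the technology maximizing U_t.
print(max(technologies, key=technologies.get), technologies)
```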
3. RATIONALITY AND PSYCHOLOGICAL
DISTORTIONS IN PRIVACY
Equation 1 is a comprehensive (while intentionally generic)
road-map for navigation across privacy trade-offs that no
human agent would be actually able to use.
We hinted to some difficulties as we noted that several
layers of complexities are hidden inside concepts such as the
expected value of maintaining certain information private,
and the probability of succeeding doing so. More precisely,
an agent will face three problems when comparing the
trade-offs implicit in Equation 1: incomplete information about all
parameters; bounded power to process all available
information; and a tendency to deviate from the rational path towards
utility maximization. Those three problems are precisely the same
issues real people have to deal with on an everyday basis as
they face privacy-sensitive decisions. We discuss each
problem in detail.
1. Incomplete information. What information has the
individual access to as she prepares to take privacy sensitive
decisions? For instance, is she aware of privacy invasions and
the associated risks? What is her knowledge of the existence
and characteristics of protective technologies?
Economic transactions are often characterized by
incomplete or asymmetric information. Different parties involved
may not have the same amount of information about the
transaction and may be uncertain about some important
aspects of it [4]. Incomplete information will affect almost all
parameters in Equation 1, and in particular the estimation
of costs and benefits. Costs and benefits associated with
privacy protection and privacy intrusions are both
monetary and immaterial. Monetary costs may for instance
include adoption costs (which are probably fixed) and usage
costs (which are variable) of protective technologies - if the
individual decides to protect herself. Or they may include
the financial costs associated with identity theft, if the
individual's information turns out not to have been adequately
protected. Immaterial costs may include learning costs of
a protective technology, switching costs between different
applications, or social stigma when using anonymizing
technologies, and many others. Likewise, the benefits from
protecting (or not protecting) personal information may also be
easy to quantify in monetary terms (the discount you receive
for revealing personal data) or be intangible (the feeling of
protection when you send encrypted emails).
It is difficult for an individual to estimate all these
values. Through information technology, privacy invasions can
be ubiquitous and invisible. Many of the payoffs associated
with privacy protection or intrusion may be discovered or
ascertained only ex post through actual experience. Consider,
for instance, the difficulties in using privacy and encrypting
technologies described in [43].
In addition, the calculations implicit in Equation 1 depend
on incomplete information about the probability
distribution of future events. Some of those distributions may be
predicted after comparable data - for example, the
probability that a certain credit card transaction will result in fraud
today could be calculated using existing statistics. The
probability distributions of other events may be very
difficult to estimate because the environment is too
dynamic - for example, the probability of being subject to identity theft
5 years in the future because of certain data you are
releasing now. And the distributions of some other events may be
almost completely subjective - for example, the probability
that a new and practical form of attack on a currently
secure cryptosystem will expose all of your encrypted personal
communications a few years from now.
This leads to a related problem: bounded rationality.
2. Bounded rationality. Is the individual able to
calculate all the parameters relevant to her choice? Or is she
limited by bounded rationality?
In our context, bounded rationality refers to the inability
to calculate and compare the magnitudes of payoffs
associated with various strategies the individual may choose in
privacy-sensitive situations. It also refers to the inability to
process all the stochastic information related to risks and
probabilities of events leading to privacy costs and benefits.
In traditional economic theory, the agent is assumed to
have both rationality and unbounded 'computational' power
to process information. But human agents are unable to
process all information in their hands and draw accurate
conclusions from it [34]. In the scenario we consider, once
an individual provides personal information to other parties,
she literally loses control of that information. That loss of
control propagates through other parties and persists for
unpredictable spans of time. Being in a position of
information asymmetry with respect to the party with whom she
is transacting, decisions must be based on stochastic
assessments, and the magnitudes of the factors that may affect
the individual become very difficult to aggregate, calculate,
and compare.²
Bounded rationality will affect the
calculation of the parameters in Equation 1, and in particular δ,
γ, vE(), and pt(). The cognitive costs involved in trying to
calculate the best strategy could therefore be so high that
the individual may just resort to simple heuristics.
3. Psychological distortions. Eventually, even if an
individual had access to complete information and could
appropriately compute it, she still may find it difficult to
follow the rational strategy presented in Equation 1. A vast
body of economic and psychological literature has by now
confirmed the impact of several forms of psychological
distortions on individual decision making. Privacy seems to
be a case study encompassing many of those distortions:
hyperbolic discounting, under insurance, self-control
problems, immediate gratification, and others. The traditional
dichotomy between attitude and behavior, observed in
several aspects of human psychology and studied in the social
psychology literature since [24] and [13], may also appear in
the privacy space because of these distortions.
For example, individuals have a tendency to discount
'hyperbolically' future costs or benefits [31, 27]. In economics,
hyperbolic discounting implies inconsistency of personal
preferences over time - future events may be discounted at
different discount rates than near-term events. Hyperbolic
discounting may affect privacy decisions, for instance when we
heavily discount the (low) probability of (high) future risks
such as identity theft.³
Related to hyperbolic discounting
is the tendency to underinsure oneself against certain risks
[22].
In general, individuals may put constraints on future
behavior that limit their own achievement of maximum utility:
people may genuinely want to protect themselves, but
because of self-control bias, they will not actually take those
steps, and opt for immediate gratification instead.
People tend to underappreciate the effects of changes in their
states, and hence falsely project their current preferences
over consumption onto their future preferences. Far more
than suggesting merely that people mispredict future tastes,
this projection bias posits a systematic pattern in these
mispredictions which can lead to systematic errors in
dynamic-choice environments [25, p. 2].
² The negative utility coming from future potential
misuses of somebody's personal information could be a random
shock whose probability and scope are extremely variable.
For example, a small and apparently innocuous piece of
information might become a crucial asset or a dangerous
liability in the right context.
³ A more rigorous description and application of hyperbolic
discounting is provided in Section 4.
In addition, individuals suffer from optimism bias [42],
the misperception that one"s risks are lower than those of
other individuals under similar conditions. Optimism bias
may lead us to believe that we will not be subject to privacy
intrusions.
Individuals encounter difficulties when dealing with
cumulative risks. [35], for instance, shows that while young
smokers appreciate the long term risks of smoking, they do not
fully realize the cumulative relation between the low risks of
each additional cigarette and the slow building up of a
serious danger. Difficulties with dealing with cumulative risks
apply to privacy, because our personal information, once
released, can remain available over long periods of time. And
since it can be correlated to other data, the 'anonymity sets'
[32, 14] in which we wish to remain hidden get smaller. As
a result, the whole risk associated with revealing different
pieces of personal information is more than the sum of the
individual risks associated with each piece of data.
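A toy calculation (ours; it even assumes independent disclosures, whereas the text stresses that correlation makes the whole worse than the sum of its parts) shows how quickly small per-disclosure risks accumulate:

```python
# Probability of at least one privacy intrusion over n independent
# disclosures, each carrying a small probability p (illustrative numbers).
p, n = 0.01, 100
print(1 - (1 - p) ** n)   # ~0.634: far larger than the per-event 0.01
```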
Also, it is easier to deal with actions and effects that
are closer to us in time. Actions and effects that are in
the distant future are difficult to focus on given our limited
foresight perspective. As the foresight changes, so does
behavior, even when preferences remain the same [20]. This
phenomenon may also affect privacy decisions, since the
costs of privacy protection may be immediate, but the
rewards may be invisible (absence of intrusions) and spread
over future periods of time.
To summarize: whenever we face privacy sensitive
decisions, we hardly have all data necessary for an informed
choice. But even if we had, we would be likely unable to
process it. And even if we could process it, we may still end
up behaving against our own better judgment. In what follows,
we present a model of privacy attitudes and behavior based
on some of these findings, and in particular on the plight of
immediate gratification.
4. PRIVACY AND THE ECONOMICS OF
IMMEDIATE GRATIFICATION
The problem of immediate gratification (which is related
to the concepts of time inconsistency, hyperbolic
discounting, and self-control bias) is so described by O'Donoghue and
Rabin [27, p. 4]: A person's relative preference for
well-being at an earlier date over a later date gets stronger as the
earlier date gets closer. [...] [P]eople have self-control
problems caused by a tendency to pursue immediate gratification
in a way that their 'long-run selves' do not appreciate. For
example, if you were given only two alternatives, on Monday
you may claim you will prefer working 5 hours on Saturday
to 5 hours and half on Sunday. But as Saturday comes, you
will be more likely to prefer postponing work until Sunday.
This simple observation has rather important consequences
in economic theory, where time-consistency of preferences is
the dominant model. Consider first the traditional model
of utility that agents derive from consumption: the model
states that utility discounts exponentially over time:
Ut = Σ_{τ=t}^{T} δ^τ uτ    (2)
In Equation 2, the cumulative utility U at time t is the
discounted sum of all utilities from time t (the present) until
time T (the future). δ is the discount factor, with a value
                                    Period 1   Period 2   Period 3   Period 4
Benefits from selling in period 1       2          0          0          0
Costs from selling in period 1          0          1          1          1
Benefits from selling in period 2       0          2          0          0
Costs from selling in period 2          0          0          1          1
Benefits from selling in period 3       0          0          2          0
Costs from selling in period 3          0          0          0          1
Table 1: (Fictional) expected payoffs from joining loyalty program.
between 0 and 1. A value of 0 would imply that the
individual discounts so heavily that the utility from future periods
is worth zero today. A value of 1 would imply that the
individual is so patient she does not discount future utilities.
The discount factor is used in economics to capture the fact
that having (say) one dollar one year from now is valuable,
but not as much as having that dollar now. In Equation 2,
if all uτ were constant - for instance, 10 - and δ was 0.9,
then at time t = 0 (that is, now) u0 would be worth 10, but
u1 would be worth 9.
Modifying the traditional model of utility discounting, [23]
and then [31] have proposed a model which takes into
account possible time-inconsistency of preferences. Consider
Equation 3:
Ut(ut, ut+1, ..., uT) = δ^t ut + β Σ_{τ=t+1}^{T} δ^τ uτ    (3)
Assume that δ, β ∈ [0, 1]. δ is the discount factor for
intertemporal utility as in Equation 2. β is the parameter
that captures an individual's tendency to gratify herself
immediately (a form of time-inconsistent preferences). When
β is 1, the model maps the traditional time-consistent
utility model, and Equation 3 is identical to Equation 2. But
when β is zero, the individual does not care for anything but
today. In fact, any β smaller than 1 represents self-control
bias.
The experimental literature has convincingly proved that
human beings tend to have self-control problems even when
they claim otherwise: we tend to avoid and postpone
undesirable activities even when this will imply more effort
tomorrow; and we tend to over-engage in pleasant activities
even though this may cause suffering or reduced utility in
the future.
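A short numeric sketch (ours; β = 0.6, δ = 1 and the work-hours payoffs are assumptions) contrasts Equations 2 and 3 and reproduces the Saturday reversal of the example above:

```python
def exponential_utility(flows, delta):          # Equation 2, seen from t = 0
    return sum(delta**t * u for t, u in enumerate(flows))

def beta_delta_utility(flows, delta, beta):     # Equation 3, seen from t = 0
    return flows[0] + beta * sum(delta**t * u
                                 for t, u in enumerate(flows) if t > 0)

beta, delta = 0.6, 1.0
# As Saturday arrives: work 5 hours today vs. 5.5 hours tomorrow.
print(exponential_utility([-5.0, 0.0], delta),       # -5.0: today preferred...
      exponential_utility([0.0, -5.5], delta))       # -5.5
print(beta_delta_utility([-5.0, 0.0], delta, beta),  # -5.0
      beta_delta_utility([0.0, -5.5], delta, beta))  # -3.3: ...now postponed
```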
This analytical framework can be applied to the study
of privacy attitudes and behavior. Protecting your privacy
sometimes means protecting yourself from a clear and present
hassle (telemarketers, or people peeping through your
window and seeing how you live - see [33]); but sometimes it
represents something akin to getting an insurance against
future and only uncertain risks. In surveys completed at
time t = 0, subjects asked about their attitude towards
privacy risks may mentally consider some costs of protecting
themselves at a later time t = s and compare those to the
avoided costs of privacy intrusions in an even more distant
future t = s + n. Their alternatives at survey time 0 are
represented in Equation 4.
min_x DU0 = β[E(cs,p) δ^s x + E(cs+n,i) δ^{s+n} (1 − x)]    (4)
x is a dummy variable that can take values 0 or 1. It
represents the individual's choice - which costs the
individual opts to face: the expected cost of protecting herself at
time s, E(cs,p) (in which case x = 1), or the expected costs
of being subject to privacy intrusions at a later time s + n,
E(cs+n,i) (in which case x = 0).
The individual is trying to minimize the disutility DU of
these costs with respect to x. Because she discounts the
two future events with the same discount factor (although
at different times), for certain values of the parameters the
individual may conclude that paying to protect herself is
worthwhile. In particular, this will happen when:
E(cs,p) δ^s < E(cs+n,i) δ^{s+n}    (5)
Now, consider what happens as the moment t = s comes.
Now a real price should be paid in order to enjoy some form
of protection (say, starting to encrypt all of your emails to
protect yourself from future intrusions). Now the individual
will perceive a different picture:
min_x DUs = δ E(cs,p) x + β E(cn,i) δ^n (1 − x)    (6)
Note that nothing has changed in the equation (certainly
not the individual's perceived risks) except time. If β (the
parameter indicating the degree of self-control problems) is
less than one, chances are that the individual now will
actually choose not to protect herself. This will in fact happen
when:
δ E(cs,p) > β E(cn,i) δ^n    (7)
Note that Disequalities 5 and 7 may be simultaneously
met for certain β < 1. At survey time the individual
honestly claimed she wanted to protect herself in principle -
that is, some time in the future. But as she is asked to
make an effort to protect herself right now, she chooses to
run the risk of privacy intrusion.
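A numeric check (illustrative values, ours) confirms that Disequalities 5 and 7 can indeed hold at the same time:

```python
# Protection looks worthwhile at survey time t = 0 (Disequality 5),
# yet is declined once the payment moment t = s arrives (Disequality 7).
delta, beta, s, n = 0.95, 0.5, 2, 3
E_protect, E_intrusion = 10.0, 14.0
plans_to_protect = E_protect * delta**s < E_intrusion * delta**(s + n)  # (5)
then_declines = delta * E_protect > beta * E_intrusion * delta**n       # (7)
print(plans_to_protect, then_declines)   # True True
```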
Similar mathematical arguments can be made for the
comparison between immediate costs with immediate benefits
(subscribing to a 'no-call' list to stop telemarketers from
harassing you at dinner), and immediate costs with only
future expected rewards (insuring yourself against identity
theft, or protecting yourself from frauds by never using your
credit card on-line), particularly when expected future
rewards (or avoided risks) are also intangible: the immaterial
consequences of living (or not) in a dossier society, or the
chilling effects (or lack thereof) of being under surveillance.
The reader will have noticed that we have focused on
perceived (expected) costs E(c), rather than real costs. We
do not know the real costs and we do not claim that the
individual does. But we are able to show that under
certain conditions even costs perceived as very high (as during
periods of intense privacy debate) will be ignored.
We can provide some fictional numerical examples to make
the analysis more concrete. We present some scenarios
inspired by the calculations in [31].
Imagine an economy with just 4 periods (Table 1). Each
individual can enroll in a supermarket's loyalty program by
revealing personal information. If she does so, the individual
gets a discount of 2 during the period of enrollment, only to
pay one unit each time thereafter because of price
discrimination based on the information she revealed (we make no
attempt at calibrating the realism of this obviously abstract
example; the point we are focusing on is how time
inconsistencies may affect individual behavior given the expected
costs and benefits of certain actions).⁴
Depending on which
period the individual chooses for 'selling' her data, we have
the undiscounted payoffs represented in Table 1.
Imagine that the individual is contemplating these
options and discounting them according to Equation 3.
Suppose that δ = 1 for all types of individuals (this means that
for simplicity we do not consider intertemporal discounting)
but β = 1/2 for time-inconsistent individuals and β = 1 for
everybody else. The time-consistent individual will choose
to join the program at the very last period and reap a
benefit of 2 − 1 = 1. The individual with immediate
gratification problems, for whom β = 1/2, will instead perceive the
benefits from joining now or in period 3 as equivalent (0.5),
and will join the program now, thus actually making herself
worse off.
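The comparison can be reproduced mechanically. The sketch below (function name ours; payoffs follow Table 1, with δ = 1) computes the period-1 perceived value of joining in each period k:

```python
def perceived(join_period, beta, horizon=4):
    flows = {t: 0.0 for t in range(1, horizon + 1)}
    flows[join_period] = 2.0                      # enrollment discount
    for t in range(join_period + 1, horizon + 1):
        flows[t] = -1.0                           # later price discrimination
    return flows[1] + beta * sum(flows[t] for t in range(2, horizon + 1))

for k in (1, 2, 3):
    print(k, perceived(k, beta=1.0), perceived(k, beta=0.5))
# beta = 1:   -1.0, 0.0, 1.0  -> wait until period 3
# beta = 1/2:  0.5, 0.0, 0.5  -> joining now looks as good as waiting
```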
[31] also suggest that, in addition to the distinction
between time-consistent individuals and individuals with
time-inconsistent preferences, we should also distinguish
time-inconsistent individuals who are naïve from those who are
sophisticated. Naïve time-inconsistent individuals are not
aware of their self-control problems - for example, they are
those who always plan to start a diet next week.
Sophisticated time-inconsistent individuals suffer from immediate
gratification bias, but are at least aware of their inconsistencies.
People in this category choose their behavior today correctly
estimating their future time-inconsistent behavior.
Now consider how this difference affects decisions in
another scenario, represented in Table 2. An individual is
considering the adoption of a certain privacy enhancing
technology. It will cost her some money both to protect herself
and not to protect herself. If she decides to protect herself,
the cost will be the amount she pays - for example - for some
technology that shields her personal information. If she
decides not to protect herself, the cost will be the expected
consequences of privacy intrusions.
We assume that both these aggregate costs increase over
time, although because of separate dynamics. As time goes
by, more and more information about the individual has
been revealed, and it becomes more costly to be protected
against privacy intrusions. At the same time, however,
intrusions become more frequent and dangerous.
⁴ One may claim that loyalty cards keep on providing
benefits over time. Here we make the simplifying assumption
that such benefits are not larger than the future costs
incurred after having revealed one"s tastes. We also assume
that the economy ends in period 4 for all individuals,
regardless of when they chose to join the loyalty program.
In period 1, the individual may protect herself by spending
5, or she may choose to face a risk of privacy intrusion the
following period, expected to cost 7. In the second period,
assuming that no intrusion has yet taken place, she may
once again protect herself by spending a little more, 6; or
she may choose to face a risk of privacy intrusion the next
(third) period, expected to cost 9. In the third period she
could protect herself for 8 or face an expected cost of 15 in
the following last period.
Here too we make no attempt at calibrating the values in
Table 2. Again, we focus on the different behavior driven by
heterogeneity in time-consistency and sophistication versus
naïveté. We assume that β = 1 for individuals with no
self-control problems and β = 1/2 for everybody else. We
assume for simplicity that δ = 1 for all.
The time-consistent individuals will obviously choose to
protect themselves as soon as possible.
In the first period, naïve time-inconsistent individuals will
compare the costs of protecting themselves then or face a
privacy intrusion in the second period. Because 5 > 7 ∗
(1/2), they will prefer to wait until the following period to
protect themselves. But in the second period they will be
comparing 6 > 9 ∗ (1/2) - and so they will postpone their
protection again. They will keep on doing so, facing higher
and higher risks. Eventually, they will risk to incur the
highest perceived costs of privacy intrusions (note again that
we are simply assuming that individuals believe there are
privacy risks and that they increase over time; we will come
back to this concept later on).
Time-inconsistent but sophisticated individuals, on the
other hand, will adopt a protective technology in period 2
and pay 6. By period 2, in fact, they will (correctly) realize
that if they wait till period 3 (which they are tempted to
do, because 6 > 9 ∗ (1/2)), their self-control bias will lead
them to postpone adopting the technology once more
(because 8 > 15 ∗ (1/2)). Therefore they predict they would
incur the expected cost 15 ∗ (1/2), which is larger than
6 - the cost of protecting oneself in period 2. In period 1,
however, they correctly predict that they will not wait to protect
themselves further than period 2. So they wait till period
2, because 5 > 6 ∗ (1/2), at which time they will adopt a
protective technology (see also [31]).
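The three behaviors can be verified with a few lines of code (ours; the numbers are those of Table 2, with β = 1/2 and δ = 1):

```python
protect = {1: 5.0, 2: 6.0, 3: 8.0}     # cost of protecting in period t
intrude = {2: 7.0, 3: 9.0, 4: 15.0}    # expected intrusion cost one period on
beta = 0.5

def naive_when():
    # Compares protecting now with the discounted risk of waiting one
    # period, (wrongly) trusting her future self to protect later.
    for t in (1, 2, 3):
        if protect[t] <= beta * intrude[t + 1]:
            return t
    return None                         # 5>3.5, 6>4.5, 8>7.5: never protects

def sophisticated_when(t=1):
    # Backward induction over what her future selves will actually do.
    if t > 3:
        return None
    later = sophisticated_when(t + 1)
    wait_cost = beta * (protect[later] if later else intrude[4])
    return t if protect[t] <= wait_cost else later

print(naive_when())           # None: postpones forever
print(sophisticated_when())   # 2: adopts the technology in period 2
# A time-consistent agent (beta = 1) would protect immediately: 5 < 7.
```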
To summarize, time-inconsistent people tend not to fully
appreciate future risks and, if naïve, also their inability to
deal with them. This happens even if they are aware of those
risks and they are aware that those risks are increasing. As
we learnt from the second scenario, time inconsistency can
lead individuals to accept higher and higher risks.
Individuals may tend to downplay the fact that single actions present
low risks, but their repetition forms a huge liability: it is a
deceiving aspect of privacy that its value is truly
appreciated only after privacy itself is lost. This dynamic captures
the essence of privacy and the so-called anonymity sets [32,
14], where each bit of information we reveal can be linked to
others, so that the whole is more than the sum of the parts.
In addition, [31] show that when costs are immediate,
time-inconsistent individuals tend to procrastinate; when
benefits are immediate, they tend to preproperate. In our
context things are even more interesting because all privacy
decisions involve at the same time costs and benefits. So
we opt against using eCash [9] in order to save us the costs
of switching from credit cards. But we accept the risk that
our credit card number on the Internet could be used
maliciously.
                            Period 1   Period 2   Period 3   Period 4
Protection costs                5          6          8          .
Expected intrusion costs        .          7          9         15
Table 2: (Fictional) costs of protecting privacy and expected costs of privacy intrusions over time.
And we give away our personal information to
supermarkets in order to gain immediate discounts - which
will likely turn into price discrimination in due time [3, 26].
We have shown in the second scenario above how
sophisticated but time-inconsistent individuals may choose to
protect their information only in period 2. Sophisticated
people with self-control problems may be at a loss, sometimes
even when compared to naïve people with time-inconsistency
problems (how many privacy advocates do use
privacy enhancing technologies all the time?). The reasoning
is that sophisticated people are aware of their self-control
problems, and rather than ignoring them, they incorporate
them into their decision process. This may decrease their
own incentive to behave in the optimal way now.
Sophisticated privacy advocates might realize that protecting
themselves from any possible privacy intrusion is unrealistic, and
so they may start misbehaving now (and may get used to
that, a form of coherent arbitrariness). This is consistent
with the results by [36] presented at the ACM EC '01
conference. [36] found that privacy advocates were also willing
to reveal personal information in exchange for monetary
rewards.
It is also interesting to note that these inconsistencies are
not caused by ignorance of existing risks or confusion about
available technologies. Individuals in the abstract scenarios
we described are aware of their perceived risks and costs.
However, under certain conditions, the magnitude of those
liabilities is almost irrelevant. The individual will take very
slowly increasing risks, which become steps towards huge
liabilities.
5. DISCUSSION
Applying models of self-control bias and immediate
gratification to the study of privacy decision making may offer
a new perspective on the ongoing privacy debate. We have
shown that a model of rational privacy behavior is
unrealistic, while models based on psychological distortions offer
a more accurate depiction of the decision process. We have
shown why individuals who genuinely would like to protect
their privacy may not do so because of psychological
distortions well documented in the behavioral economics
literature. We have highlighted that these distortions may affect
not only naïve individuals but also sophisticated ones.
Surprisingly, we have also found that these inconsistencies may
occur when individuals perceive the risks from not
protecting their privacy as significant.
Additional uncertainties, risk aversion, and varying
attitudes towards losses and gains may be confounding elements
in our analysis. Empirical validation is necessary to calibrate
the effects of different factors.
An empirical analysis may start with the comparison of
available data on the adoption rate of privacy technologies
that offer immediate refuge from minor but pressing privacy
concerns (for example, 'do not call' marketing lists), with
data on the adoption of privacy technologies that offer less
obviously perceivable protection from more dangerous but
also less visible privacy risks (for example, identity theft
insurances). However, only an experimental approach over
different periods of time in a controlled environment may
allow us to disentangle the influence of several factors. Surveys
alone cannot suffice, since we have shown why survey-time
attitudes will rarely match decision-time actions. An
experimental verification is part of our ongoing research agenda.
The psychological distortions we have discussed may be
considered in the ongoing debate on how to deal with the
privacy problem: industry self-regulation, users' self-protection
(through technology or other strategies), or government's
intervention. The conclusions we have reached suggest that
individuals may not be trusted to make decisions in their
best interests when it comes to privacy. This does not mean
that privacy technologies are ineffective. On the contrary,
our results, by aiming at offering a more realistic model of
user-behavior, could be of help to technologists in their
design of privacy enhancing tools. However, our results also
imply that technology alone or awareness alone may not
address the heart of the privacy problem. Improved
technologies (with lower costs of adoption and protection) and
more information about risks and opportunities certainly
can help. However, more fundamental human behavioral
mechanisms must also be addressed. Self-regulation, even
in presence of complete information and awareness, may not
be trusted to work for the same reasons. A combination of
technology, awareness, and regulative policies - calibrated
to generate and enforce liabilities and incentives for the
appropriate parties - may be needed for privacy-related welfare
increase (as in other areas of an economy: see on a related
analysis [25]).
Observing that people do not want to pay for privacy or do
not care about privacy, therefore, is only a half truth. People
may not be able to act as economically rational agents when
it comes to personal privacy. And the question whether do
consumers care? is a different question from does privacy
matter? Whether from an economic standpoint privacy
ought to be protected or not, is still an open question. It is
a question that involves defining specific contexts in which
the concept of privacy is being invoked. But the value of
privacy eventually goes beyond the realms of economic
reasoning and cost benefit analysis, and ends up relating to one"s
views on society and freedom. Still, even from a purely
economic perspective, anecdotal evidence suggest that the costs
of privacy (from spam to identity theft, lost sales, intrusions,
and the like [30, 12, 17, 33, 26]) are high and increasing.
6. ACKNOWLEDGMENTS
The author gratefully acknowledges Carnegie Mellon
University"s Berkman Development Fund, that partially
supported this research. The author also wishes to thank Jens
Grossklags, Charis Kaskiris, and three anonymous referees
for their helpful comments.
7. REFERENCES
[1] A. Acquisti, R. Dingledine, and P. Syverson. On the
economics of anonymity. In Financial
Cryptography - FC '03, pages 84-102. Springer Verlag, LNCS 2742,
2003.
[2] A. Acquisti and J. Grossklags. Losses, gains, and
hyperbolic discounting: An experimental approach to
information security attitudes and behavior. In 2nd
Annual Workshop on Economics and Information
Security - WEIS '03, 2003.
[3] A. Acquisti and H. R. Varian. Conditioning prices on
purchase history. Technical report, University of
California, Berkeley, 2001. Presented at the European
Economic Association Conference, Venice, IT, August
2002. http://www.heinz.cmu.edu/~acquisti/
papers/privacy.pdf.
[4] G. A. Akerlof. The market for 'lemons': quality
uncertainty and the market mechanism. Quarterly
Journal of Economics, 84:488-500, 1970.
[5] G. S. Becker and K. M. Murphy. A theory of rational
addiction. Journal of Political Economy, 96:675-700,
1988.
[6] B. D. Brunk. Understanding the privacy space. First
Monday, 7, 2002. http://firstmonday.org/issues/
issue7_10/brunk/index.html.
[7] G. Calzolari and A. Pavan. Optimal design of privacy
policies. Technical report, Gremaq, University of
Toulouse, 2001.
[8] D. Chaum. Untraceable electronic mail, return
addresses, and digital pseudonyms. Communications
of the ACM, 24(2):84-88, 1981.
[9] D. Chaum. Blind signatures for untraceable payments.
In Advances in Cryptology - Crypto '82, pages
199-203. Plenum Press, 1983.
[10] R. K. Chellappa and R. Sin. Personalization versus
privacy: An empirical examination of the online
consumer"s dilemma. In 2002 Informs Meeting, 2002.
[11] F. T. Commission. Privacy online: Fair information
practices in the electronic marketplace, 2000.
http://www.ftc.gov/reports/privacy2000/
privacy2000.pdf.
[12] Community Banker Association of Indiana. Identity
fraud expected to triple by 2005, 2001.
http://www.cbai.org/Newsletter/December2001/
identity_fraud_de2001.htm.
[13] S. Corey. Professional attitudes and actual behavior.
Journal of Educational Psychology, 28(1):271 - 280,
1937.
[14] C. Diaz, S. Seys, J. Claessens, and B. Preneel.
Towards measuring anonymity. In P. Syverson and
R. Dingledine, editors, Privacy Enhancing
Technologies - PET '02. Springer Verlag, LNCS 2482, 2002.
[15] ebusinessforum.com. eMarketer: The great online
privacy debate, 2000. http://www.ebusinessforum.
com/index.asp?doc_id=1785&layout=rich_story.
[16] Federal Trade Commission. Identity theft heads the
ftc"s top 10 consumer fraud complaints of 2001, 2002.
http://www.ftc.gov/opa/2002/01/idtheft.htm.
[17] R. Gellman. Privacy, consumers, and costs - How the
lack of privacy costs consumers and why business
studies of privacy costs are biased and incomplete,
2002.
http://www.epic.org/reports/dmfprivacy.html.
[18] I.-H. Harn, K.-L. Hui, T. S. Lee, and I. P. L. Png.
Online information privacy: Measuring the
cost-benefit trade-off. In 23rd International
Conference on Information Systems, 2002.
[19] Harris Interactive. First major post-9.11 privacy
survey finds consumers demanding companies do more
to protect privacy; public wants company privacy
policies to be independently verified, 2002.
http://www.harrisinteractive.com/news/
allnewsbydate.asp?NewsID=429.
[20] P. Jehiel and A. Lilico. Smoking today and stopping
tomorrow: A limited foresight perspective. Technical
report, Department of Economics, UCLA, 2002.
[21] Jupiter Research. Seventy percent of US consumers
worry about online privacy, but few take protective
action, 2002. http:
//www.jmm.com/xp/jmm/press/2002/pr_060302.xml.
[22] H. Kunreuther. Causes of underinsurance against
natural disasters. Geneva Papers on Risk and
Insurance, 1984.
[23] D. Laibson. Essays on hyperbolic discounting. MIT,
Department of Economics, Ph.D. Dissertation, 1994.
[24] R. LaPiere. Attitudes versus actions. Social Forces,
13:230-237, 1934.
[25] G. Loewenstein, T. O'Donoghue, and M. Rabin.
Projection bias in predicting future utility. Technical
report, Carnegie Mellon University, Cornell University,
and University of California, Berkeley, 2003.
[26] A. Odlyzko. Privacy, economics, and price
discrimination on the Internet. In Fifth International
Conference on Electronic Commerce, pages 355-366.
ACM, 2003.
[27] T. O"Donoghue and M. Rabin. Choice and
procrastination. Quartely Journal of Economics,
116:121-160, 2001. The page referenced in the text
refers to the 2000 working paper version.
[28] R. A. Posner. An economic theory of privacy.
Regulation, pages 19-26, 1978.
[29] R. A. Posner. The economics of privacy. American
Economic Review, 71(2):405-409, 1981.
[30] Privacy Rights Clearinghouse. Nowhere to turn:
Victims speak out on identity theft, 2000. http:
//www.privacyrights.org/ar/idtheft2000.htm.
[31] M. Rabin and T. O"Donoghue. The economics of
immediate gratification. Journal of Behavioral
Decision Making, 13:233-250, 2000.
[32] A. Serjantov and G. Danezis. Towards an information
theoretic metric for anonymity. In P. Syverson and
R. Dingledine, editors, Privacy Enhancing
Technologies - PET "02. Springer Verlag, LNCS 2482,
2002.
[33] A. Shostack. Paying for privacy: Consumers and
infrastructures. In 2nd Annual Workshop on
Economics and Information Security - WEIS '03,
2003.
[34] H. A. Simon. Models of bounded rationality. The MIT
Press, Cambridge, MA, 1982.
[35] P. Slovic. What does it mean to know a cumulative
risk? Adolescents' perceptions of short-term and
long-term consequences of smoking. Journal of
Behavioral Decision Making, 13:259-266, 2000.
[36] S. Spiekermann, J. Grossklags, and B. Berendt.
E-privacy in 2nd generation e-commerce: Privacy
preferences versus actual behavior. In 3rd ACM
Conference on Electronic Commerce - EC '01, pages
38-47, 2002.
[37] G. J. Stigler. An introduction to privacy in economics
and politics. Journal of Legal Studies, 9:623-644, 1980.
[38] P. Syverson. The paradoxical value of privacy. In 2nd
Annual Workshop on Economics and Information
Security - WEIS '03, 2003.
[39] C. R. Taylor. Private demands and demands for
privacy: Dynamic pricing and the market for customer
information. Department of Economics, Duke
University, Duke Economics Working Paper 02-02,
2002.
[40] T. Vila, R. Greenstadt, and D. Molnar. Why we can't
be bothered to read privacy policies: Models of
privacy economics as a lemons market. In 2nd Annual
Workshop on Economics and Information
Security - WEIS '03, 2003.
[41] S. Warren and L. Brandeis. The right to privacy.
Harvard Law Review, 4:193-220, 1890.
[42] N. D. Weinstein. Optimistic biases about personal
risks. Science, 24:1232-1233, 1989.
[43] A. Whitten and J. D. Tygar. Why Johnny can't
encrypt: A usability evaluation of PGP 5.0. In 8th
USENIX Security Symposium, 1999.
| privacy;financial privacy;electronic commerce;psychological inconsistency;personal information protection;rationality;hyperbolic discounting;time-inconsistent preference;individual decision making process;privacy enhancing technology;psychological distortion;self-control problem;immediate gratification;privacy sensitive decision;anonymity;hyperbolic discount |
train_J-66 | Expressive Negotiation over Donations to Charities∗ | When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear-for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also show that there may be benefits to linking the charities from a mechanism design standpoint. | 1. INTRODUCTION
When money is donated to a charitable (or other) cause
(hereafter referred to as charity), often the donating party
gives unconditionally: a fixed amount is transferred from
the donator to the charity, and none of this transfer is
contingent on other events - in particular, it is not contingent
on the amount given by other parties. Indeed, this is
currently often the only way to make a donation, especially for
small donating parties such as private individuals. However,
when multiple parties support the same charity, each of them
would prefer to see the others give more rather than less to
this charity. In such scenarios, it is sensible for a party to
use its contemplated donation as negotiating material to
induce the others to give more. This is done by making the
donation conditional on the others" donations. The
following example will illustrate this, and show that the donating
parties as well as the charitable cause may simultaneously
benefit from the potential for such negotiation.
Suppose we have two parties, 1 and 2, who are both
supporters of charity A. To either of them, it would be worth
$0.75 if A received $1. It follows that neither of them will be
willing to give unconditionally, because $0.75 < $1. However, if
the two parties draw up a contract that says that they will
each give $0.5, both the parties have an incentive to accept
this contract (rather than have no contract at all): with the
contract, the charity will receive $1 (rather than $0
without a contract), which is worth $0.75 to each party, which
is greater than the $0.5 that that party will have to give.
Effectively, each party has made its donation conditional on
the other party's donation, leading to larger donations and
greater happiness to all parties involved.
One method that is often used to effect this is to make
a matching offer. Examples of matching offers are: I will
give x dollars for every dollar donated., or I will give x
dollars if the total collected from other parties exceeds y.
In our example above, one of the parties can make the offer
I will donate $0.5 if the other party also donates at least
that much, and the other party will have an incentive to
indeed donate $0.5, so that the total amount given to the
charity increases by $1. Thus this matching offer implements
the contract suggested above. As a real-world example, the
United States government has authorized a donation of up to
$1 billion to the Global Fund to fight AIDS, TB and Malaria,
under the condition that the American contribution does not
exceed one third of the total-to encourage other countries
to give more [23].
However, there are several severe limitations to the simple
approach of matching offers as just described.
1. It is not clear how two parties can make matching
offers where each party's offer is stated in terms of the
amount that the other pays. (For example, it is not
clear what the outcome should be when both parties
offer to match the other's donation.) Thus, matching
offers can only be based on payments made by
parties that are giving unconditionally (not in terms of a
matching offer)-or at least there can be no circular
dependencies.¹
2. Given the current infrastructure for making matching
offers, it is impractical to make a matching offer
depend on the amounts given to multiple charities. For
instance, a party may wish to specify that it will pay
$100 given that charity A receives a total of $1000,
but that it will also count donations made to charity
B, at half the rate. (Thus, a total payment of $500 to
charity A combined with a total payment of $1000 to
charity B would be just enough for the party"s offer to
take effect.)
In contrast, in this paper we propose a new approach
where each party can express its relative preferences for
different charities, and make its offer conditional on its own
appreciation for the vector of donations made to the
different charities. Moreover, the amount the party offers to
donate at different levels of appreciation is allowed to vary
arbitrarily (it does not need to be a dollar-for-dollar (or
n-dollar-for-dollar) matching arrangement, or an arrangement where
the party offers a fixed amount provided a given (strike)
total has been exceeded). Finally, there is a clear
interpretation of what it means when multiple parties are making
conditional offers that are stated in terms of each other.
Given each combination of (conditional) offers, there is a
(usually) unique solution which determines how much each
party pays, and how much each charity is paid.
However, as we will show, finding this solution (the
clearing problem) requires solving a potentially difficult
optimization problem. A large part of this paper is devoted to
studying how difficult this problem is under different assumptions
on the structure of the offers, and providing algorithms for
solving it.
¹ Typically, larger organizations match offers of private
individuals. For example, the American Red Cross Liberty
Disaster Fund maintains a list of businesses that match their
customers' donations [8].
Towards the end of the paper, we also study the
mechanism design problem of motivating the bidders to bid
truthfully.
In short, expressive negotiation over donations to charities
is a new way in which electronic commerce can help the
world. A web-based implementation of the ideas described
in this paper can facilitate voluntary reallocation of wealth
on a global scale. Additionally, optimally solving the clearing
problem (and thereby generating the maximum economic
welfare) requires the application of sophisticated algorithms.
2. COMPARISON TO COMBINATORIAL
AUCTIONS AND EXCHANGES
This section discusses the relationship between expressive
charity donation and combinatorial auctions and exchanges.
It can be skipped, but may be of interest to the reader with
a background in combinatorial auctions and exchanges.
In a combinatorial auction, there are m items for sale,
and bidders can place bids on bundles of one or more items.
The auctioneer subsequently labels each bid as winning or
losing, under the constraint that no item can be in more
than one winning bid, to maximize the sum of the values of
the winning bids. (This is known as the clearing problem.)
Variants include combinatorial reverse auctions, where the
auctioneer is seeking to procure a set of items; and
combinatorial exchanges, where bidders can both buy and sell
items (even within the same bid). Other extensions include
allowing for side constraints, as well as the specification of
attributes of the items in bids. Combinatorial auctions and
exchanges have recently become a popular research topic [20,
21, 17, 22, 9, 18, 13, 3, 12, 26, 19, 25, 2].
The problems of clearing expressive charity donation
markets and clearing combinatorial auctions or exchanges are
very different in formulation. Nevertheless, there are
interesting parallels. One of the main reasons for the interest
in combinatorial auctions and exchanges is that it allows
for expressive bidding. A bidder can express exactly how
much each different allocation is worth to her, and thus the
globally optimal allocation may be chosen by the
auctioneer. Compare this to a bidder having to bid on two different
items in two different (one-item) auctions, without any way
of expressing that (for instance) one item is worthless if the
other item is not won. In this scenario, the bidder may win
the first item but not the second (because there was another
high bid on the second item that she did not anticipate),
leading to economic inefficiency.
Expressive bidding is also one of the main benefits of the
expressive charity donation market. Here, bidders can
express exactly how much they are willing to donate for every
vector of amounts donated to charities. This may allow
bidders to negotiate a complex arrangement of who gives
how much to which charity, which is beneficial to all
parties involved; whereas no such arrangement may have been
possible if the bidders had been restricted to using simple
matching offers on individual charities. Again, expressive
bidding is necessary to achieve economic efficiency.
Another parallel is the computational complexity of the
clearing problem. In order to achieve the full economic
efficiency allowed by the market's expressiveness (or even come
close to it), hard computational problems must be solved
in combinatorial auctions and exchanges, as well as in the
charity donation market (as we will see).
52
3. DEFINITIONS
Throughout this paper, we will refer to the offers that the
donating parties make as bids, and to the donating parties
as bidders. In our bidding framework, a bid will specify,
for each vector of total payments made to the charities, how
much that bidder is willing to contribute. (The contribution
of this bidder is also counted in the vector of payments-so,
the vector of total payments to the charities represents
the amount given by all donating parties, not just the ones
other than this bidder.) The bidding language is expressive
enough that no bidder should have to make more than one
bid. The following definition makes the general form of a
bid in our framework precise.
Definition 1. In a setting with m charities c1, c2, . . . , cm,
a bid by bidder bj is a function vj : R^m → R. The
interpretation is that if charity ci receives a total amount of πci, then
bidder j is willing to donate (up to) vj(πc1, πc2, . . . , πcm).
We now define possible outcomes in our model, and which
outcomes are valid given the bids that were made.
Definition 2. An outcome is a vector of payments made
by the bidders (πb1, πb2, . . . , πbn), and a vector of payments
received by the charities (πc1, πc2, . . . , πcm). A valid
outcome is an outcome where
1. Σ_{j=1}^{n} πbj ≥ Σ_{i=1}^{m} πci (at least as much money
is collected as is given away);
2. For all 1 ≤ j ≤ n, πbj ≤ vj(πc1, πc2, . . . , πcm) (no
bidder gives more than she is willing to).
Of course, in the end, only one of the valid outcomes can
be chosen. We choose the valid outcome that maximizes the
objective that we have for the donation process.
Definition 3. An objective is a function from the set of
all outcomes to R.² After all bids have been collected, a valid
outcome will be chosen that maximizes this objective.
One example of an objective is surplus, given by
Σ_{j=1}^{n} πbj − Σ_{i=1}^{m} πci. The surplus could be the profits
of a company managing the expressive donation marketplace; but,
alternatively, the surplus could be returned to the bidders, or
given to the charities. Another objective is total amount
donated, given by Σ_{i=1}^{m} πci. (Here, different weights could also
be placed on the different charities.)
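To ground the notation, here is a minimal Python sketch (our own
illustration; the function names are not part of the paper's
formalism) computing both objectives for a given outcome:

```python
def surplus(bidder_payments, charity_receipts):
    """Surplus objective: total collected minus total given away."""
    return sum(bidder_payments) - sum(charity_receipts)

def total_donated(charity_receipts, weights=None):
    """Total-amount-donated objective, with optional per-charity weights."""
    if weights is None:
        weights = [1.0] * len(charity_receipts)
    return sum(w * p for w, p in zip(weights, charity_receipts))

# Example: two bidders pay 0.5 each; one charity receives 1.0.
assert surplus([0.5, 0.5], [1.0]) == 0.0
assert total_donated([1.0]) == 1.0
```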
Finding the valid outcome that maximizes the objective
is a (nontrivial) computational problem. We will refer to it
as the clearing problem. The formal definition follows.
Definition 4 (DONATION-CLEARING). We are
given a set of n bids over charities c1, c2, . . . , cm.
Additionally, we are given an objective function. We are asked to
find an objective-maximizing valid outcome.
How difficult the DONATION-CLEARING problem is
depends on the types of bids used and the language in which
they are expressed. This is the topic of the next section.
² In general, the objective function may also depend on the
bids, but the objective functions under consideration in this
paper do not depend on the bids. The techniques presented
in this paper will typically generalize to objectives that take
the bids into account directly.
4. A SIMPLIFIED BIDDING LANGUAGE
Specifying a general bid in our framework (as defined
above) requires being able to specify an arbitrary real-valued
function over R^m. Even if we restricted the possible total
payment made to each charity to the set {0, 1, 2, . . . , s}, this
would still require a bidder to specify (s+1)^m values. Thus,
we need a bidding language that will allow the bidders to
at least specify some bids more concisely. We will specify a
bidding language that only represents a subset of all possible
bids, which can be described concisely.³
To introduce our bidding language, we will first describe
the bidding function as a composition of two functions; then
we will outline our assumptions on each of these functions.
First, there is a utility function uj : R^m → R, specifying how
much bidder j appreciates a given vector of total donations
to the charities. (Note that the way we define a bidder's
utility function, it does not take the payments the bidder
makes into account.) Then, there is a donation willingness
function wj : R → R, which specifies how much bidder j is
willing to pay given her utility for the vector of donations
to the charities. We emphasize that this function does not
need to be linear, so that utilities should not be thought of as
expressible in dollar amounts. (Indeed, when an individual
is donating to a large charity, the reason that the individual
donates only a bounded amount is typically not decreasing
marginal value of the money given to the charity, but rather
that the marginal value of a dollar to the bidder herself
becomes larger as her budget becomes smaller.) So, we have
wj(uj(πc1 , πc2 , . . . , πcm )) = vj(πc1 , πc2 , . . . , πcm ), and we
let the bidder describe her functions uj and wj separately.
(She will submit these functions as her bid.)
Our first restriction is that the utility that a bidder
derives from money donated to one charity is independent of
the amount donated to another charity. Thus,
uj(πc1, πc2, . . . , πcm) = Σ_{i=1}^{m} u^i_j(πci). (We observe that this
does not imply that the bid function vj decomposes
similarly, because of the nonlinearity of wj.) Furthermore, each
u^i_j must be piecewise linear. An interesting special case
which we will study is when each u^i_j is a line:
u^i_j(πci) = a^i_j·πci. This special case is justified in settings where the
scale of the donations by the bidders is small relative to the
amounts the charities receive from other sources, so that the
marginal use of a dollar to the charity is not affected by the
amount given by the bidders.
The only restriction that we place on the payment
willingness functions wj is that they are piecewise linear. One
interesting special case is a threshold bid, where wj is a step
function: the bidder will provide t dollars if her utility
exceeds s, and otherwise 0. Another interesting case is when
such a bid is partially acceptable: the bidder will provide t
dollars if her utility exceeds s; but if her utility is u < s, she
is still willing to provide ut/s dollars.
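To make the bid structure concrete, here is a minimal Python
sketch (our own illustration; the breakpoint representation and all
names are assumptions, not the paper's) of piecewise-linear u^i_j
and wj composed into a bid value vj = wj(Σ_i u^i_j). The wj shown is
a partially acceptable threshold bid, since a true step function is
not continuous piecewise linear:

```python
import bisect

def piecewise_linear(breakpoints, final_slope=0.0):
    """Return f(x) for the piecewise-linear function through
    `breakpoints` (a sorted list of (x, y) pairs), extended beyond
    the last point with slope `final_slope`."""
    xs = [x for x, _ in breakpoints]
    def f(x):
        if x <= xs[0]:
            return breakpoints[0][1]
        if x >= xs[-1]:
            return breakpoints[-1][1] + final_slope * (x - xs[-1])
        r = bisect.bisect_right(xs, x)
        (x0, y0), (x1, y1) = breakpoints[r - 1], breakpoints[r]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

# A bid: per-charity utilities u^i_j and a payment willingness w_j.
u = [piecewise_linear([(0.0, 0.0), (1.0, 2.0)]),   # charity 1
     piecewise_linear([(0.0, 0.0), (2.0, 1.0)])]   # charity 2
# Partially acceptable threshold bid: pay u*t/s below s, t at/above s.
s, t = 2.0, 1.0
w = piecewise_linear([(0.0, 0.0), (s, t)])

def bid_value(pi):   # v_j(pi) = w_j(sum_i u^i_j(pi_i))
    return w(sum(ui(p) for ui, p in zip(u, pi)))

print(bid_value([1.0, 2.0]))  # utility 3 >= s, so willing to pay t = 1.0
```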
One might wonder why, if we are given the bidders' utility
functions, we do not simply maximize the sum of the
utilities rather than surplus or total donated. There are several
reasons. First, because affine transformations do not affect
utility functions in a fundamental way, it would be possible
for a bidder to inflate her utility by changing its units,
thereby making her bid more important for utility
maximization purposes. Second, a bidder could simply give a
payment willingness function that is 0 everywhere, and have
her utility be taken into account in deciding on the outcome,
in spite of her not contributing anything.
³ Of course, our bidding language can be trivially extended
to allow for fully expressive bids, by also allowing bids from
a fully expressive bidding language, in addition to the bids
in our bidding language.
5. AVOIDING INDIRECT PAYMENTS
In an initial implementation, the approach of having
donations made out to a center, and having a center forward
these payments to charities, may not be desirable. Rather, it
may be preferable to have a partially decentralized solution,
where the donating parties write out checks to the charities
directly according to a solution prescribed by the center. In
this scenario, the center merely has to verify that parties are
giving the prescribed amounts. Advantages of this include
that the center can keep its legal status minimal, as well
as that we do not require the donating parties to trust the
center to transfer their donations to the charities (or require
some complicated verification protocol). It is also a step
towards a fully decentralized solution, if this is desirable.
To bring this about, we can still use the approach
described earlier. After we clear the market in the manner
described before, we know the amount that each donator is
supposed to give, and the amount that each charity is
supposed to receive. Then, it is straightforward to give some
specification of who should give how much to which charity,
that is consistent with that clearing. Any greedy algorithm
that increases the cash flow from any bidder who has not
yet paid enough, to any charity that has not yet received
enough, until either the bidder has paid enough or the
charity has received enough, will provide such a specification.
(All of this is assuming that Σ_{bj} πbj = Σ_{ci} πci. In the case
where there is nonzero surplus, that is, Σ_{bj} πbj > Σ_{ci} πci, we
can distribute this surplus across the bidders by not
requiring them to pay the full amount, or across the charities by
giving them more than the solution specifies.)
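A minimal sketch of such a greedy specification, assuming a
cleared zero-surplus solution (function and variable names are our
own):

```python
def decentralize(bidder_totals, charity_totals):
    """Greedy specification of who writes which check, given a cleared
    zero-surplus solution: repeatedly route as much as possible from
    the current underpaid bidder to the current underpaid charity."""
    bidders = list(bidder_totals.items())    # [(bidder, amount owed)]
    charities = list(charity_totals.items()) # [(charity, amount due)]
    flows, b, c = [], 0, 0
    while b < len(bidders) and c < len(charities):
        (bname, owed), (cname, due) = bidders[b], charities[c]
        x = min(owed, due)
        if x > 0:
            flows.append((bname, cname, x))
        bidders[b] = (bname, owed - x)
        charities[c] = (cname, due - x)
        if owed - x <= 0:   # bidder has paid enough
            b += 1
        if due - x <= 0:    # charity has received enough
            c += 1
    return flows

# Two bidders paying 0.5 each into one charity receiving 1.0:
print(decentralize({"b1": 0.5, "b2": 0.5}, {"A": 1.0}))
# [('b1', 'A', 0.5), ('b2', 'A', 0.5)]
```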
Nevertheless, with this approach, a bidder may have to
write out a check to a charity that she does not care for at
all. (For example, an environmental activist who was using
the system to increase donations to a wildlife preservation
fund may be required to write a check to a group
supporting a right-wing political party.) This is likely to lead to
complaints and noncompliance with the clearing. We can
address this issue by letting each bidder specify explicitly
(before the clearing) which charities she would be willing
to make a check out to. These additional constraints, of
course, may change the optimal solution. In general,
checking whether a given centralized solution (with zero surplus)
can be accomplished through decentralized payments when
there are such constraints can be modeled as a MAX-FLOW
problem. In the MAX-FLOW instance, there is an edge from
the source node s to each bidder bj, with a capacity of πbj
(as specified in the centralized solution); an edge from each
bidder bj to each charity ci that the bidder is willing to
donate money to, with a capacity of ∞; and an edge from each
charity ci to the target node t with capacity πci (as specified
in the centralized solution).
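This MAX-FLOW check can be set up directly; below is a hedged
sketch assuming the third-party networkx library (whose
maximum_flow treats edges without a 'capacity' attribute as
uncapacitated). The `allowed` dictionary is our own hypothetical
encoding of the bidders' constraints:

```python
import networkx as nx

def decentralizable(bidder_totals, charity_totals, allowed):
    """Check whether a zero-surplus centralized solution can be
    realized by direct bidder-to-charity checks, where
    allowed[bidder] is the set of charities that bidder will pay."""
    g = nx.DiGraph()
    for b, amount in bidder_totals.items():
        g.add_edge("s", b, capacity=amount)
        for c in allowed[b]:
            g.add_edge(b, c)               # infinite capacity
    for c, amount in charity_totals.items():
        g.add_edge(c, "t", capacity=amount)
    value, _ = nx.maximum_flow(g, "s", "t")
    return value >= sum(charity_totals.values()) - 1e-9

print(decentralizable({"b1": 1.0, "b2": 1.0}, {"A": 2.0},
                      {"b1": {"A"}, "b2": {"A"}}))  # True
```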
In the remainder of this paper, all our hardness results
apply even to the setting where there is no constraint on which
bidders can pay to which charity (that is, even the problem
as it was specified before this section is hard). We also
generalize our clearing algorithms to the partially decentralized
case with constraints.
6. HARDNESS OF CLEARING THE
MARKET
In this section, we will show that the clearing problem is
completely inapproximable, even when every bidder"s utility
function is linear (with slope 0 or 1 in each charity"s
payments), each bidder cares either about at most two charities
or about all charities equally, and each bidder"s payment
willingness function is a step function. We will reduce from
MAX2SAT (given a formula in conjunctive normal form
(where each clause has two literals) and a target number of
satisfied clauses T, does there exist an assignment of truth
values to the variables that makes at least T clauses true?),
which is NP-complete [7].
Theorem 1. There exists a reduction from MAX2SAT
instances to DONATION-CLEARING instances such that
1. If the MAX2SAT instance has no solution, then the
only valid outcome is the zero outcome (no bidder pays
anything and no charity receives anything); 2. Otherwise, there
exists a solution with positive surplus. Additionally, the
DONATION-CLEARING instances that we reduce to have
the following properties: 1. Every u^i_j is a line; that is, the
utility that each bidder derives from any charity is linear; 2.
All the u^i_j have slope either 0 or 1; 3. Every bidder either
has at most 2 charities that affect her utility (with slope 1),
or all charities affect her utility (with slope 1); 4. Every bid
is a threshold bid; that is, every bidder's payment willingness
function wj is a step function.
Proof. The problem is in NP because we can
nondeterministically choose the payments to be made and received,
and check the validity and objective value of this outcome.
In the following, we will represent bids as follows:
({(ck, ak)}, s, t) indicates that u^k_j(πck) = ak·πck (this
function is 0 for ck not mentioned in the bid), and wj(uj) = t
for uj ≥ s, wj(uj) = 0 otherwise.
To show NP-hardness, we reduce an arbitrary MAX2SAT
instance, given by a set of clauses K = {k} = {(l^1_k, l^2_k)}
over a variable set V together with a target number of
satisfied clauses T, to the following DONATION-CLEARING
instance. Let the set of charities be as follows. For every
literal l ∈ L, there is a charity cl. Then, let the set of
bids be as follows. For every variable v, there is a bid
bv = ({(c+v, 1), (c−v, 1)}, 2, 1 − 1/(4|V|)). For every literal l, there is
a bid bl = ({(cl, 1)}, 2, 1). For every clause k = {l^1_k, l^2_k} ∈ K,
there is a bid bk = ({(c_{l^1_k}, 1), (c_{l^2_k}, 1)}, 2, 1/(8|V||K|)). Finally,
there is a single bid that values all charities equally: b0 =
({(c1, 1), (c2, 1), . . . , (cm, 1)}, 2|V| + T/(8|V||K|), 1/4 + 1/(16|V||K|)).
We show the two instances are equivalent.
First, suppose there exists a solution to the MAX2SAT
instance. If in this solution, l is true, then let
πcl = 2 + T/(8|V|²|K|); otherwise πcl = 0. Also, the only bids that are
not accepted (meaning the threshold is not met) are the bl
where l is false, and the bk such that both of l^1_k, l^2_k are false.
First we show that no bidder whose bid is accepted pays
more than she is willing to. For each bv, either c+v or c−v
receives at least 2, so this bidder's threshold has been met.
For each bl, either l is false and the bid is not accepted, or l
is true, cl receives at least 2, and the threshold has been met.
For each bk, either both of l^1_k, l^2_k are false and the bid is not
accepted, or at least one of them (say l^i_k) is true (that is, k
is satisfied) and c_{l^i_k} receives at least 2, and the threshold has
been met. Finally, because the total amount received by the
charities is 2|V| + T/(8|V||K|), b0's threshold has also been met.
The total amount that can be extracted from the accepted
bids is at least
|V|(1 − 1/(4|V|)) + |V| + T·1/(8|V||K|) + 1/4 + 1/(16|V||K|) =
2|V| + T/(8|V||K|) + 1/(16|V||K|) > 2|V| + T/(8|V||K|),
so there is positive surplus. So there exists a solution with
positive surplus to the DONATION-CLEARING instance.
Now suppose there exists a nonzero outcome in the
DONATION-CLEARING instance. First we show that it
is not possible (for any v ∈ V) that both b+v and b−v are
accepted. For, this would require that πc+v + πc−v ≥ 4.
The bids bv, b+v, b−v cannot contribute more than 3, so
we need another 1 at least. It is easily seen that for any
other v′, accepting any subset of {bv′, b+v′, b−v′} would
require that at least as much is given to c+v′ and c−v′ as
can be extracted from these bids, so this cannot help.
Finally, all the other bids combined can contribute at most
|K|·1/(8|V||K|) + 1/4 + 1/(16|V||K|) < 1. It follows that we can
interpret the outcome in the DONATION-CLEARING instance
as a partial assignment of truth values to variables: v is set
to true if b+v is accepted, and to false if b−v is accepted. All
that is left to show is that this partial assignment satisfies
at least T clauses.
First we show that if a clause bid bk is accepted, then
either b_{l^1_k} or b_{l^2_k} is accepted (and thus either l^1_k or l^2_k is set
to true, hence k is satisfied). If bk is accepted, at least one
of c_{l^1_k} and c_{l^2_k} must be receiving at least 1; without loss of
generality, say it is c_{l^1_k}, and say l^1_k corresponds to variable
v^1_k (that is, it is +v^1_k or −v^1_k). If c_{l^1_k} does not receive at
least 2, b_{l^1_k} is not accepted, and it is easy to check that
the bids b_{v^1_k}, b_{+v^1_k}, b_{−v^1_k} contribute (at least) 1 less than is
paid to c_{+v^1_k} and c_{−v^1_k}. But this is the same situation that
we analyzed before, and we know it is impossible. All that
remains to show is that at least T clause bids are accepted.
We now show that b0 is accepted. Suppose it is not; then
one of the bv must be accepted. (The solution is nonzero by
assumption; if only some bk are accepted, the total payment
from these bids is at most |K|·1/(8|V||K|) < 1, which is not
enough for any bid to be accepted; and if one of the bl is
accepted, then the threshold for the corresponding bv is also
reached.) For this v, the bids bv, b+v, b−v contribute (at least)
1/(4|V|) less than the total payments to c+v and c−v. Again,
the other bv and bl cannot (by themselves) help to close this
gap; and the bk can contribute at most |K|·1/(8|V||K|) < 1/(4|V|).
It follows that b0 is accepted.
Now, in order for b0 to be accepted, a total of 2|V| + T/(8|V||K|)
must be donated. Because it is not possible (for any v ∈ V)
that both b+v and b−v are accepted, it follows that the total
payment by the bv and the bl can be at most 2|V| − 1/4.
Adding b0's payment of 1/4 + 1/(16|V||K|) to this, we still need
(T − 1/2)/(8|V||K|) from the bk. But each one of them contributes at
most 1/(8|V||K|), so at least T of them must be accepted.
Corollary 1. Unless P=NP, there is no polynomial-time
algorithm for approximating DONATION-CLEARING (with
either the surplus or the total amount donated as the
objective) within any ratio f(n), where f is a nonzero function of
the size of the instance. This holds even if the
DONATION-CLEARING structures satisfy all the properties given in
Theorem 1.
Proof. Suppose we had such a polynomial time
algorithm, and applied it to the DONATION-CLEARING
instances that were reduced from MAX2SAT instances in
Theorem 1. It would return a nonzero solution when the
MAX2SAT instance has a solution, and a zero solution
otherwise. So we can decide whether arbitrary MAX2SAT
instances are satisfiable this way, and it would follow that
P=NP.
(Solving the problem to optimality is NP-complete in many
other (noncomparable or even more restricted) settings as
well-we omit such results because of space constraint.)
This should not be interpreted to mean that our approach is
infeasible. First, as we will show, there are very expressive
families of bids for which the problem is solvable in
polynomial time. Second, NP-completeness is often overcome in
practice (especially when the stakes are high). For instance,
even though the problem of clearing combinatorial auctions
is NP-complete [20] (even to approximate [21]), they are
typically solved to optimality in practice.
7. MIXED INTEGER PROGRAMMING
FORMULATION
In this section, we give a mixed integer programming
(MIP) formulation for the general problem. We also discuss
in which special cases this formulation reduces to a linear
programming (LP) formulation. In such cases, the problem
is solvable in polynomial time, because linear programs can
be solved in polynomial time [11].
The variables of the MIP defining the final outcome are
the payments made to the charities, denoted by πci , and
the payments extracted from the bidders, πbj . In the case
where we try to avoid direct payments and let the bidders
pay the charities directly, we add variables πci,bj indicating
how much bj pays to ci, with the constraints that for each
ci, πci ≤ Σ_{bj} πci,bj; and for each bj, πbj ≥ Σ_{ci} πci,bj.
Additionally, there is a constraint πci,bj = 0 whenever bidder bj
is unwilling to pay charity ci. The rest of the MIP can be
phrased in terms of the πci and πbj .
The objectives we have discussed earlier are both linear:
surplus is given by Σ_{j=1}^{n} πbj − Σ_{i=1}^{m} πci, and total amount
donated is given by Σ_{i=1}^{m} πci (coefficients can be added to
represent different weights on the different charities in the
objective).
The constraint that the outcome should be valid (no deficit)
is given simply by: Σ_{j=1}^{n} πbj ≥ Σ_{i=1}^{m} πci.
For every bidder, for every charity, we define an additional
utility variable u^i_j indicating the utility that this bidder
derives from the payment to this charity. The bidder's total
utility is given by another variable uj, with the constraint
that uj = Σ_{i=1}^{m} u^i_j.
Each u^i_j is given as a function of πci by the (piecewise
linear) function provided by the bidder. In order to
represent this function in the MIP formulation, we will merely
place upper bounding constraints on u^i_j, so that it cannot
exceed the given functions. The MIP solver can then push
the u^i_j variables all the way up to the constraint, in order
to extract as much payment from this bidder as possible.
In the case where the u^i_j are concave, this is easy: if (sl, tl)
and (sl+1, tl+1) are endpoints of a finite linear segment in the
function, we add the constraint that
u^i_j ≤ tl + ((πci − sl)/(sl+1 − sl))·(tl+1 − tl).
If the final (infinite) segment starts at (sk, tk) and has
slope d, we add the constraint that u^i_j ≤ tk + d(πci − sk).
Using the fact that the function is concave, for each value of
πci, the tightest upper bound on u^i_j is the one corresponding
to the segment above that value of πci, and therefore these
constraints are sufficient to force the correct value of u^i_j.
When the function is not concave, we require (for the first
time) some binary variables. First, we define another point
on the function: (sk+1, tk+1) = (sk + M, tk + dM), where
d is the slope of the infinite segment and M is any upper
bound on the πcj. This has the effect that we will never be
on the infinite segment again. Now, let x^{i,j}_l be an indicator
variable that should be 1 if πci is below the lth segment of
the function, and 0 otherwise. To effect this, first add a
constraint Σ_{l=0}^{k} x^{i,j}_l = 1. Now, we aim to represent πci as a
weighted average of its two neighboring s^{i,j}_l. For 0 ≤ l ≤
k+1, let λ^{i,j}_l be the weight on s^{i,j}_l. We add the constraint
Σ_{l=0}^{k+1} λ^{i,j}_l = 1. Also, for 0 ≤ l ≤ k+1, we add the constraint
λ^{i,j}_l ≤ x_{l−1} + x_l (where x_{−1} and x_{k+1} are defined to be zero),
so that indeed only the two neighboring s^{i,j}_l have nonzero
weight. Now we add the constraint πci = Σ_{l=0}^{k+1} s^{i,j}_l·λ^{i,j}_l, and
now the λ^{i,j}_l must be set correctly. Then, we can set
u^i_j = Σ_{l=0}^{k+1} t^{i,j}_l·λ^{i,j}_l. (This is a standard MIP technique [16].)
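A sketch of this λ/indicator construction for a single non-concave
piecewise-linear function, assuming the third-party PuLP modeling
library (the breakpoint data and variable names are hypothetical;
in the full MIP, such constraints are added per bidder-charity
pair, alongside the objective):

```python
import pulp

# Breakpoints (s_0,t_0),...,(s_{k+1},t_{k+1}) of one (possibly
# non-concave) piecewise-linear utility, with the infinite segment
# already replaced by a point at s_k + M as described above.
pts = [(0, 0), (1, 2), (2, 2.25), (10, 4.25)]   # hypothetical data
K = len(pts) - 1                                # segments 0..K-1

prob = pulp.LpProblem("piecewise_utility", pulp.LpMaximize)
pi = pulp.LpVariable("pi", lowBound=0)
u = pulp.LpVariable("u")
x = [pulp.LpVariable(f"x_{l}", cat="Binary") for l in range(K)]
lam = [pulp.LpVariable(f"lam_{l}", lowBound=0) for l in range(K + 1)]

prob += pulp.lpSum(x) == 1        # exactly one active segment
prob += pulp.lpSum(lam) == 1      # convex-combination weights
for l in range(K + 1):            # only the neighbors of the
    left = x[l - 1] if l - 1 >= 0 else 0   # active segment may
    right = x[l] if l < K else 0           # carry weight
    prob += lam[l] <= left + right
prob += pi == pulp.lpSum(pts[l][0] * lam[l] for l in range(K + 1))
prob += u == pulp.lpSum(pts[l][1] * lam[l] for l in range(K + 1))
```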
Finally, each πbj is bounded by a function of uj by the
(piecewise linear) function provided by the bidder (wj).
Representing this function is entirely analogous to how we
represented u^i_j as a function of πci. (Again we will need binary
variables only if the function is not concave.)
Because we only use binary variables when either a
utility function u^i_j or a payment willingness function wj is not
concave, it follows that if all of these are concave, our MIP
formulation is simply a linear program, which can be solved
in polynomial time. Thus:
Theorem 2. If all functions u^i_j and wj are concave (and
piecewise linear), the DONATION-CLEARING problem can
be solved in polynomial time using linear programming.
Even if some of these functions are not concave, we can
simply replace each such function by the smallest upper
bounding concave function, and use the linear programming
formulation to obtain an upper bound on the objective,
which may be useful in a search formulation of the general
problem.
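As an illustration of Theorem 2, here is a hedged LP sketch, again
assuming PuLP; each concave piecewise-linear function is passed as
the (intercept, slope) pairs of its affine pieces, so each pair
yields one upper-bounding constraint as above (the encoding and
names are our own):

```python
import pulp

def clear_concave(n, m, u_segments, w_segments, objective="surplus"):
    """LP for DONATION-CLEARING with concave piecewise-linear bids.
    u_segments[j][i] and w_segments[j] are lists of (a, d) pairs:
    a concave function is the pointwise min of its pieces a + d*x."""
    prob = pulp.LpProblem("donation_clearing", pulp.LpMaximize)
    pi_c = [pulp.LpVariable(f"pi_c{i}", lowBound=0) for i in range(m)]
    pi_b = [pulp.LpVariable(f"pi_b{j}", lowBound=0) for j in range(n)]
    u = [[pulp.LpVariable(f"u_{j}_{i}") for i in range(m)]
         for j in range(n)]
    for j in range(n):
        for i in range(m):
            for a, d in u_segments[j][i]:   # u^i_j <= a + d * pi_ci
                prob += u[j][i] <= a + d * pi_c[i]
        for a, d in w_segments[j]:          # pi_bj <= a + d * u_j
            prob += pi_b[j] <= a + d * pulp.lpSum(u[j])
    prob += pulp.lpSum(pi_b) >= pulp.lpSum(pi_c)   # no deficit
    if objective == "surplus":
        prob += pulp.lpSum(pi_b) - pulp.lpSum(pi_c)
    else:                                   # total amount donated
        prob += pulp.lpSum(pi_c)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in pi_c], [v.value() for v in pi_b]

# The introduction's example: two bidders each value $1 given to a
# charity at $0.75 (u = min(0.75*pi, 0.75), w = identity).
pc, pb = clear_concave(2, 1,
                       [[[(0, 0.75), (0.75, 0)]] for _ in range(2)],
                       [[(0, 1)] for _ in range(2)])
print(pc, pb)  # pi_c = [1.0], pi_b = [0.75, 0.75]: surplus 0.5
```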
8. WHY ONE CANNOT DO MUCH
BETTER THAN LINEAR
PROGRAMMING
One may wonder if, for the special cases of the
DONATION-CLEARING problem that can be solved in polynomial time
with linear programming, there exist special purpose
algorithms that are much faster than linear programming
algorithms. In this section, we show that this is not the case.
We give a reduction from (the decision variant of) the
general linear programming problem to (the decision variant
of) a special case of the DONATION-CLEARING problem
(which can be solved in polynomial time using linear
programming). (The decision variant of an optimization
problem asks the binary question: Can the objective value
exceed o?) Thus, any special-purpose algorithm for solving
the decision variant of this special case of the
DONATION-CLEARING problem could be used to solve a decision
question about an arbitrary linear program just as fast. (And
thus, if we are willing to call the algorithm a logarithmic
number of times, we can solve the optimization version of
the linear program.)
We first observe that for linear programming, a decision
question about the objective can simply be phrased as
another constraint in the LP (forcing the objective to exceed
the given value); then, the original decision question
coincides with asking whether the resulting linear program has
a feasible solution.
Theorem 3. The question of whether an LP (given by a
set of linear constraints⁴) has a feasible solution can be
modeled as a DONATION-CLEARING instance with payment
maximization as the objective, with 2v charities and v + c
bids (where v is the number of variables in the LP, and c is
the number of constraints). In this model, each bid bj has
only linear u^i_j functions, and is a partially acceptable
threshold bid (wj(u) = tj for u ≥ sj, otherwise wj(u) = u·tj/sj). The
v bids corresponding to the variables mention only two
charities each; the c bids corresponding to the constraints mention
only two times the number of variables in the corresponding
constraint.
⁴ These constraints must include bounds on the variables
(including nonnegativity bounds), if any.
Proof. For every variable xi in the LP, let there be two
charities, c+xi and c−xi. Let H be some number such that
if there is a feasible solution to the LP, there is one in which
every variable has absolute value at most H.
In the following, we will represent bids as follows:
({(ck, ak)}, s, t) indicates that u^k_j(πck) = ak·πck (this
function is 0 for ck not mentioned in the bid), and wj(uj) = t
for uj ≥ s, wj(uj) = uj·t/s otherwise.
For every variable xi in the LP, let there be a bid
bxi = ({(c+xi, 1), (c−xi, 1)}, 2H, 2H − c/v). For every constraint
Σ_i r^j_i·xi ≤ sj in the linear program, let there be a bid
bj = ({(c−xi, r^j_i)}_{i: r^j_i > 0} ∪ {(c+xi, −r^j_i)}_{i: r^j_i < 0}, (Σ_i |r^j_i|)H − sj, 1).
Let the target total amount donated be 2vH.
Suppose there is a feasible solution (x*_1, x*_2, . . . , x*_v) to the
LP. Without loss of generality, we can suppose that |x*_i| ≤ H
for all i. Then, in the DONATION-CLEARING instance,
for every i, let πc+xi = H + x*_i, and let πc−xi = H − x*_i
(for a total payment of 2H to these two charities). This
allows us to extract the maximum payment from the bids
bxi-a total payment of 2vH − c. Additionally, the utility
of bidder bj is now
Σ_{i: r^j_i > 0} r^j_i(H − x*_i) + Σ_{i: r^j_i < 0} −r^j_i(H + x*_i) =
(Σ_i |r^j_i|)H − Σ_i r^j_i·x*_i ≥ (Σ_i |r^j_i|)H − sj (where the last
inequality stems from the fact that constraint j must be
satisfied in the LP solution), so it follows we can extract the
maximum payment from all the bidders bj, for a total
payment of c. It follows that we can extract the required 2vH
payment from the bidders, and there exists a solution to
the DONATION-CLEARING instance with a total amount
donated of at least 2vH.
Now suppose there is a solution to the
DONATION-CLEARING instance with a total amount donated of at
least 2vH. Then the maximum payment must be extracted
from each bidder. From the fact that the maximum payment
must be extracted from each bidder bxi, it follows that for
each i, πc+xi + πc−xi ≥ 2H. Because the maximum
extractable total payment is 2vH, it follows that for each i,
πc+xi + πc−xi = 2H. Let x*_i = πc+xi − H = H − πc−xi.
Then, from the fact that the maximum payment must be
extracted from each bidder bj, it follows that
(Σ_i |r^j_i|)H − sj ≤ Σ_{i: r^j_i > 0} r^j_i·πc−xi + Σ_{i: r^j_i < 0} −r^j_i·πc+xi =
Σ_{i: r^j_i > 0} r^j_i(H − x*_i) + Σ_{i: r^j_i < 0} −r^j_i(H + x*_i) =
(Σ_i |r^j_i|)H − Σ_i r^j_i·x*_i. Equivalently, Σ_i r^j_i·x*_i ≤ sj. It
follows that the x*_i constitute a feasible
solution to the LP.
9. QUASILINEAR BIDS
Another class of bids of interest is the class of quasilinear
bids. In a quasilinear bid, the bidder's payment willingness
function is linear in utility: that is, wj(uj) = uj. (Because the
units of utility are arbitrary, we may as well let them
correspond exactly to units of money-so we do not need a
constant multiplier.) In most cases, quasilinearity is an
unreasonable assumption: for example, usually bidders have a
limited budget for donations, so that the payment
willingness will stop increasing in utility after some point (or at
least increase slower in the case of a softer budget
constraint). Nevertheless, quasilinearity may be a reasonable
assumption in the case where the bidders are large
organizations with large budgets, and the charities are a few small
projects requiring relatively little money. In this setting,
once a certain small amount has been donated to a charity,
a bidder will derive no more utility from more money
being donated to that charity. Thus, the bidders will never
reach a high enough utility for their budget constraint (even
when it is soft) to take effect, and thus a linear
approximation of their payment willingness function is reasonable.
Another reason for studying the quasilinear setting is that
it is the easiest setting for mechanism design, which we will
discuss shortly. In this section, we will see that the clearing
problem is much easier in the case of quasilinear bids.
First, we address the case where we are trying to maximize
surplus (which is the most natural setting for mechanism
design). The key observation here is that when bids are
quasilinear, the clearing problem decomposes across charities.
Lemma 1. Suppose all bids are quasilinear, and surplus
is the objective. Then we can clear the market optimally by
clearing the market for each charity individually. That is,
for each bidder bj, let πbj = Σ_{ci} π^i_{bj}. Then, for each charity
ci, maximize (Σ_{bj} π^i_{bj}) − πci, under the constraint that for
every bidder bj, π^i_{bj} ≤ u^i_j(πci).
Proof. The resulting solution is certainly valid: first of
all, at least as much money is collected as is given away,
because Σ_{bj} πbj − Σ_{ci} πci = Σ_{bj} Σ_{ci} π^i_{bj} − Σ_{ci} πci =
Σ_{ci} ((Σ_{bj} π^i_{bj}) − πci)-and the terms of this summation are
the objectives of the individual optimization problems, each
of which can be set at least to 0 (by setting all the variables
to 0), so it follows that the expression is nonnegative. Second, no
bidder bj pays more than she is willing to, because
uj − πbj = Σ_{ci} u^i_j(πci) − Σ_{ci} π^i_{bj} = Σ_{ci} (u^i_j(πci) − π^i_{bj})-and
the terms of this summation are nonnegative by the constraints
we imposed on the individual optimization problems.
All that remains to show is that the solution is
optimal. Because in an optimal solution, we will extract as
much payment from the bidders as possible given the πci,
all we need to show is that the πci are set optimally by
this approach. Let π*ci be the amount paid to charity ci
in some optimal solution. If we change this amount to πci
and leave everything else unchanged, this will only affect
the payment that we can extract from the bidders because
of this particular charity, and the difference in surplus will
be Σ_{bj} [u^i_j(πci) − u^i_j(π*ci)] − πci + π*ci. This expression is, of
course, 0 if πci = π*ci. But now notice that this expression
is maximized as a function of πci by the decomposed
solution for this charity (the terms without πci in them do not
matter, and of course in the decomposed solution we always
set π^i_{bj} = u^i_j(πci)). It follows that if we change πci to the
decomposed solution, the change in surplus will be at least
0 (and the solution will still be valid). Thus, we can change
the πci one by one to the decomposed solution without ever
losing any surplus.
Theorem 4. When all bids are quasilinear and surplus
is the objective, DONATION-CLEARING can be done in
linear time.
Proof. By Lemma 1, we can solve the problem
separately for each charity. For charity ci, this amounts to
maximizing (Σ_{bj} u^i_j(πci)) − πci as a function of πci. Because all
its terms are piecewise linear functions, this whole function
is piecewise linear, and must be maximized at one of the
points where it is nondifferentiable. It follows that we need
only check all the points at which one of the terms is
nondifferentiable.
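A minimal sketch of this per-charity breakpoint check (the
representation is our own: each bidder's u^i_j is passed as a
callable together with its list of breakpoints):

```python
def clear_charity(us):
    """Per-charity clearing for quasilinear bids with surplus
    objective: maximize (sum_j u^i_j(pi)) - pi over pi; the optimum
    lies at 0 or at some breakpoint of one of the u^i_j."""
    candidates = {0.0}
    for _, bps in us:
        candidates.update(bps)
    def value(pi):
        return sum(f(pi) for f, _ in us) - pi
    return max(candidates, key=value)

# Two bidders, one charity: u = min(2*pi, 4) and u = min(pi, 1).
us = [(lambda p: min(2 * p, 4.0), [2.0]),
      (lambda p: min(p, 1.0), [1.0])]
print(clear_charity(us))  # 2.0: surplus 4 + 1 - 2 = 3
```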
Unfortunately, the decomposing lemma does not hold for
payment maximization.
Proposition 1. When the objective is payment
maximization, even when bids are quasilinear, the solution obtained
by decomposing the problem across charities is in general not
optimal (even with concave bids).
Proof. Consider a single bidder b1 placing the following
quasilinear bid over two charities c1 and c2: u^1_1(πc1) = 2πc1
for 0 ≤ πc1 ≤ 1, u^1_1(πc1) = 2 + (πc1 − 1)/4 otherwise;
u^2_1(πc2) = πc2/2. The decomposed solution is πc1 = 7/3,
πc2 = 0, for a total donation of 7/3. But the solution
πc1 = 1, πc2 = 2 is also valid, for a total donation of 3 > 7/3.
In fact, when payment maximization is the objective,
DONATION-CLEARING remains (weakly) NP-complete in
general. (In the remainder of the paper, proofs are omitted
because of space constraint.)
Theorem 5. DONATION-CLEARING is (weakly) NP-complete
when payment maximization is the objective, even
when every bid concerns only one charity (and has a
step-function utility function for this charity), and is quasilinear.
However, when the bids are also concave, a simple greedy
clearing algorithm is optimal.
Theorem 6. Given a DONATION-CLEARING instance
with payment maximization as the objective where all bids
are quasilinear and concave, consider the following
algorithm. Start with πci = 0 for all charities. Then, letting
γci = d(Σ_{bj} u^i_j(πci))/dπci (at nondifferentiable points, these
derivatives should be taken from the right), increase πc*_i
(where c*_i ∈ arg max_{ci} γci), until either γc*_i is no longer
the highest (in which case, recompute c*_i and start increasing
the corresponding payment), or Σ_{bj} uj = Σ_{ci} πci and
γc*_i < 1. Finally, let πbj = uj.
(A similar greedy algorithm works when the objective is
surplus and the bids are quasilinear and concave, the only
difference being that we stop increasing the payments as soon
as γc*_i < 1.)
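A coarse numerical sketch of this greedy (our own; derivatives are
approximated by forward differences and payments grow in fixed
steps, where a faithful implementation would instead jump between
breakpoints):

```python
def greedy_payment_max(us, step=1e-3, max_steps=10**6):
    """Sketch of the Theorem 6 greedy for quasilinear, concave bids
    with payment maximization: always push money toward the charity
    with the highest marginal total utility gamma, stopping once
    gamma < 1 and total utility no longer exceeds total donated.
    us[i] lists the bidders' u^i_j functions for charity i."""
    pi = [0.0] * len(us)
    gamma = lambda i: sum((u(pi[i] + step) - u(pi[i])) / step
                          for u in us[i])
    for _ in range(max_steps):
        i = max(range(len(us)), key=gamma)
        total_u = sum(u(pi[k]) for k in range(len(us)) for u in us[k])
        if gamma(i) < 1 and total_u <= sum(pi) + 1e-9:
            break
        pi[i] += step
    return pi

# Proposition 1's example: the greedy finds pi_c1 = 1, pi_c2 = 2
# (total donation 3, beating the decomposed solution's 7/3).
u11 = lambda p: 2 * p if p <= 1 else 2 + (p - 1) / 4
u21 = lambda p: p / 2
print(greedy_payment_max([[u11], [u21]]))  # approximately [1.0, 2.0]
```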
10. INCENTIVE COMPATIBILITY
Up to this point, we have not discussed the bidders'
incentives for bidding any particular way. Specifically, the bids
may not truthfully reflect the bidders' preferences over
charities, because a bidder may bid strategically, misrepresenting
her preferences in order to obtain a result that is better for
herself. This means the mechanism is not strategy-proof.
(We will show some concrete examples of this shortly.) This
is not too surprising, because the mechanism described so
far is, in a sense, a first-price mechanism, where the
mechanism will extract as much payment from a bidder as her bid
allows. Such mechanisms (for example, first-price auctions,
where winners pay the value of their bids) are typically not
strategy-proof: if a bidder reports her true valuation for an
outcome, then if this outcome occurs, the payment the
bidder will have to make will offset her gains from the outcome
completely. Of course, we could try to change the rules of
the game-which outcome (payment vector to charities) do
we select for which bid vector, and which bidder pays how
much-in order to make bidding truthfully beneficial, and
to make the outcome better with regard to the bidders" true
preferences. This is the field of mechanism design. In this
section, we will briefly discuss the options that mechanism
design provides for the expressive charity donation problem.
10.1 Strategic bids under the first-price
mechanism
We first point out some reasons for bidders to misreport
their preferences under the first-price mechanism described
in the paper up to this point. First of all, even when there is
only one charity, it may make sense to underbid one's true
valuation for the charity. For example, suppose a bidder
would like a charity to receive a certain amount x, but does
not care if the charity receives more than that. Additionally,
suppose that the other bids guarantee that the charity will
receive at least x no matter what bid the bidder submits
(and the bidder knows this). Then the bidder is best off not
bidding at all (or submitting a utility for the charity of 0),
to avoid having to make any payment. (This is known in
economics as the free rider problem [14].)
With multiple charities, another kind of manipulation may
occur, where the bidder attempts to steer others" payments
towards her preferred charity. Suppose that there are two
charities, and three bidders. The first bidder bids u^1_1(πc1) = 1
if πc1 ≥ 1, u^1_1(πc1) = 0 otherwise; u^2_1(πc2) = 1 if πc2 ≥ 1,
u^2_1(πc2) = 0 otherwise; and w1(u1) = u1 if u1 ≤ 1,
w1(u1) = 1 + (u1 − 1)/100 otherwise. The second bidder bids
u^1_2(πc1) = 1 if πc1 ≥ 1, u^1_2(πc1) = 0 otherwise;
u^2_2(πc2) = 0 (always); w2(u2) = u2/4 if u2 ≤ 1,
w2(u2) = 1/4 + (u2 − 1)/100 otherwise.
Now, the third bidder's true preferences are accurately
represented⁵ by the bid u^1_3(πc1) = 1 if πc1 ≥ 1, u^1_3(πc1) = 0
otherwise; u^2_3(πc2) = 3 if πc2 ≥ 1, u^2_3(πc2) = 0 otherwise;
and w3(u3) = u3/3 if u3 ≤ 1, w3(u3) = 1/3 + (u3 − 1)/100
otherwise. Now, it is straightforward to check that, if the third
bidder bids truthfully, regardless of whether the objective is
surplus maximization or total donated, charity 1 will receive
at least 1, and charity 2 will receive less than 1. The same is
true if bidder 3 does not place a bid at all (as in the previous
type of manipulation); hence bidder 3's utility will be 1 in
this case. But now, if bidder 3 reports u^1_3(πc1) = 0
everywhere; u^2_3(πc2) = 3 if πc2 ≥ 1, u^2_3(πc2) = 0 otherwise (this
part of the bid is truthful); and w3(u3) = u3/3 if u3 ≤ 1,
w3(u3) = 1/3 otherwise; then charity 2 will receive at least
1, and bidder 3 will have to pay at most 1/3. Because up to
this amount of payment, one unit of money corresponds to
three units of utility to bidder 3, it follows her utility is now
at least 3 − 1 = 2 > 1. We observe that in this case, the
strategic bidder is not only affecting how much the bidders
pay, but also how much the charities receive.
⁵ Formally, this means that if the bidder is forced to pay
the full amount that her bid allows for a particular vector of
payments to charities, the bidder is indifferent between this
and not participating in the mechanism at all. (Compare
this to bidding truthfully in a first-price auction.)
10.2 Mechanism design in the quasilinear
setting
There are four reasons why the mechanism design
approach is likely to be most successful in the setting of
quasilinear preferences. First, historically, mechanism design has
been most successful when the quasilinear assumption
could be made. Second, because of this success, some very
general mechanisms have been discovered for the
quasilinear setting (for instance, the VCG mechanisms [24, 4, 10],
or the dAGVA mechanism [6, 1]) which we could apply
directly to the expressive charity donation problem. Third, as
we saw in Section 9, the clearing problem is much easier in
this setting, and thus we are less likely to run into
computational trouble for the mechanism design problem. Fourth, as
we will show shortly, the quasilinearity assumption in some
cases allows for decomposing the mechanism design problem
over the charities (as it did for the simple clearing problem).
Moreover, in the quasilinear setting (unlike in the general
setting), it makes sense to pursue social welfare (the sum
of the utilities) as the objective, because now 1) units of
utility correspond directly to units of money, so that we do
not have the problem of the bidders arbitrarily scaling their
utilities; and 2) it is no longer possible to give a payment
willingness function of 0 while still affecting the donations
through a utility function.
Before presenting the decomposition result, we introduce
some terms from game theory. A type is a preference profile
that a bidder can have and can report (thus, a type report
is a bid). Incentive compatibility (IC) means that bidders
are best off reporting their preferences truthfully; either
regardless of the others" types (in dominant strategies), or in
expectation over them (in Bayes-Nash equilibrium).
Individual rationality (IR) means agents are at least as well off
participating in the mechanism as not participating; either
regardless of the others" types (ex-post), or in expectation
over them (ex-interim). A mechanism is budget balanced
if there is no flow of money into or out of the system-in
general (ex-post), or in expectation over the type reports
(ex-ante). A mechanism is efficient if it (always) produces
the efficient allocation of wealth to charities.
Theorem 7. Suppose all agents" preferences are
quasilinear. Furthermore, suppose that there exists a single-charity
mechanism M that, for a certain subclass P of (quasilinear)
preferences, under a given solution concept S
(implementation in dominant strategies or Bayes-Nash equilibrium) and
a given notion of individual rationality R (ex post, ex
interim, or none), satisfies a certain notion of budget balance
(ex post, ex ante, or none), and is ex-post efficient. Then
there exists such a mechanism for any number of charities.
Two mechanisms that satisfy efficiency (and can in fact be
applied directly to the multiple-charity problem without use
of the previous theorem) are the VCG (which is incentive
compatible in dominant strategies) and dAGVA (which is
incentive compatible only in Bayes-Nash equilibrium)
mechanisms. Each of them, however, has a drawback that would
probably make it impractical in the setting of donations to
charities. The VCG mechanism is not budget balanced. The
dAGVA mechanism does not satisfy ex-post individual
rationality. In the next subsection, we will investigate if we
can do better in the setting of donations to charities.
10.3 Impossibility of efficiency
In this subsection, we show that even in a very restricted
setting, and with minimal requirements on IC and IR
constraints, it is impossible to create a mechanism that is
efficient.
Theorem 8. There is no mechanism which is ex-post
budget balanced, ex-post efficient, and ex-interim individually
rational with Bayes-Nash equilibrium as the solution concept
(even with only one charity, only two quasilinear bidders,
with identical type distributions (uniform over two types,
with either both utility functions being step functions or both
utility functions being concave piecewise linear functions)).
The case of step-functions in this theorem corresponds
exactly to the case of a single, fixed-size, nonexcludable public
good (the public good being that the charity receives the
desired amount)-for which such an impossibility result is
already known [14]. Many similar results are known,
probably the most famous of which is the Myerson-Satterthwaite
impossibility result, which proves the impossibility of
efficient bilateral trade under the same requirements [15].
Theorem 7 indicates that there is no reason to decide on
donations to multiple charities under a single mechanism
(rather than a separate one for each charity), when an
efficient mechanism with the desired properties exists for the
single-charity case. However, because under the
requirements of Theorem 8, no such mechanism exists, there may
be a benefit to bringing the charities under the same
umbrella. The next proposition shows that this is indeed the
case.
Proposition 2. There exist settings with two charities
where there exists no ex-post budget balanced, ex-post
efficient, and ex-interim individually rational mechanism with
Bayes-Nash equilibrium as the solution concept for either
charity alone; but there exists an ex-post budget balanced,
ex-post efficient, and ex-post individually rational
mechanism with dominant strategies as the solution concept for
both charities together. (Even when the conditions are the
same as in Theorem 8, apart from the fact that there are
now two charities.)
11. CONCLUSION
We introduced a bidding language for expressing very
general types of matching offers over multiple charities. We
formulated the corresponding clearing problem (deciding how
much each bidder pays, and how much each charity receives),
and showed that it is NP-complete to approximate to any
ratio even in very restricted settings. We gave a mixed-integer
program formulation of the clearing problem, and showed
that for concave bids (where utility functions and payment
willingness functions are concave), the program reduces to a
linear program and can hence be solved in polynomial time.
We then showed that the clearing problem for a subclass of
concave bids is at least as hard as the decision variant of
linear programming, suggesting that we cannot do much better
than a linear programming implementation for such bids.
Subsequently, we showed that the clearing problem is much
easier when bids are quasilinear (where payment willingness
functions are linear)-for surplus, the problem decomposes
across charities, and for payment maximization, a greedy
approach is optimal if the bids are concave (although this
latter problem is weakly NP-complete when the bids are not
concave). For the quasilinear setting, we studied the
mechanism design question of making the bidders report their
preferences truthfully rather than strategically. We showed
that an ex-post efficient mechanism is impossible even with
only one charity and a very restricted class of bids. We
also showed that even though the clearing problem
decomposes over charities in the quasilinear setting, there may be
benefits to linking the charities from a mechanism design
standpoint.
There are many directions for future research. One is to
build a web-based implementation of the (first-price)
mechanism proposed in this paper. Another is to study the
computational scalability of our MIP/LP approach. It is also
important to identify other classes of bids (besides concave
ones) for which the clearing problem is tractable. Much
crucial work remains to be done on the mechanism design
problem. Finally, are there good iterative mechanisms for
charity donation?⁶
⁶ Compare, for example, iterative mechanisms in the
combinatorial auction setting [19, 25, 2].
12. REFERENCES
[1] K. Arrow. The property rights doctrine and demand
revelation under incomplete information. In
M. Boskin, editor, Economics and human welfare.
New York Academic Press, 1979.
[2] L. M. Ausubel and P. Milgrom. Ascending auctions
with package bidding. Frontiers of Theoretical
Economics, 1, 2002. No. 1, Article 1.
[3] Y. Bartal, R. Gonen, and N. Nisan. Incentive
compatible multi-unit combinatorial auctions. In
Theoretical Aspects of Rationality and Knowledge
(TARK IX), Bloomington, Indiana, USA, 2003.
[4] E. H. Clarke. Multipart pricing of public goods. Public
Choice, 11:17-33, 1971.
[5] V. Conitzer and T. Sandholm. Complexity of
mechanism design. In Proceedings of the 18th Annual
Conference on Uncertainty in Artificial Intelligence
(UAI-02), pages 103-110, Edmonton, Canada, 2002.
[6] C. d'Aspremont and L. A. Gérard-Varet. Incentives
and incomplete information. Journal of Public
Economics, 11:25-45, 1979.
[7] M. R. Garey, D. S. Johnson, and L. Stockmeyer. Some
simplified NP-complete graph problems. Theoretical
Computer Science, 1:237-267, 1976.
[8] D. Goldburg and S. McElligott. Red cross statement
on official donation locations. 2001. Press release,
http://www.redcross.org/press/disaster/ds pr/
011017legitdonors.html.
[9] R. Gonen and D. Lehmann. Optimal solutions for
multi-unit combinatorial auctions: Branch and bound
heuristics. In Proceedings of the ACM Conference on
Electronic Commerce (ACM-EC), pages 13-20,
Minneapolis, MN, Oct. 2000.
[10] T. Groves. Incentives in teams. Econometrica,
41:617-631, 1973.
[11] L. Khachiyan. A polynomial algorithm in linear
programming. Soviet Math. Doklady, 20:191-194,
1979.
[12] R. Lavi, A. Mu'alem, and N. Nisan. Towards a
characterization of truthful combinatorial auctions. In
Proceedings of the Annual Symposium on Foundations
of Computer Science (FOCS), 2003.
[13] D. Lehmann, L. I. O'Callaghan, and Y. Shoham.
Truth revelation in rapid, approximately efficient
combinatorial auctions. Journal of the ACM,
49(5):577-602, 2002. Early version appeared in
ACM EC-99.
[14] A. Mas-Colell, M. Whinston, and J. R. Green.
Microeconomic Theory. Oxford University Press, 1995.
[15] R. Myerson and M. Satterthwaite. Efficient
mechanisms for bilateral trading. Journal of Economic
Theory, 28:265-281, 1983.
[16] G. L. Nemhauser and L. A. Wolsey. Integer and
Combinatorial Optimization. John Wiley & Sons,
1999. Section 4, page 11.
[17] N. Nisan. Bidding and allocation in combinatorial
auctions. In Proceedings of the ACM Conference on
Electronic Commerce (ACM-EC), pages 1-12,
Minneapolis, MN, 2000.
[18] N. Nisan and A. Ronen. Computationally feasible
VCG mechanisms. In Proceedings of the ACM
Conference on Electronic Commerce (ACM-EC),
pages 242-252, Minneapolis, MN, 2000.
[19] D. C. Parkes. iBundle: An efficient ascending price
bundle auction. In Proceedings of the ACM Conference
on Electronic Commerce (ACM-EC), pages 148-157,
Denver, CO, Nov. 1999.
[20] M. H. Rothkopf, A. Pekeč, and R. M. Harstad.
Computationally manageable combinatorial auctions.
Management Science, 44(8):1131-1147, 1998.
[21] T. Sandholm. Algorithm for optimal winner
determination in combinatorial auctions. Artificial
Intelligence, 135:1-54, Jan. 2002. Conference version
appeared at the International Joint Conference on
Artificial Intelligence (IJCAI), pp. 542-547,
Stockholm, Sweden, 1999.
[22] T. Sandholm, S. Suri, A. Gilpin, and D. Levine.
CABOB: A fast optimal algorithm for combinatorial
auctions. In Proceedings of the Seventeenth
International Joint Conference on Artificial
Intelligence (IJCAI), pages 1102-1108, Seattle, WA,
2001.
[23] J. Tagliabue. Global AIDS Fund Is Given Attention,
but Not Money. The New York Times, June 1, 2003.
Reprinted on
http://www.healthgap.org/press releases/a03/
060103 NYT HGAP G8 fund.html.
[24] W. Vickrey. Counterspeculation, auctions, and
competitive sealed tenders. Journal of Finance,
16:8-37, 1961.
[25] P. R. Wurman and M. P. Wellman. AkBA: A
progressive, anonymous-price combinatorial auction.
In Proceedings of the ACM Conference on Electronic
Commerce (ACM-EC), pages 21-29, Minneapolis,
MN, Oct. 2000.
[26] M. Yokoo. The characterization of strategy/false-name
proof combinatorial auction protocols: Price-oriented,
rationing-free protocol. In Proceedings of the
Eighteenth International Joint Conference on Artificial
Intelligence (IJCAI), Acapulco, Mexico, Aug. 2003.
| bidding language;linear programming;expressive negotiation;supporter of charity;combinatorial auction;economic efficiency;charity supporter;expressive charity donation;threshold bid;incentive compatibility;market clear;concave bid;donation to charity;payment willingness function;negotiating material;quasilinearity;bidding framework;donation-clearing |
train_J-67 | Mechanism Design for Online Real-Time Scheduling | For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents. | 1. INTRODUCTION
We consider the problem of online scheduling of jobs on
a single processor. Each job is characterized by a release
time, a deadline, a processing time, and a value for successful
completion by its deadline. The objective is to maximize the
sum of the values of the jobs completed by their respective
deadlines. The key challenge in this online setting is that
the schedule must be constructed in real-time, even though
nothing is known about a job until its release time.
Competitive analysis [6, 10], with its roots in [12], is a
well-studied approach for analyzing online algorithms by
comparing them against the optimal offline algorithm, which
has full knowledge of the input at the beginning of its
execution. One interpretation of this approach is as a game
between the designer of the online algorithm and an adversary.
First, the designer selects the online algorithm. Then, the
adversary observes the algorithm and selects the sequence of
jobs that maximizes the competitive ratio: the ratio of the
value of the jobs completed by an optimal offline algorithm
to the value of those completed by the online algorithm.
Two papers paint a complete picture in terms of
competitive analysis for this setting, in which the algorithm is
assumed to know k, the maximum ratio between the value
densities (value divided by processing time) of any two jobs.
For k = 1, [4] presents a 4-competitive algorithm, and proves
that this is a lower bound on the competitive ratio for
deterministic algorithms. The same paper also generalizes the
lower bound to (1 + √k)^2 for any k ≥ 1, and [15] then presents a matching (1 + √k)^2-competitive algorithm.
The setting addressed by these papers is completely
nonstrategic, and the algorithm is assumed to always know the
true characteristics of each job upon its release. However,
in domains such as grid computing (see, for example, [7,
8]) this assumption is invalid, because buyers of processor
time choose when and how to submit their jobs.
Furthermore, sellers not only schedule jobs but also determine
the amount that they charge buyers, an issue not addressed
in the non-strategic setting.
Thus, we consider an extension of the setting in which
each job is owned by a separate, self-interested agent.
Instead of being released to the algorithm, each job is now
released only to its owning agent. Each agent now has four
different ways in which it can manipulate the algorithm: it
decides when to submit the job to the algorithm after the
true release time, it can artificially inflate the length of the
job, and it can declare an arbitrary value and deadline for
the job. Because the agents are self-interested, they will
choose to manipulate the algorithm if doing so will cause
their job to be completed; and, indeed, one can find
examples in which agents have incentive to manipulate the
algorithms presented in [4] and [15].
The addition of self-interested agents moves the problem
from the area of algorithm design to that of mechanism
design [17], the science of crafting protocols for self-interested
agents. Recent years have seen much activity at the
interface of computer science and mechanism design (see, e.g.,
[9, 18, 19]). In general, a mechanism defines a protocol for
interaction between the agents and the center that
culminates with the selection of an outcome. In our setting, a
mechanism will take as input a job from each agent, and
return a schedule for the jobs, and a payment to be made by
each agent to the center. A basic solution concept of
mechanism design is incentive compatibility, which, in our setting,
requires that it is always in each agent's best interests to
immediately submit its job upon release, and to truthfully
declare its value, length, and deadline.
In order to evaluate a mechanism using competitive
analysis, the adversary model must be updated. In the new
model, the adversary still determines the sequence of jobs,
but it is the self-interested agents who determine the
observed input of the mechanism. Thus, in order to achieve a
competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1/c of the value that the optimal offline mechanism achieves on the same sequence of jobs.
The rest of the paper is structured as follows. In
Section 2, we formally define and review results from the
original, non-strategic setting. After introducing the incentive
issues through an example, we formalize the mechanism
design setting in Section 3. In Section 4 we present our first
main result, a ((1 + √k)^2 + 1)-competitive mechanism, and
formally prove incentive compatibility and the competitive
ratio. We also show how we can simplify this mechanism for
the special case in which k = 1 and each agent cannot alter
the length of its job. Returning to the general setting, we show
in Section 5 that this competitive ratio is a lower bound for
deterministic mechanisms that do not pay agents. Finally,
in Section 6, we discuss related work other than the directly
relevant [4] and [15], before concluding with Section 7.
2. NON-STRATEGIC SETTING
In this section, we formally define the original, non-strategic
setting, and recap previous results.
2.1 Formulation
There exists a single processor on which jobs can execute,
and N jobs, although this number is not known beforehand.
Each job i is characterized by a tuple θi = (ri, di, li, vi),
which denotes the release time, deadline, length of
processing time required, and value, respectively. The space Θi of
possible tuples is the same for each job and consists of all
θi such that ri, di, li, vi ∈ ℝ+ (thus, the model of time is
continuous). Each job is released at time ri, at which point
its three other characteristics are known. Nothing is known
about the job before its arrival. Each deadline is firm (or,
hard), which means that no value is obtained for a job that
is completed after its deadline. Preemption of jobs is
allowed, and it takes no time to switch between jobs. Thus,
job i is completed if and only if the total time it executes
on the processor before di is at least li.
Let θ = (θ1, . . . , θN ) denote the vector of tuples for all
jobs, and let θ−i = (θ1, . . . , θi−1, θi+1, . . . , θN ) denote the
same vector without the tuple for job i. Thus, (θi, θ−i)
denotes a complete vector of tuples.
Define the value density ρi = vi/li of job i to be the ratio of its value to its length. For an input θ, denote the maximum and minimum value densities as ρmin = mini ρi and ρmax = maxi ρi. The importance ratio is then defined to be ρmax/ρmin, the maximal ratio of value densities between two jobs. The
algorithm is assumed to always know an upper bound k on
the importance ratio. For simplicity, we normalize the range
of possible value densities so that ρmin = 1.
An online algorithm is a function f : Θ1 × . . . × ΘN →
O that maps the vector of tuples (for any number N) to
an outcome o. An outcome o ∈ O is simply a schedule of
jobs on the processor, recorded by the function S : ℝ+ → {0, 1, . . . , N}, which maps each point in time to the active
job, or to 0 if the processor is idle.
To denote the total elapsed time that a job has spent on
the processor at time t, we will use the function ei(t) = ∫0^t µ(S(x) = i)dx, where µ(·) is an indicator function that returns 1 if the argument is true, and zero otherwise. A job's laxity at time t is defined to be di − t − (li − ei(t)),
the amount of time that it can remain inactive and still be
completed by its deadline. A job is abandoned if it cannot
be completed by its deadline (formally, if di −t+ei(t) < li).
Also, overload S(·) and ei(·) so that they can also take a
vector θ as an argument. For example, S(θ, t) is shorthand
for the S(t) of the outcome f(θ), and it denotes the active
job at time t when the input is θ.
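
To make these definitions concrete, the following is a minimal Python sketch of our own (not part of the paper): the names Job, value_density, laxity, abandoned, and completed are illustrative helpers built directly from the formulas above.

from dataclasses import dataclass

@dataclass
class Job:
    r: float  # release time r_i
    d: float  # deadline d_i
    l: float  # processing time (length) l_i
    v: float  # value v_i

def value_density(job):
    # rho_i = v_i / l_i
    return job.v / job.l

def laxity(job, t, elapsed):
    # d_i - t - (l_i - e_i(t)): how long the job can stay inactive
    # and still be completable by its deadline
    return job.d - t - (job.l - elapsed)

def abandoned(job, t, elapsed):
    # the job can no longer meet its deadline: d_i - t + e_i(t) < l_i
    return job.d - t + elapsed < job.l

def completed(job, elapsed_at_deadline):
    # e_i(d_i) >= l_i
    return elapsed_at_deadline >= job.l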
Since a job cannot be executed before its release time, the
space of possible outcomes is restricted in that S(θ, t) = i
implies ri ≤ t. Also, because the online algorithm must
produce the schedule over time, without knowledge of future
inputs, it must make the same decision at time t for inputs
that are indistinguishable at this time. Formally, let θ(t) denote the subset of the tuples in θ that satisfy ri ≤ t. The constraint is then that θ(t) = θ′(t) implies S(θ, t) = S(θ′, t).
The objective function is the sum of the values of the jobs that are completed by their respective deadlines: W(o, θ) = Σi vi · µ(ei(θ, di) ≥ li). Let W∗(θ) = maxo∈O W(o, θ) denote the maximum possible total value for the profile θ.
In competitive analysis, an online algorithm is evaluated
by comparing it against an optimal offline algorithm.
Because the offline algorithm knows the entire input θ at time
0 (but still cannot start each job i until time ri), it always achieves W∗(θ). An online algorithm f(·) is (strictly) c-competitive if there does not exist an input θ such that c · W(f(θ), θ) < W∗(θ). An algorithm that is c-competitive is also said to achieve a competitive ratio of c.
We assume that there does not exist an overload period
of infinite duration. A period of time [ts, tf] is overloaded if the sum of the lengths of the jobs whose release time and deadline both fall within the time period exceeds the duration of the interval (formally, if tf − ts ≤ Σi|(ts ≤ ri, di ≤ tf) li).
Without such an assumption, it is not possible to achieve a
finite competitive ratio [15].
2.2 Previous Results
In the non-strategic setting, [4] presents a 4-competitive
algorithm called TD1 (version 2) for the case of k = 1, while
[15] presents a (1 + √k)^2-competitive algorithm called D^over for the general case of k ≥ 1. Matching lower bounds for deterministic algorithms for both of these cases were shown
in [4]. In this section we provide a high-level description of
TD1 (version 2) using an example.
TD1 (version 2) divides the schedule into intervals, each
of which begins when the processor transitions from idle to
busy (call this time tb), and ends with the completion of
a job. The first active job of an interval may have laxity;
however, for the remainder of the interval, preemption of the
active job is only considered when some other job has zero
laxity. For example, when the input is the set of jobs listed
in Table 1, the first interval is the complete execution of
job 1 over the range [0.0, 0.9]. No preemption is considered
during this interval, because job 2 has laxity until time 1.5.
Then, a new interval starts at tb = 0.9 when job 2 becomes
active. Before job 2 can finish, preemption is considered at
time 4.8, when job 3 is released with zero laxity.
In order to decide whether to preempt the active job, TD1
(version 2) uses two more variables: te and p_loss. The former records the latest deadline of a job that would be abandoned if the active job executes to completion (or, if no such job exists, the time that the active job will finish if it is not preempted). In this case, te = 17.0. The value te − tb represents an upper bound on the amount of possible execution time lost to the optimal offline algorithm due to the completion of the active job. The other variable, p_loss, is equal to the length of the first active job of the current interval. Because in general this job could have laxity, the offline algorithm may be able to complete it outside of the range [tb, te].1 If the algorithm completes the active job and this job's length is at least (te − tb + p_loss)/4, then the algorithm is guaranteed to be 4-competitive for this interval (note that k = 1 implies that all jobs have the same value density and thus that lengths can be used to compute the competitive ratio). Because this is not the case at time 4.8 (since (te − tb + p_loss)/4 = (17.0 − 0.9 + 4.0)/4 > 4.0 = l2), the algorithm preempts job 2 for job 3, which then executes to completion.
Job   ri    di    li    vi
1     0.0   0.9   0.9   0.9
2     0.5   5.5   4.0   4.0
3     4.8   17.0  12.2  12.2

Table 1: Input used to recap TD1 (version 2) [4]. The up and down arrows [in the original timeline figure, omitted here] represent ri and di, respectively, while the length of the box equals li.
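
As a quick arithmetic check of the interval test just described (our own sketch, not the full TD1 implementation), the preemption decision at time 4.8 for the jobs of Table 1 can be reproduced directly:

# Interval state at time 4.8 (job 2 active, job 3 just released)
t_b = 0.9       # interval start: job 2 becomes active
t_e = 17.0      # latest deadline abandoned if job 2 runs to completion
p_loss = 4.0    # length of the first active job of the interval (job 2)
l_2 = 4.0       # length of the active job 2

threshold = (t_e - t_b + p_loss) / 4   # = 5.025
# Completing job 2 guarantees 4-competitiveness for the interval only if
# l_2 >= threshold; here 4.0 < 5.025, so job 2 is preempted for job 3.
print(threshold > l_2)                 # True -> preempt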
3. MECHANISM DESIGN SETTING
However, false information about job 2 would cause TD1
(version 2) to complete this job. For example, if job 2's deadline were declared as ˆd2 = 4.7, then it would have zero laxity at time 0.7. At this time, the algorithm would preempt job 1 for job 2, because (te − tb + p_loss)/4 = (4.7 − 0.0 + 1.0)/4 > 0.9 = l1. Job 2 would then complete before the arrival of job 3.2
1 While it would be easy to alter the algorithm to recognize that this is not possible for the jobs in Table 1, our example does not depend on the use of p_loss.
2 While we will not describe the significantly more complex D^over, we note that it is similar in its use of intervals and its preference for the active job. Also, we note that the lower bound we will show in Section 5 implies that false information can also benefit a job in D^over.
In order to address incentive issues such as this one, we
need to formalize the setting as a mechanism design
problem. In this section we first present the mechanism design
formulation, and then define our goals for the mechanism.
3.1 Formulation
There exists a center, who controls the processor, and
N agents, where the value of N is unknown by the center
beforehand. Each job i is owned by a separate agent i. The
characteristics of the job define the agent's type θi ∈ Θi.
At time ri, agent i privately observes its type θi, and has
no information about job i before ri. Thus, jobs are still
released over time, but now each job is revealed only to the
owning agent.
Agents interact with the center through a direct
mechanism Γ = (Θ1, . . . , ΘN , g(·)), in which each agent declares a
job, denoted by ˆθi = (ˆri, ˆdi, ˆli, ˆvi), and g : Θ1×. . .×ΘN → O
maps the declared types to an outcome o ∈ O. An outcome
o = (S(·), p1, . . . , pN ) consists of a schedule and a payment
from each agent to the mechanism.
In a standard mechanism design setting, the outcome is
enforced at the end of the mechanism. However, since the
end is not well-defined in this online setting, we choose to
model returning the job if it is completed and collecting a
payment from each agent i as occurring at ˆdi, which,
according to the agent's declaration, is the latest relevant point of
time for that agent. That is, even if job i is completed before
ˆdi, the center does not return the job to agent i until that
time. This modelling decision could instead be viewed as a
decision by the mechanism designer from a larger space of
possible mechanisms. Indeed, as we will discuss later, this
decision of when to return a completed job is crucial to our
mechanism.
Each agent's utility, ui(g(ˆθ), θi) = vi · µ(ei(ˆθ, di) ≥ li) ·
µ( ˆdi ≤ di) − pi(ˆθ), is a quasi-linear function of its value for
its job (if completed and returned by its true deadline) and
the payment it makes to the center. We assume that each
agent is a rational, expected utility maximizer.
Agent declarations are restricted in that an agent cannot
declare a length shorter than the true length, since the center
would be able to detect such a lie if the job were completed.
On the other hand, in the general formulation we will allow
agents to declare longer lengths, since in some settings it
may be possible to add unnecessary work to a job. However,
we will also consider a restricted formulation in which this
type of lie is not possible. The declared release time ˆri
is the time that the agent chooses to submit job i to the
center, and it cannot precede the time ri at which the job
is revealed to the agent. The agent can declare an arbitrary
deadline or value. To summarize, agent i can declare any
type ˆθi = (ˆri, ˆdi, ˆli, ˆvi) such that ˆli ≥ li and ˆri ≥ ri.
While in the non-strategic setting it was sufficient for the
algorithm to know the upper bound k on the ratio ρmax/ρmin, in the mechanism design setting we will strengthen this
assumption so that the mechanism also knows ρmin (or,
equivalently, the range [ρmin, ρmax] of possible value densities).3
3 Note that we could then force agent declarations to satisfy ρmin ≤ ˆvi/ˆli ≤ ρmax. However, this restriction would not decrease the lower bound on the competitive ratio.
While we feel that it is unlikely that a center would know k
without knowing this range, we later present a mechanism
that does not depend on this extra knowledge in a restricted
setting.
The restriction on the schedule is now that S(ˆθ, t) = i
implies ˆri ≤ t, to capture the fact that a job cannot be
scheduled on the processor before it is declared to the mechanism.
As before, preemption of jobs is allowed, and job switching
takes no time.
The constraints due to the online mechanism"s lack of
knowledge of the future are that ˆθ(t) = ˆθ′(t) implies S(ˆθ, t) = S(ˆθ′, t), and ˆθ(ˆdi) = ˆθ′(ˆdi) implies pi(ˆθ) = pi(ˆθ′) for each
agent i. The setting can then be summarized as follows.
Overview of the Setting:
for all t do
  The center instantiates S(ˆθ, t) ← i, for some i s.t. ˆri ≤ t
  if ∃i, (ri = t) then
    θi is revealed to agent i
  if ∃i, (t ≥ ri) and agent i has not declared a job then
    Agent i can declare any job ˆθi, s.t. ˆri = t and ˆli ≥ li
  if ∃i, (ˆdi = t) ∧ (ei(ˆθ, t) ≥ li) then
    Completed job i is returned to agent i
  if ∃i, (ˆdi = t) then
    Center sets and collects payment pi(ˆθ) from agent i
3.2 Mechanism Goals
Our aim as mechanism designer is to maximize the value
of completed jobs, subject to the constraints of incentive
compatibility and individual rationality.
The condition for (dominant strategy) incentive
compatibility is that for each agent i, regardless of its true type
and of the declared types of all other agents, agent i cannot
increase its utility by unilaterally changing its declaration.
Definition 1. A direct mechanism Γ satisfies incentive
compatibility (IC) if ∀i, θi, ˆθi, ˆθ−i :
ui(g(θi, ˆθ−i), θi) ≥ ui(g(ˆθi, ˆθ−i), θi)
From an agent perspective, dominant strategies are
desirable because the agent does not have to reason about either
the strategies of the other agents or the distribution from
which the other agents' types are drawn. From a
mechanism designer perspective, dominant strategies are
important because we can reasonably assume that an agent who
has a dominant strategy will play according to it. For these
reasons, in this paper we require dominant strategies, as
opposed to a weaker equilibrium concept such as Bayes-Nash,
under which we could improve upon our positive results.4
4 A possible argument against the need for incentive
compatibility is that an agent's lie may actually improve the
schedule. In fact, this was the case in the example we showed
for the false declaration ˆd2 = 4.7. However, if an agent lies
due to incorrect beliefs over the future input, then the lie
could instead make the schedule worse (for example, if
job 3 were never released, then job 1 would have been
unnecessarily abandoned). Furthermore, if we do not know the
beliefs of the agents, and thus cannot predict how they will
lie, then we can no longer provide a competitive guarantee
for our mechanism.
While restricting ourselves to incentive compatible direct
mechanisms may seem limiting at first, the Revelation
Principle for Dominant Strategies (see, e.g., [17]) tells us that if
our goal is dominant strategy implementation, then we can
make this restriction without loss of generality.
The second goal for our mechanism, individual rationality,
requires that agents who truthfully reveal their type never
have negative utility. The rationale behind this goal is that
participation in the mechanism is assumed to be voluntary.
Definition 2. A direct mechanism Γ satisfies individual
rationality (IR) if ∀i, θi, ˆθ−i, ui(g(θi, ˆθ−i), θi) ≥ 0.
Finally, the social welfare function that we aim to
maximize is the same as the objective function of the non-strategic
setting: W(o, θ) = Σi vi · µ(ei(θ, di) ≥ li). As in the non-strategic setting, we will evaluate an online mechanism using
competitive analysis to compare it against an optimal offline
mechanism (which we will denote by Γoffline). An offline
mechanism knows all of the types at time 0, and thus can
always achieve W∗(θ).5
Definition 3. An online mechanism Γ is (strictly)
c-competitive if it satisfies IC and IR, and if there does not exist a profile of agent types θ such that c · W(g(θ), θ) < W∗(θ).
4. RESULTS
In this section, we first present our main positive result: a
((1 + √k)^2 + 1)-competitive mechanism (Γ1). After providing
some intuition as to why Γ1 satisfies individual rationality
and incentive compatibility, we formally prove first these two
properties and then the competitive ratio. We then consider
a special case in which k = 1 and agents cannot lie about the
length of their job, which allows us to alter this mechanism
so that it no longer requires either knowledge of ρmin or the
collection of payments from agents.
Unlike TD1 (version 2) and D^over, Γ1 gives no preference to the active job. Instead, it always executes the available job with the highest priority: (ˆvi + √k · ei(ˆθ, t) · ρmin).
Each agent whose job is completed is then charged the
lowest value that it could have declared such that its job still
would have been completed, holding constant the rest of its
declaration.
By the use of a payment rule similar to that of a second-price auction, Γ1 satisfies both IC with respect to values
and IR. We now argue why it satisfies IC with respect to
the other three characteristics. Declaring an improved job
(i.e., declaring an earlier release time, a shorter length, or
a later deadline) could possibly decrease the payment of an
agent. However, the first two lies are not possible in our
setting, while the third would cause the job, if it is completed,
to be returned to the agent after the true deadline. This is
the reason why it is important to always return a completed
job at its declared deadline, instead of at the point at which
it is completed.
5 Another possibility is to allow only the agents to know
their types at time 0, and to force Γoffline to be incentive
compatible so that agents will truthfully declare their types
at time 0. However, this would not affect our results, since
executing a VCG mechanism (see, e.g., [17]) at time 0 both
satisfies incentive compatibility and always maximizes social
welfare.
Mechanism 1 Γ1
Execute S(ˆθ, ·) according to Algorithm 1
for all i do
  if ei(ˆθ, ˆdi) ≥ ˆli {Agent i's job is completed} then
    pi(ˆθ) ← arg min_{vi ≥ 0} (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli)
  else
    pi(ˆθ) ← 0

Algorithm 1
for all t do
  Avail ← {i | (t ≥ ˆri) ∧ (ei(ˆθ, t) < ˆli) ∧ (ei(ˆθ, t) + ˆdi − t ≥ ˆli)}
  {Set of all released, non-completed, non-abandoned jobs}
  if Avail ≠ ∅ then
    S(ˆθ, t) ← arg max_{i ∈ Avail} (ˆvi + √k · ei(ˆθ, t) · ρmin)
    {Break ties in favor of lower ˆri}
  else
    S(ˆθ, t) ← 0
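
The following discretized Python sketch of ours illustrates Algorithm 1 together with Mechanism 1's critical-value payment; the paper's model is continuous-time, so the step size dt, the horizon, and the helper names run and payment are all assumptions of this illustration. The bisection in payment relies on the monotonicity of completion in the declared value (Lemma 5 below).

import math

def run(jobs, k, rho_min=1.0, horizon=100.0, dt=0.01):
    # jobs: dict id -> (r, d, l, v); returns elapsed time e_i per job id
    e = {i: 0.0 for i in jobs}
    t = 0.0
    while t < horizon:
        avail = [i for i, (r, d, l, v) in jobs.items()
                 if t >= r and e[i] < l and e[i] + d - t >= l]
        if avail:
            # highest priority v_i + sqrt(k) * e_i(t) * rho_min,
            # with ties broken in favor of the lower release time
            j = max(avail, key=lambda i: (jobs[i][3] + math.sqrt(k) * e[i] * rho_min,
                                          -jobs[i][0]))
            e[j] += dt
        t += dt
    return e

def payment(i, jobs, k, eps=1e-3):
    # lowest declared value for which job i is still completed,
    # holding the rest of its declaration fixed (bisection over v_i)
    r, d, l, v = jobs[i]
    lo, hi = 0.0, v
    while hi - lo > eps:
        mid = (lo + hi) / 2
        trial = dict(jobs)
        trial[i] = (r, d, l, mid)
        if run(trial, k)[i] >= l - eps:   # completed at declared value mid
            hi = mid
        else:
            lo = mid
    return hi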
It remains to argue why an agent does not have incentive
to worsen its job. The only possible effects of an inflated
length are delaying the completion of the job and causing it
to be abandoned, and the only possible effects of an earlier
declared deadline are causing it to be abandoned and causing it to be returned earlier (which has no effect on the agent's utility in our setting). On the other hand, it is less obvious why agents do not have incentive to declare a later release time. Consider a mechanism Γ1′ that differs from Γ1 in that it does not preempt the active job i unless there exists another job j such that (ˆvi + √k · ˆli · ρmin) < ˆvj. Note that as an active job approaches completion in Γ1′, its condition for preemption approaches that of Γ1.
However, the types in Table 2 for the case of k = 1 show
why an agent may have incentive to delay the arrival of its
job under Γ1′. Job 1 becomes active at time 0, and job 2
is abandoned upon its release at time 6, because 10 + 10 =
v1 +l1 > v2 = 13. Then, at time 8, job 1 is preempted by job
3, because 10 + 10 = v1 + l1 < v3 = 22. Job 3 then executes
to completion, forcing job 1 to be abandoned. However, job
2 had more weight than job 1, and would have prevented
job 3 from being executed if it had been the active job at
time 8, since 13 + 13 = v2 + l2 > v3 = 22. Thus, if agent
1 had falsely declared ˆr1 = 20, then job 3 would have been
abandoned at time 8, and job 1 would have completed over
the range [20, 30].
Job   ri   di   li   vi
1     0    30   10   10
2     6    19   13   13
3     8    30   22   22

Table 2: Jobs used to show why a slightly altered version of Γ1 would not be incentive compatible with respect to release times. (The accompanying timeline figure is omitted here.)
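
The comparisons driving this example can be checked directly; with k = 1 and ρmin = 1, the non-preemption condition of Γ1′ reduces to comparing vi + li against vj (a small script of ours):

# Table 2 under the altered mechanism Gamma_1' (k = 1, rho_min = 1)
v1, l1 = 10, 10
v2, l2 = 13, 13
v3, l3 = 22, 22

print(v1 + l1 > v2)   # True: at t = 6, zero-laxity job 2 is abandoned
print(v1 + l1 < v3)   # True: at t = 8, job 3 preempts job 1
print(v2 + l2 > v3)   # True: job 2, if active at t = 8, would block job 3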
Intuitively, Γ1 avoids this problem because of two
properties. First, when a job becomes active, it must have a greater
priority than all other available jobs. Second, because a job's priority can only increase through the increase of its elapsed time, ei(ˆθ, t), the rate of increase of a job's priority is
independent of its characteristics. These two properties together
imply that, while a job is active, there cannot exist a time
at which its priority is less than the priority that one of
these other jobs would have achieved by executing on the
processor instead.
4.1 Proof of Individual Rationality and
Incentive Compatibility
After presenting the (trivial) proof of IR, we break the
proof of IC into lemmas.
Theorem 1. Mechanism Γ1 satisfies individual
rationality.
Proof. For arbitrary i, θi, ˆθ−i, if job i is not completed,
then agent i pays nothing and thus has a utility of zero;
that is, pi(θi, ˆθ−i) = 0 and ui(g(θi, ˆθ−i), θi) = 0. On the
other hand, if job i is completed, then its value must
exceed agent i's payment. Formally, ui(g(θi, ˆθ−i), θi) = vi − arg min_{vi′ ≥ 0}(ei(((ri, di, li, vi′), ˆθ−i), di) ≥ li) ≥ 0 must hold, since vi′ = vi satisfies the condition.
To prove IC, we need to show that for an arbitrary agent
i, and an arbitrary profile ˆθ−i of declarations of the other
agents, agent i can never gain by making a false declaration
ˆθi ≠ θi, subject to the constraints that ˆri ≥ ri and ˆli ≥ li.
We start by showing that, regardless of ˆvi, if truthful
declarations of ri, di, and li do not cause job i to be completed,
then worse declarations of these variables (that is,
declarations that satisfy ˆri ≥ ri, ˆli ≥ li and ˆdi ≤ di) can never
cause the job to be completed. We break this part of the
proof into two lemmas, first showing that it holds for the
release time, regardless of the declarations of the other
variables, and then for length and deadline.
Lemma 2. In mechanism Γ1, the following condition holds
for all i, θi, ˆθ−i: ∀ ˆvi, ˆli ≥ li, ˆdi ≤ di, ˆri ≥ ri,
ei(((ˆri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli =⇒ ei(((ri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli
Proof. Assume by contradiction that this condition does
not hold; that is, job i is not completed when ri is truthfully
declared, but is completed for some false declaration ˆri ≥
ri. We first analyze the case in which the release time is
truthfully declared, and then we show that job i cannot be
completed when agent i delays submitting it to the center.
Case I: Agent i declares ˆθi = (ri, ˆdi, ˆli, ˆvi).
First, define the following three points in the execution of
job i.
• Let ts = arg min_t [S((ˆθi, ˆθ−i), t) = i] be the time that job i first starts execution.
• Let tp = arg min_{t > ts} [S((ˆθi, ˆθ−i), t) ≠ i] be the time that job i is first preempted.
• Let ta = arg min_t [ei((ˆθi, ˆθ−i), t) + ˆdi − t < ˆli] be the time that job i is abandoned.
If ts and tp are undefined because job i never becomes active, then let ts = tp = ta.
Also, partition the jobs declared by other agents before ta into the following three sets.
• X = {j | (ˆrj < tp) ∧ (j ≠ i)} consists of the jobs (other than i) that arrive before job i is first preempted.
• Y = {j | (tp ≤ ˆrj ≤ ta) ∧ (ˆvj > ˆvi + √k · ei((ˆθi, ˆθ−i), ˆrj))} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have higher priority than job i (note that we are making use of the normalization ρmin = 1).
• Z = {j | (tp ≤ ˆrj ≤ ta) ∧ (ˆvj ≤ ˆvi + √k · ei((ˆθi, ˆθ−i), ˆrj))} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have lower priority than job i.
We now show that all active jobs during the range (tp, ta] must be either i or in the set Y. Unless tp = ta (in which case this property trivially holds), it must be the case that job i has a higher priority than an arbitrary job x ∈ X at time tp, since at the time just preceding tp job x was available and job i was active. Formally, ˆvx + √k · ex((ˆθi, ˆθ−i), tp) < ˆvi + √k · ei((ˆθi, ˆθ−i), tp) must hold.6 We can then show that, over the range [tp, ta], no job x ∈ X runs on the processor. Assume by contradiction that this is not true. Let tf ∈ [tp, ta] be the earliest time in this range that some job x ∈ X is active, which implies that ex((ˆθi, ˆθ−i), tf) = ex((ˆθi, ˆθ−i), tp). We can then show that job i has a higher priority at time tf as follows: ˆvx + √k · ex((ˆθi, ˆθ−i), tf) = ˆvx + √k · ex((ˆθi, ˆθ−i), tp) < ˆvi + √k · ei((ˆθi, ˆθ−i), tp) ≤ ˆvi + √k · ei((ˆθi, ˆθ−i), tf), contradicting the fact that job x is active at time tf.
A similar argument applies to an arbitrary job z ∈ Z, starting at its release time ˆrz > tp, since by definition job i has a higher priority at that time. The only remaining jobs that can be active over the range (tp, ta] are i and those in the set Y.
Case II: Agent i declares ˆθi′ = (ˆri, ˆdi, ˆli, ˆvi), where ˆri > ri.
We now show that job i cannot be completed in this case,
given that it was not completed in case I. First, we can
restrict the range of ˆri that we need to consider as follows.
Declaring ˆri ∈ (ri, ts] would not affect the schedule, since ts would still be the first time that job i executes. Also, declaring ˆri > ta could not cause the job to be completed, since di − ta < ˆli holds, which implies that job i would be abandoned at its release. Thus, we can restrict consideration to ˆri ∈ (ts, ta].
In order for declaring ˆθi′ to cause job i to be completed, a necessary condition is that the execution of some job yc ∈ Y must change during the range (tp, ta], since the only jobs other than i that are active during that range are in Y. Let tc = arg min_{t ∈ (tp, ta]} [∃yc ∈ Y, (S((ˆθi, ˆθ−i), t) = yc) ∧ (S((ˆθi′, ˆθ−i), t) ≠ yc)] be the first time that such a change occurs. We will now show that for any ˆri ∈ (ts, ta], there cannot exist a job with higher priority than yc at time tc, contradicting (S((ˆθi′, ˆθ−i), tc) ≠ yc).
First note that job i cannot have a higher priority, since there would have to exist a t ∈ (tp, tc) such that ∃y ∈ Y, (S((ˆθi, ˆθ−i), t) = y) ∧ (S((ˆθi′, ˆθ−i), t) = i), contradicting the definition of tc.
6 For simplicity, when we give the formal condition for a job x to have a higher priority than another job y, we will assume that job x's priority is strictly greater than job y's, because, in the case of a tie that favors x, future ties would also be broken in favor of job x.
Now consider an arbitrary y ∈ Y such that y ≠ yc. In case I, we know that job y has lower priority than yc at time tc; that is, ˆvy + √k · ey((ˆθi, ˆθ−i), tc) < ˆvyc + √k · eyc((ˆθi, ˆθ−i), tc). Thus, moving to case II, job y must replace some other job before tc. Since ˆry ≥ tp, the condition is that there must exist some t ∈ (tp, tc) such that ∃w ∈ Y ∪ {i}, (S((ˆθi, ˆθ−i), t) = w) ∧ (S((ˆθi′, ˆθ−i), t) = y). Since w ∈ Y would contradict the definition of tc, we know that w = i. That is, the job that y replaces must be i. By definition of the set Y, we know that ˆvy > ˆvi + √k · ei((ˆθi, ˆθ−i), ˆry). Thus, if ˆry ≤ t, then job i could not have executed instead of y in case I. On the other hand, if ˆry > t, then job y obviously could not execute at time t, contradicting the existence of such a time t.
Now consider an arbitrary job x ∈ X. We know that in case I job i has a higher priority than job x at time ts, or, formally, that ˆvx + √k · ex((ˆθi, ˆθ−i), ts) < ˆvi + √k · ei((ˆθi, ˆθ−i), ts). We also know that ˆvi + √k · ei((ˆθi, ˆθ−i), tc) < ˆvyc + √k · eyc((ˆθi, ˆθ−i), tc). Since delaying i's arrival will not affect the execution up to time ts, and since job x cannot execute instead of a job y ∈ Y at any time t ∈ (tp, tc] by definition of tc, the only way for job x's priority to increase before tc as we move from case I to II is to replace job i over the range (ts, tc]. Thus, an upper bound on job x's priority when agent i declares ˆθi′ is: ˆvx + √k · (ex((ˆθi, ˆθ−i), ts) + ei((ˆθi, ˆθ−i), tc) − ei((ˆθi, ˆθ−i), ts)) < ˆvi + √k · (ei((ˆθi, ˆθ−i), ts) + ei((ˆθi, ˆθ−i), tc) − ei((ˆθi, ˆθ−i), ts)) = ˆvi + √k · ei((ˆθi, ˆθ−i), tc) < ˆvyc + √k · eyc((ˆθi, ˆθ−i), tc). Thus, even at this upper bound, job yc would execute instead of job x at time tc. A similar argument applies to an arbitrary job z ∈ Z, starting at its release time ˆrz. Since the sets {i}, X, Y, Z partition the set of jobs released before ta, we have shown that no job could execute instead of job yc, contradicting the existence of tc, and completing the proof.
Lemma 3. In mechanism Γ1, the following condition holds
for all i, θi, ˆθ−i: ∀ ˆvi, ˆli ≥ li, ˆdi ≤ di,
ei(((ri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli =⇒ ei(((ri, di, li, ˆvi), ˆθ−i), di) ≥ li
Proof. Assume by contradiction there exists some
instantiation of the above variables such that job i is not
completed when li and di are truthfully declared, but is
completed for some pair of false declarations ˆli ≥ li and
ˆdi ≤ di.
Note that the only effect that ˆdi and ˆli have on the
execution of the algorithm is on whether or not i ∈ Avail.
Specifically, they affect the two conditions: (ei(ˆθ, t) < ˆli)
and (ei(ˆθ, t) + ˆdi − t ≥ ˆli). Because job i is completed when
ˆli and ˆdi are declared, the former condition (for
completion) must become false before the latter. Since truthfully
declaring li ≤ ˆli and di ≥ ˆdi will only make the former
condition become false earlier and the latter condition become
false later, the execution of the algorithm will not be
affected when moving to truthful declarations, and job i will
be completed, a contradiction.
We now use these two lemmas to show that the payment
for a completed job can only increase by falsely declaring
worse ˆli, ˆdi, and ˆri.
Lemma 4. In mechanism Γ1, the following condition holds
for all i, θi, ˆθ−i: ∀ ˆli ≥ li, ˆdi ≤ di, ˆri ≥ ri,
arg min_{vi ≥ 0} (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) ≥ arg min_{vi ≥ 0} (ei(((ri, di, li, vi), ˆθ−i), di) ≥ li)
Proof. Assume by contradiction that this condition does
not hold. This implies that there exists some value vi such
that the condition (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) holds,
but (ei(((ri, di, li, vi), ˆθ−i), di) ≥ li) does not. Applying
Lemmas 2 and 3: (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) =⇒
(ei(((ri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) =⇒
(ei(((ri, di, li, vi), ˆθ−i), di) ≥ li), a contradiction.
Finally, the following lemma tells us that the completion
of a job is monotonic in its declared value.
Lemma 5. In mechanism Γ1, the following condition holds
for all i, ˆθi, ˆθ−i: ∀ ˆvi′ ≥ ˆvi,
ei(((ˆri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli =⇒ ei(((ˆri, ˆdi, ˆli, ˆvi′), ˆθ−i), ˆdi) ≥ ˆli
The proof, by contradiction, of this lemma is omitted
because it is essentially identical to that of Lemma 2 for ˆri. In
case I, agent i declares (ˆri, ˆdi, ˆli, ˆvi′) and the job is not completed, while in case II it declares (ˆri, ˆdi, ˆli, ˆvi) and the job is completed. The analysis of the two cases then proceeds as before: the execution will not change up to time ts because the initial priority of job i decreases as we move from case I to II; and, as a result, there cannot be a change in the execution of a job other than i over the range (tp, ta].
We can now combine the lemmas to show that no
profitable deviation is possible.
Theorem 6. Mechanism Γ1 satisfies incentive
compatibility.
Proof. For an arbitrary agent i, we know that ˆri ≥ ri
and ˆli ≥ li hold by assumption. We also know that agent
i has no incentive to declare ˆdi > di, because job i would
never be returned before its true deadline. Then, because
the payment function is non-negative, agent i's utility could
not exceed zero. By IR, this is the minimum utility it would
achieve if it truthfully declared θi. Thus, we can restrict
consideration to ˆθi that satisfy ˆri ≥ ri, ˆli ≥ li, and ˆdi ≤ di.
Again using IR, we can further restrict consideration to ˆθi
that cause job i to be completed, since any other ˆθi yields a
utility of zero.
If truthful declaration of θi causes job i to be completed,
then by Lemma 4 any such false declaration ˆθi could not
decrease the payment of agent i. On the other hand, if
truthful declaration does not cause job i to be completed,
then declaring such a ˆθi will cause agent i to have negative
utility, since vi < arg min_{vi ≥ 0}(ei(((ri, di, li, vi), ˆθ−i), di) ≥ li) ≤ arg min_{vi ≥ 0}(ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) holds by
Lemmas 5 and 4, respectively.
4.2 Proof of Competitive Ratio
The proof of the competitive ratio, which makes use of
techniques adapted from those used in [15], is also broken
into lemmas. Having shown IC, we can assume truthful
declaration (ˆθ = θ). Since we have also shown IR, in order
to prove the competitive ratio it remains to bound the loss
of social welfare against Γoffline.
Denote by (1, 2, . . . , F) the sequence of jobs completed by
Γ1. Divide time into intervals If = (topen_f, tclose_f], one for each job f in this sequence. Set tclose_f to be the time at which job f is completed, and set topen_f = tclose_{f−1} for f ≥ 2, and topen_1 = 0 for f = 1. Also, let tbegin_f be the first time that the processor is not idle in interval If.
Lemma 7. For any interval If , the following inequality
holds: tclose_f − tbegin_f ≤ (1 + 1/√k) · vf
Proof. Interval If begins with a (possibly zero length)
period of time in which the processor is idle because there is
no available job. Then, it continuously executes a sequence
of jobs (1, 2, . . . , c), where each job i in this sequence is
preempted by job i + 1, except for job c, which is completed
(thus, job c in this sequence is the same as job f in the global
sequence of completed jobs). Let ts_i be the time that job i begins execution. Note that ts_1 = tbegin_f.
Over the range [tbegin_f, tclose_f], the priority (vi + √k · ei(θ, t))
of the active job is monotonically increasing with time,
because this function linearly increases while a job is active,
and can only increase at a point in time when preemption
occurs. Thus, each job i > 1 in this sequence begins
execution at its release time (that is, ts_i = ri), because its priority
does not increase while it is not active.
We now show that the value of the completed job c exceeds the product of √k and the time spent in the interval on jobs 1 through c − 1, or, more formally, that the following condition holds: vc ≥ √k · Σ_{h=1}^{c−1} (eh(θ, ts_{h+1}) − eh(θ, ts_h)). To show this, we will prove by induction that the stronger condition vi ≥ √k · Σ_{h=1}^{i−1} eh(θ, ts_{h+1}) holds for all jobs i in the sequence.
Base Case: For i = 1, v1 ≥ √k · Σ_{h=1}^{0} eh(θ, ts_{h+1}) = 0,
since the sum is over zero elements.
Inductive Step: For an arbitrary 1 ≤ i < c, we assume
that vi ≥ √k · Σ_{h=1}^{i−1} eh(θ, ts_{h+1}) holds. At time ts_{i+1}, we know that vi+1 ≥ vi + √k · ei(θ, ts_{i+1}) holds, because ts_{i+1} = ri+1. These two inequalities together imply that vi+1 ≥ √k · Σ_{h=1}^{i} eh(θ, ts_{h+1}), completing the inductive step.
We also know that tclose_f − ts_c ≤ lc ≤ vc must hold, by the simplifying normalization of ρmin = 1 and the fact that job c's execution time cannot exceed its length. We can thus bound the total execution time of If by: tclose_f − tbegin_f = (tclose_f − ts_c) + Σ_{h=1}^{c−1} (eh(θ, ts_{h+1}) − eh(θ, ts_h)) ≤ (1 + 1/√k) · vf.
We now consider the possible execution of uncompleted
jobs by Γoffline. Associate each job i that is not completed
by Γ1 with the interval during which it was abandoned. All
jobs are now associated with an interval, since there are no
gaps between the intervals, and since no job i can be
abandoned after the close of the last interval at tclose_F. Because the processor is idle after tclose_F, any such job i would become active at some time t ≥ tclose_F, which would lead to the
completion of some job, creating a new interval and
contradicting the fact that IF is the last one.
The following lemma is equivalent to Lemma 5.6 of [15],
but the proof is different for our mechanism.
Lemma 8. For any interval If and any job i abandoned
in If, the following inequality holds: vi ≤ (1 + √k) · vf.
Proof. Assume by contradiction that there exists a job
i abandoned in If such that vi > (1 + √k)vf. At tclose_f, the priority of job f is vf + √k · lf < (1 + √k)vf. Because the priority of the active job monotonically increases over the range [tbegin_f, tclose_f], job i would have a higher priority than the active job (and thus begin execution) at some time t ∈ [tbegin_f, tclose_f]. Again applying monotonicity, this would imply that the priority of the active job at tclose_f exceeds (1 + √k)vf, contradicting the fact that it is less than (1 + √k)vf.
As in [15], for each interval If , we give Γoffline the
following gift: k times the amount of time in the range
[tbegin_f, tclose_f] that it does not schedule a job. Additionally,
we give the adversary vf , since the adversary may be able
to complete this job at some future time, due to the fact
that Γ1 ignores deadlines. The following lemma is Lemma
5.10 in [15], and its proof now applies directly.
Lemma 9. [15] With the above gifts the total net gain
obtained by the clairvoyant algorithm from scheduling the jobs
abandoned during If is not greater than (1 + √k) · vf.
The intuition behind this lemma is that the best that
the adversary can do is to take almost all of the gift of
k · (tclose_f − tbegin_f) (intuitively, this is equivalent to executing jobs with the maximum possible value density over the time that Γ1 is active), and then begin execution of a job abandoned by Γ1 right before tclose_f. By Lemma 8, the value of this job is bounded by (1 + √k) · vf. We can now combine
the results of these lemmas to prove the competitive ratio.
Theorem 10. Mechanism Γ1 is ((1 + √k)^2 + 1)-competitive.
Proof. Using the fact that the way in which jobs are
associated with the intervals partitions the entire set of jobs,
we can show the competitive ratio by showing that Γ1 is
((1 + √k)^2 + 1)-competitive for each interval in the sequence (1, . . . , F). Over an arbitrary interval If, the offline algorithm can achieve at most (tclose_f − tbegin_f) · k + vf + (1 + √k)vf, from the two gifts and the net gain bounded by Lemma 9. Applying Lemma 7, this quantity is then bounded from above by (1 + 1/√k) · vf · k + vf + (1 + √k)vf = ((1 + √k)^2 + 1) · vf.
Since Γ1 achieves vf , the competitive ratio holds.
4.3 Special Case: Unalterable Length and k = 1
While so far we have allowed each agent to lie about all
four characteristics of its job, lying about the length of the
job is not possible in some settings. For example, a user
may not know how to alter a computational problem in a
way that both lengthens the job and allows the solution of
the original problem to be extracted from the solution to
the altered problem. Another restriction that is natural in
some settings is uniform value densities (k = 1), which was
the case considered by [4]. If the setting satisfies these two
conditions, then, by using Mechanism Γ2, we can achieve a
competitive ratio of 5 (which is the same competitive ratio
as Γ1 for the case of k = 1) without knowledge of ρmin and
without the use of payments. The latter property may be
necessary in settings that are more local than grid
computing (e.g., within a department) but in which the users are
still self-interested.7
Mechanism 2 Γ2
Execute S(ˆθ, ·) according to Algorithm 2
for all i do
  pi(ˆθ) ← 0

Algorithm 2
for all t do
  Avail ← {i | (t ≥ ˆri) ∧ (ei(ˆθ, t) < li) ∧ (ei(ˆθ, t) + ˆdi − t ≥ li)}
  if Avail ≠ ∅ then
    S(ˆθ, t) ← arg max_{i ∈ Avail} (li + ei(ˆθ, t))
    {Break ties in favor of lower ˆri}
  else
    S(ˆθ, t) ← 0
Theorem 11. When k = 1, and each agent i cannot
falsely declare li, Mechanism Γ2 satisfies individual
rationality and incentive compatibility.
Theorem 12. When k = 1, and each agent i cannot
falsely declare li, Mechanism Γ2 is 5-competitive.
Since this mechanism is essentially a simplification of Γ1,
we omit proofs of these theorems. Basically, the fact that
k = 1 and ˆli = li both hold allows Γ2 to substitute the
priority (li + ei(ˆθ, t)) for the priority used in Γ1; and, since ˆvi
is ignored, payments are no longer needed to ensure incentive
compatibility.
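
In code, the change from Algorithm 1 to Algorithm 2 amounts to swapping one priority function (a sketch under the same illustrative conventions as the Algorithm 1 snippet above; job is an (r, d, l, v) tuple):

import math

def priority_gamma1(job, elapsed, k, rho_min=1.0):
    r, d, l, v = job
    return v + math.sqrt(k) * elapsed * rho_min   # Algorithm 1's priority

def priority_gamma2(job, elapsed):
    r, d, l, v = job
    return l + elapsed   # Algorithm 2: the declared value v is never consulted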
5. COMPETITIVE LOWER BOUND
We now show that the competitive ratio of (1 + √k)^2 + 1 achieved by Γ1 is a lower bound for deterministic online mechanisms. To do so, we will appeal to a third requirement on a mechanism, non-negative payments (NNP), which requires that the center never pays an agent (formally, ∀i, ˆθ, pi(ˆθ) ≥ 0). Unlike IC and IR, this requirement is not
standard in mechanism design. We note, however, that both
Γ1 and Γ2 satisfy it trivially, and that, in the following proof,
zero only serves as a baseline utility for an agent, and could
be replaced by any non-positive function of ˆθ−i.
The proof of the lower bound uses an adversary argument
similar to that used in [4] to show a lower bound of (1 + √k)^2 in the non-strategic setting, with the main novelty
lying in the perturbation of the job sequence and the related
incentive compatibility arguments. We first present a lemma
relating to the recurrence used for this argument, with the
proof omitted due to space constraints.
Lemma 13. For any k ≥ 1, for the recurrence defined by
li+1 = λ · li − k · Σ_{h=1}^{i} lh and l1 = 1, where (1 + √k)^2 − 1 < λ < (1 + √k)^2, there exists an integer m ≥ 1 such that (lm + k · Σ_{h=1}^{m−1} lh) / lm > λ.
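
Lemma 13 can be explored numerically; the following script of ours (the helper find_m is hypothetical) iterates the recurrence and reports the first index m satisfying the lemma's inequality:

import math

def find_m(k, lam, max_iter=10000):
    # l_{i+1} = lam * l_i - k * sum_{h<=i} l_h, with l_1 = 1; return the
    # first m such that (l_m + k * sum_{h<m} l_h) / l_m > lam
    ls = [1.0]
    for _ in range(max_iter):
        m = len(ls)
        if ls[-1] > 0 and (ls[-1] + k * sum(ls[:-1])) / ls[-1] > lam:
            return m
        ls.append(lam * ls[-1] - k * sum(ls))
    return None

k = 1.0
lam = (1 + math.sqrt(k))**2 - 0.1   # inside the interval required by the lemma
print(find_m(k, lam))               # a finite m, as the lemma guarantees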
7 While payments are not required in this setting, Γ2 can be changed to collect payments without affecting incentive
compatibility by charging some fixed fraction of li for each
job i that is completed.
Theorem 14. There does not exist a deterministic online
mechanism that satisfies NNP and that achieves a
competitive ratio less than (1 + √k)^2 + 1.
Proof. Assume by contradiction that there exists a
deterministic online mechanism Γ that satisfies NNP and that
achieves a competitive ratio of c = (1 + √k)^2 + 1 − ε for some ε > 0 (and, by implication, satisfies IC and IR as
well). Since a competitive ratio of c implies a competitive
ratio of c + x, for any x > 0, we assume without loss of
generality that ε < 1. First, we will construct a profile of
agent types θ using an adversary argument. After possibly
slightly perturbing θ to assure that a strictness property is
satisfied, we will then use a more significant perturbation of
θ to reach a contradiction.
We now construct the original profile θ. Pick an α such
that 0 < α < ε, and define δ = α / (ck + 3k). The adversary uses
two sequences of jobs: minor and major. Minor jobs i are
characterized by li = δ, vi = k · δ, and zero laxity. The first
minor job is released at time 0, and ri = di−1 for all i > 1.
The sequence stops whenever Γ completes any job.
Major jobs also have zero laxity, but they have the
smallest possible value ratio (that is, vi = li). The lengths of the
major jobs that may be released, starting with i = 1, are
determined by the following recurrence relation.
li+1 = (c − 1 + α) · li − k · Σ_{h=1}^{i} lh,   l1 = 1
The bounds on α imply that (1 + √k)^2 − 1 < c − 1 + α < (1 + √k)^2, which allows us to apply Lemma 13. Let m be the smallest positive number such that (lm + k · Σ_{h=1}^{m−1} lh) / lm > c − 1 + α.
The first major job has a release time of 0, and each major
job i > 1 has a release time of ri = di−1 − δ, just before
the deadline of the previous job. The adversary releases
major job i ≤ m if and only if each major job j < i was executed continuously over the range [rj, rj+1]. No major
job is released after job m.
In order to achieve the desired competitive ratio, Γ must
complete some major job f, because Γoffline can always at
least complete major job 1 (for a value of 1), and Γ can
complete at most one minor job (for a value of k · δ = α/(c + 3) < 1/c).
Also, in order for this job f to be released, the processor
time preceding rf can only be spent executing major jobs
that are later abandoned. If f < m, then major job f + 1
will be released and it will be the final major job. Γ cannot
complete job f +1, because rf +lf = df > rf+1. Therefore,
θ consists of major jobs 1 through f + 1 (or, f, if f = m),
plus minor jobs from time 0 through time df .
We now possibly perturb θ slightly. By IR, we know
that vf ≥ pf (θ). Since we will later need this inequality
to be strict, if vf = pf(θ), then change θf to θf′, where rf′ = rf, but vf′, lf′, and df′ are all incremented by δ over their respective values in θf. By IC, job f must still be completed by Γ for the profile (θf′, θ−f). If not, then by IR and NNP we know that pf(θf′, θ−f) = 0, and thus that uf(g(θf′, θ−f), θf′) = 0. However, agent f could then increase its utility by falsely declaring the original type of θf, receiving a utility of: uf(g(θf, θ−f), θf′) = vf′ − pf(θ) = δ > 0, violating IC. Furthermore, agent f must be charged the same amount (that is, pf(θf′, θ−f) = pf(θ)), due to a similar
incentive compatibility argument. Thus, for the remainder of
the proof, assume that vf > pf (θ).
We now use a more substantial perturbation of θ to
complete the proof. If f < m, then define θf′ to be identical to θf, except that df′ = df+1 + lf, allowing job f to be completely executed after job f + 1 is completed. If f = m, then instead set df′ = df + lf. IC requires that for the profile (θf′, θ−f), Γ still executes job f continuously over the range [rf, rf + lf], thus preventing job f + 1 from being completed. Assume by contradiction that this were not true. Then, at the original deadline df, job f is not completed. Consider the possible profile (θf′, θ−f, θx), which differs from the new profile only in the addition of a job x which has zero laxity, rx = df, and vx = lx = max(df′ − df, (c + 1) · (lf + lf+1)). Because this new profile is indistinguishable from (θf′, θ−f) to Γ before time df, it must schedule jobs in the same way until df. Then, in order to achieve the desired competitive ratio, it must execute job x continuously until its deadline, which is by construction at least as late as the new deadline df′ of job f. Thus, job f will not be completed, and, by IR and NNP, it must be the case that pf(θf′, θ−f, θx) = 0 and uf(g(θf′, θ−f, θx), θf′) = 0. Using the fact that θ is indistinguishable from (θf′, θ−f, θx) up to time df, if agent f falsely declared its type to be the original θf, then its job would be completed by df and it would be charged pf(θ). Its utility would then increase to uf(g(θf, θ−f, θx), θf′) = vf − pf(θ) > 0, contradicting IC.
While Γ's execution must be identical for both (θf, θ−f) and (θf′, θ−f), Γoffline can take advantage of the change. If f < m, then Γ achieves a value of at most lf + δ (the value of job f if it were perturbed), while Γoffline achieves a value of at least k · (Σ_{h=1}^{f} lh − 2δ) + lf+1 + lf by executing minor jobs until rf+1, followed by job f + 1 and then job f (we subtract two δ's instead of one because the last minor job before rf+1 may have to be abandoned). Substituting in for lf+1, the competitive ratio is then at least:
[k · (Σ_{h=1}^{f} lh − 2δ) + lf+1 + lf] / (lf + δ)
= [k · (Σ_{h=1}^{f} lh) − 2k·δ + (c − 1 + α) · lf − k · (Σ_{h=1}^{f} lh) + lf] / (lf + δ)
= [c · lf + (α · lf − 2k·δ)] / (lf + δ)
≥ [c · lf + ((ck + 3k)δ − 2k·δ)] / (lf + δ)
> c.
If instead f = m, then Γ achieves a value of at most lm +δ,
while Γoffline achieves a value of at least k · (Σ_{h=1}^{m} lh − 2δ) + lm by completing minor jobs until dm = rm + lm, and then completing job m. The competitive ratio is then at least:
[k · (Σ_{h=1}^{m} lh − 2δ) + lm] / (lm + δ)
= [k · (Σ_{h=1}^{m−1} lh) − 2k·δ + k·lm + lm] / (lm + δ)
> [(c − 1 + α) · lm − 2k·δ + k·lm] / (lm + δ)
= [(c + k − 1) · lm + (α·lm − 2k·δ)] / (lm + δ)
> c.
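
These closing inequalities can be spot-checked numerically; in the following script of ours, the particular k, ε, and α are arbitrary choices satisfying the proof's constraints:

import math

k = 2.0
eps = 0.5
c = (1 + math.sqrt(k))**2 + 1 - eps
alpha = 0.25                        # any 0 < alpha < eps
delta = alpha / (c * k + 3 * k)

lam = c - 1 + alpha                 # coefficient of the major-job recurrence
l = [1.0]                           # l[0] is l_1
for _ in range(3):
    l.append(lam * l[-1] - k * sum(l))

# Case f < m: the offline mechanism runs minor jobs, then jobs f+1 and f
for f in range(1, len(l)):          # paper's job f corresponds to l[f-1]
    lf, lf1 = l[f - 1], l[f]
    offline = k * (sum(l[:f]) - 2 * delta) + lf1 + lf
    online = lf + delta
    print(offline / online > c)     # True for each f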
6. RELATED WORK
In this section we describe related work other than the
two papers ([4] and [15]) on which this paper is based.
Recent work related to this scheduling domain has focused on
competitive analysis in which the online algorithm uses a
faster processor than the offline algorithm (see, e.g., [13,
14]). Mechanism design was also applied to a scheduling
problem in [18]. In their model, the center owns the jobs
in an offline setting, and it is the agents who can execute
them. The private information of an agent is the time it will
require to execute each job. Several incentive compatible
mechanisms are presented that are based on approximation
algorithms for the computationally infeasible optimization
problem. This paper also launched the area of
algorithmic mechanism design, in which the mechanism must
satisfy computational requirements in addition to the standard
incentive requirements. A growing sub-field in this area is
multicast cost-sharing mechanism design (see, e.g., [1]), in
which the mechanism must efficiently determine, for each
agent in a multicast tree, whether the agent receives the
transmission and the price it must pay. For a survey of
this and other topics in distributed algorithmic mechanism
design, see [9].
Online execution presents a different type of algorithmic
challenge, and several other papers study online algorithms
or mechanisms in economic settings. For example, [5]
considers an online market clearing setting, in which the
auctioneer matches buy and sell bids (which are assumed to be
exogenous) that arrive and expire over time. In [2], a
general method is presented for converting an online algorithm
into an online mechanism that is incentive compatible with
respect to values. Truthful declaration of values is also
considered in [3] and [16], which both consider multi-unit online
auctions. The main difference between the two is that the
former considers the case of a digital good, which thus has
unlimited supply. It is pointed out in [16] that their results
continue to hold when the setting is extended so that bidders
can delay their arrival.
The only other paper we are aware of that addresses the
issue of incentive compatibility in a real-time system is [11],
which considers several variants of a model in which the
center allocates bandwidth to agents who declare both their
value and their arrival time. A dominant strategy IC
mechanism is presented for the variant in which every point in time
is essentially independent, while a Bayes-Nash IC
mechanism is presented for the variant in which the center's
current decision affects the cost of future actions.
7. CONCLUSION
In this paper, we considered an online scheduling domain
for which algorithms with the best possible competitive
ratio had been found, but for which new solutions were
required when the setting is extended to include self-interested
agents. We presented a mechanism that is incentive
compatible with respect to release time, deadline, length and value,
and that only increases the competitive ratio by one. We
also showed how this mechanism could be simplified when
k = 1 and each agent cannot lie about the length of its job.
We then showed a matching lower bound on the
competitive ratio that can be achieved by a deterministic mechanism
that never pays the agents.
Several open problems remain in this setting. One is to
determine whether the lower bound can be strengthened by
removing the restriction of non-negative payments. Also,
while we feel that it is reasonable to strengthen the
assumption of knowing the maximum possible ratio of value
densities (k) to knowing the actual range of possible value
densities, it would be interesting to determine whether there
exists a ((1 + √k)² + 1)-competitive mechanism under the
original assumption. Finally, randomized mechanisms
provide an unexplored area for future work.
8. REFERENCES
[1] A. Archer, J. Feigenbaum, A. Krishnamurthy,
R. Sami, and S. Shenker, Approximation and collusion
in multicast cost sharing, Games and Economic
Behavior (to appear).
[2] B. Awerbuch, Y. Azar, and A. Meyerson, Reducing
truth-telling online mechanisms to online
optimization, Proceedings on the 35th Symposium on
the Theory of Computing, 2003.
[3] Z. Bar-Yossef, K. Hildrum, and F. Wu,
Incentive-compatible online auctions for digital goods,
Proceedings of the 13th Annual ACM-SIAM
Symposium on Discrete Algorithms, 2002.
[4] S. Baruah, G. Koren, D. Mao, B. Mishra,
A. Raghunathan, L. Rosier, D. Shasha, and F. Wang,
On the competitiveness of on-line real-time task
scheduling, Journal of Real-Time Systems 4 (1992),
no. 2, 125-144.
[5] A. Blum, T. Sandholm, and M. Zinkevich, Online
algorithms for market clearing, Proceedings of the
13th Annual ACM-SIAM Symposium on Discrete
Algorithms, 2002.
[6] A. Borodin and R. El-Yaniv, Online computation and
competitive analysis, Cambridge University Press,
1998.
[7] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger,
Economic models for resource management and
scheduling in grid computing, The Journal of
Concurrency and Computation: Practice and
Experience 14 (2002), 1507-1542.
[8] N. Camiel, S. London, N. Nisan, and O. Regev, The
popcorn project: Distributed computation over the
internet in Java, 6th International World Wide Web
Conference, 1997.
[9] J. Feigenbaum and S. Shenker, Distributed algorithmic
mechanism design: Recent results and future
directions, Proceedings of the 6th International
Workshop on Discrete Algorithms and Methods for
Mobile Computing and Communications, 2002,
pp. 1-13.
[10] A. Fiat and G. Woeginger (editors), Online
algorithms: The state of the art, Springer Verlag, 1998.
[11] E. Friedman and D. Parkes, Pricing WiFi at Starbucks:
issues in online mechanism design, EC '03, 2003.
[12] R. L. Graham, Bounds for certain multiprocessor
anomalies, Bell System Technical Journal 45 (1966),
1563-1581.
[13] B. Kalyanasundaram and K. Pruhs, Speed is as
powerful as clairvoyance, Journal of the ACM 47
(2000), 617-643.
[14] C. Koo, T. Lam, T. Ngan, and K. To, On-line
scheduling with tight deadlines, Theoretical Computer
Science 295 (2003), 251-261.
[15] G. Koren and D. Shasha, D-over: An optimal on-line
scheduling algorithm for overloaded real-time systems,
SIAM Journal of Computing 24 (1995), no. 2,
318-339.
[16] R. Lavi and N. Nisan, Competitive analysis of online
auctions, EC '00, 2000.
[17] A. Mas-Colell, M. Whinston, and J. Green,
Microeconomic theory, Oxford University Press, 1995.
[18] N. Nisan and A. Ronen, Algorithmic mechanism
design, Games and Economic Behavior 35 (2001),
166-196.
[19] C. Papadimitriou, Algorithms, games, and the
internet, STOC, 2001, pp. 749-753.
| deterministic mechanism;game theory;non-strategic setting;online algorithm;profitable deviation;monotonicity;online scheduling of job;competitive ratio;deadline;importance ratio;incentive compatibility;job online scheduling;individual rationality;zero laxity;quasi-linear function;schedule;deterministic algorithm;mechanism design |
train_J-69 | Robust Incentive Techniques for Peer-to-Peer Networks | Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation. | 1. INTRODUCTION
Many peer-to-peer (P2P) systems rely on cooperation among
self-interested users. For example, in a file-sharing system, overall
download latency and failure rate increase when users do not share
their resources [3]. In a wireless ad-hoc network, overall packet
latency and loss rate increase when nodes refuse to forward packets
on behalf of others [26]. Further examples are file preservation [25],
discussion boards [17], online auctions [16], and overlay routing
[6]. In many of these systems, users have natural disincentives to
cooperate because cooperation consumes their own resources and
may degrade their own performance. As a result, each user's
attempt to maximize her own utility effectively lowers the overall
[Figure 1: three nodes A, B, and C connected in a cycle of service requests.]
Figure 1: Example of asymmetry of interest. A wants service from B,
B wants service from C, and C wants service from A.
utility of the system. Avoiding this tragedy of the commons [18]
requires incentives for cooperation.
We adopt a game-theoretic approach in addressing this problem. In
particular, we use a prisoners' dilemma model to capture the
essential tension between individual and social utility, asymmetric
payoff matrices to allow asymmetric transactions between peers,
and a learning-based [14] population dynamic model to specify the
behavior of individual peers, which can change continuously.
While social dilemmas have been studied extensively, P2P
applications impose a unique set of challenges, including:
• Large populations and high turnover: A file sharing
system such as Gnutella and KaZaa can exceed 100,000
simultaneous users, and nodes can have an average life-time of the
order of minutes [33].
• Asymmetry of interest: Asymmetric transactions of P2P
systems create the possibility for asymmetry of interest. In
the example in Figure 1, A wants service from B, B wants
service from C, and C wants service from A.
• Zero-cost identity: Many P2P systems allow peers to
continuously switch identities (i.e., whitewash).
Strategies that work well in traditional prisoners' dilemma games,
such as Tit-for-Tat [4], will not fare well in the P2P context.
Therefore, we propose a family of scalable and robust incentive
techniques, based upon a novel Reciprocative decision function, to
address these challenges and provide different tradeoffs:
• Discriminating Server Selection: Cooperation requires
familiarity between entities either directly or indirectly.
However, the large populations and high turnover of P2P systems
makes it less likely that repeat interactions will occur with
a familiar entity. We show that by having each peer keep a
private history of the actions of other peers toward her, and
using discriminating server selection, the Reciprocative
decision function can scale to large populations and moderate
levels of turnover.
• Shared History: Scaling to higher turnover and mitigating
asymmetry of interest requires shared history. Consider the
example in Figure 1. If everyone provides service, then the
system operates optimally. However, if everyone keeps only
private history, no one will provide service because B does
not know that A has served C, etc. We show that with shared
history, B knows that A served C and consequently will serve
A. This results in a higher level of cooperation than with
private history. The cost of shared history is a distributed
infrastructure (e.g., distributed hash table-based storage) to store
the history.
• Maxflow-based Subjective Reputation: Shared history
creates the possibility for collusion. In the example in Figure 1,
C can falsely claim that A served him, thus deceiving B into
providing service. We show that a maxflow-based algorithm
that computes reputation subjectively promotes cooperation
despite collusion among 1/3 of the population. The basic idea
is that B would only believe C if C had already provided
service to B. The cost of the maxflow algorithm is its O(V 3
)
running time, where V is the number of nodes in the system.
To eliminate this cost, we have developed a constant mean
running time variation, which trades effectiveness for
complexity of computation. We show that the maxflow-based
algorithm scales better than private history in the presence of
colluders without the centralized trust required in previous
work [9] [20].
• Adaptive Stranger Policy: Zero-cost identities allow
non-cooperating peers to escape the consequences of not
cooperating and eventually destroy cooperation in the system if not
stopped. We show that if Reciprocative peers treat strangers
(peers with no history) using a policy that adapts to the
behavior of previous strangers, peers have little incentive to
whitewash and whitewashing can be nearly eliminated from
the system. The adaptive stranger policy does this without
requiring centralized allocation of identities, an entry fee for
newcomers, or rate-limiting [13] [9] [25].
• Short-term History: History also creates the possibility that
a previously well-behaved peer with a good reputation will
turn traitor and use his good reputation to exploit other peers.
The peer could be making a strategic decision or someone
may have hijacked her identity (e.g., by compromising her
host). Long-term history exacerbates this problem by
allowing peers with many previous transactions to exploit that
history for many new transactions. We show that short-term
history prevents traitors from disrupting cooperation.
The rest of the paper is organized as follows. We describe the model
in Section 2 and the reciprocative decision function in Section 3. We
then proceed to the incentive techniques in Section 4. In Section 4.1,
we describe the challenges of large populations and high turnover
and show the effectiveness of discriminating server selection and
shared history. In Section 4.2, we describe collusion and
demonstrate how subjective reputation mitigates it. In Section 4.3, we
present the problem of zero-cost identities and show how an
adaptive stranger policy promotes persistent identities. In Section 4.4,
we show how traitors disrupt cooperation and how short-term
history deals with them. We discuss related work in Section 5 and
conclude in Section 6.
2. MODEL AND ASSUMPTIONS
In this section, we present our assumptions about P2P systems and
their users, and introduce a model that aims to capture the behavior
of users in a P2P system.
2.1 Assumptions
We assume a P2P system in which users are strategic, i.e., they
act rationally to maximize their benefit. However, to capture some
of the real-life unpredictability in the behavior of users, we allow
users to randomly change their behavior with a low probability (see
Section 2.4).
For simplicity, we assume a homogeneous system in which all peers
issue and satisfy requests at the same rate. A peer can satisfy any
request, and, unless otherwise specified, peers request service
uniformly at random from the population.1 Finally, we assume that all
transactions incur the same cost to all servers and provide the same
benefit to all clients.
We assume that users can pollute shared history with false
recommendations (Section 4.2), switch identities at zero-cost
(Section 4.3), and spoof other users (Section 4.4). We do not assume
any centralized trust or centralized infrastructure.
2.2 Model
To aid the development and study of the incentive schemes, in this
section we present a model of the users" behaviors. In particular,
we model the benefits and costs of P2P interactions (the game) and
population dynamics caused by mutation, learning, and turnover.
Our model is designed to have the following properties that
characterize a large set of P2P systems:
• Social Dilemma: Universal cooperation should result in
optimal overall utility, but individuals who exploit the
cooperation of others while not cooperating themselves (i.e.,
defecting) should benefit more than users who do cooperate.
• Asymmetric Transactions: A peer may want service from
another peer while not currently being able to provide the
service that the second peer wants. Transactions should be
able to have asymmetric payoffs.
• Untraceable Defections: A peer should not be able to
determine the identity of peers who have defected on her. This
models the difficulty or expense of determining that a peer
could have provided a service, but didn't. For example, in the
Gnutella file sharing system [21], a peer may simply ignore
queries despite possessing the desired file, thus preventing
the querying peer from identifying the defecting peer.
• Dynamic Population: Peers should be able to change their
behavior and enter or leave the system independently and
continuously.
1 The exception is discussed in Section 4.1.1.
[Figure 2: a 2x2 payoff matrix. The client (columns) and server (rows) each
choose Cooperate or Defect; the cells list the client/server payoffs Rc/Rs,
Sc/Ts, Tc/Ss, and Pc/Ps.]
Figure 2: Payoff matrix for the Generalized Prisoner's Dilemma. T, R,
P, and S stand for temptation, reward, punishment and sucker,
respectively.
2.3 Generalized Prisoner's Dilemma
The Prisoner's Dilemma, developed by Flood, Dresher, and Tucker
in 1950 [22], is a non-cooperative repeated game satisfying the
social dilemma requirement. Each game consists of two players who
can defect or cooperate. Depending how each acts, the players
receive a payoff. The players use a strategy to decide how to act.
Unfortunately, existing work either uses a specific asymmetric payoff
matrix or only gives the general form for a symmetric one [4].
Instead, we use the Generalized Prisoner's Dilemma (GPD), which
specifies the general form for an asymmetric payoff matrix that
preserves the social dilemma. In the GPD, one player is the client and
one player is the server in each game, and it is only the decision
of the server that is meaningful for determining the outcome of the
transaction. A player can be a client in one game and a server in
another. The client and server receive the payoff from a generalized
payoff matrix (Figure 2). Rc, Sc, Tc, and Pc are the client's payoff
and Rs, Ss, Ts, and Ps are the server's payoff. A GPD payoff
matrix must have the following properties to create a social dilemma:
1. Mutual cooperation leads to higher payoffs than mutual
defection (Rs + Rc > Ps + Pc).
2. Mutual cooperation leads to higher payoffs than one player
suckering the other (Rs + Rc > Sc + Ts and Rs + Rc >
Ss + Tc).
3. Defection dominates cooperation (at least weakly) at the
individual level for the entity who decides whether to
cooperate or defect: (Ts ≥ Rs and Ps ≥ Ss and (Ts > Rs or
Ps > Ss))
The last set of inequalities assumes that clients do not incur a cost
regardless of whether they cooperate or defect, and therefore clients
always cooperate. These properties correspond to similar properties
of the classic Prisoner's Dilemma and allow any form of
asymmetric transaction while still creating a social dilemma.
Furthermore, one or more of the four possible actions (client
cooperate and defect, and server cooperate and defect) can be
untraceable. If one player makes an untraceable action, the other player
does not know the identity of the first player.
For example, to model a P2P application like file sharing or
overlay routing, we use the specific payoff matrix values shown in
Figure 3. This satisfies the inequalities specified above, where only the
server can choose between cooperating and defecting. In addition,
for this particular payoff matrix, clients are unable to trace server
defections. This is the payoff matrix that we use in our simulation
results.
[Figure 3: a 2x2 payoff matrix. The client chooses Request Service or
Don't Request; the server chooses Provide Service or Ignore Request.
Request + Provide pays 7 / -1 (client / server); all other outcomes
pay 0 / 0.]
Figure 3: The payoff matrix for an application like P2P file sharing or
overlay routing.
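
To make the social-dilemma conditions concrete, the following small Python sketch (our illustration, not part of the original paper; the function name and argument layout are our own) checks the three GPD properties of Section 2.3 against the file-sharing payoffs of Figure 3:

def is_social_dilemma(Rc, Rs, Sc, Ss, Tc, Ts, Pc, Ps):
    # Property 1: mutual cooperation beats mutual defection.
    p1 = Rs + Rc > Ps + Pc
    # Property 2: mutual cooperation beats one player suckering the other.
    p2 = (Rs + Rc > Sc + Ts) and (Rs + Rc > Ss + Tc)
    # Property 3: defection (at least weakly) dominates for the server.
    p3 = Ts >= Rs and Ps >= Ss and (Ts > Rs or Ps > Ss)
    return p1 and p2 and p3

# Figure 3 values: providing service costs the server 1 and benefits the
# client 7; all other outcomes pay 0 / 0.
print(is_social_dilemma(Rc=7, Rs=-1, Sc=0, Ss=0, Tc=0, Ts=0, Pc=0, Ps=0))  # True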
2.4 Population Dynamics
A characteristic of P2P systems is that peers change their
behavior and enter or leave the system independently and continuously.
Several studies [4] [28] of repeated Prisoner's Dilemma games use
an evolutionary model [19] [34] of population dynamics. An
evolutionary model is not suitable for P2P systems because it only
specifies the global behavior and all changes occur at discrete times.
For example, it may specify that a population of five 100%
Cooperate players and five 100% Defect players evolves into a population
with 3 and 7 players, respectively. It does not specify which specific
players switched. Furthermore, all the switching occurs at the end
of a generation instead of continuously, as in a real P2P system. As
a result, evolutionary population dynamics do not accurately model
turnover, traitors, and strangers.
In our model, entities take independent and continuous actions that
change the composition of the population. Time consists of rounds.
In each round, every player plays one game as a client and one game
as a server. At the end of a round, a player may: 1) mutate 2) learn,
3) turnover, or 4) stay the same. If a player mutates, she switches to
a randomly picked strategy. If she learns, she switches to a strategy
that she believes will produce a higher score (described in more
detail below). If she maintains her identity after switching strategies,
then she is referred to as a traitor. If a player suffers turnover, she
leaves the system and is replaced with a newcomer who uses the
same strategy as the exiting player.
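
As a rough sketch of this round structure (our Python illustration; the player methods are hypothetical stand-ins for the mechanics described in this section, and the probabilities follow the nominal values of Table 1):

import random

def end_of_round(players, strategies,
                 p_mutate=0.0, p_learn=0.05, p_turnover=0.0001):
    for player in players:
        roll = random.random()
        if roll < p_mutate:
            # Mutate: switch to a randomly picked strategy.
            player.strategy = random.choice(strategies)
        elif roll < p_mutate + p_learn:
            # Learn: possibly switch to a higher-rated strategy; a player
            # who keeps her identity after switching is a traitor.
            player.consider_switching()
        elif roll < p_mutate + p_learn + p_turnover:
            # Turnover: replaced by a newcomer using the same strategy.
            player.replace_with_newcomer()
        # Otherwise: stay the same.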
To learn, a player collects local information about the performance
of different strategies. This information consists of both her
personal observations of strategy performance and the observations of
those players she interacts with. This models users communicating
out-of-band about how strategies perform. Let s be the running
average of the performance of a player's current strategy per round
and age be the number of rounds she has been using the strategy. A
strategy's rating is

    RunningAverage(s ∗ age) / RunningAverage(age).
We weight by age and compute the running averages before taking the ratio to
prevent young samples (which are more likely to be outliers) from
skewing the rating. At the end of a round, a player switches to the
highest rated strategy with a probability proportional to the difference
in score between her current strategy and the highest rated strategy.
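
A minimal sketch of this rating rule (ours; the paper does not specify how the running averages are maintained, so the exponential moving average and its decay factor are our assumptions):

class StrategyRating:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.avg_weighted = 0.0   # RunningAverage(s * age)
        self.avg_age = 0.0        # RunningAverage(age)

    def record(self, s, age):
        # s: per-round score of the current strategy; age: rounds in use.
        self.avg_weighted = (self.decay * self.avg_weighted
                             + (1 - self.decay) * s * age)
        self.avg_age = self.decay * self.avg_age + (1 - self.decay) * age

    def rating(self):
        # Weighting by age before the ratio keeps young, noisy samples
        # from skewing the rating.
        return self.avg_weighted / self.avg_age if self.avg_age > 0 else 0.0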
3. RECIPROCATIVE DECISION
FUNCTION
In this section, we present the new decision function, Reciprocative,
that is the basis for our incentive techniques. A decision function
maps from a history of a player's actions to a decision whether to
cooperate with or defect on that player. A strategy consists of a
decision function, private or shared history, a server selection
mechanism, and a stranger policy. Our approach to incentives is to
design strategies which maximize both individual and social benefit.
Strategic users will choose to use such strategies and thereby drive
the system to high levels of cooperation. Two examples of
simple decision functions are 100% Cooperate and 100% Defect.
100% Cooperate models a naive user who does not yet realize
that she is being exploited. 100% Defect models a greedy user
who is intent on exploiting the system. In the absence of incentive
techniques, 100% Defect users will quickly dominate the 100%
Cooperate users and destroy cooperation in the system.
Our requirements for a decision function are that (1) it can use
shared and subjective history, (2) it can deal with untraceable
defections, and (3) it is robust against different patterns of defection.
Previous decision functions such as Tit-for-Tat[4] and Image[28]
(see Section 5) do not satisfy these criteria. For example, Tit-for-Tat
and Image base their decisions on both cooperations and defections, and
therefore cannot deal with untraceable defections. In this section
and the remaining sections we demonstrate how the
Reciprocativebased strategies satisfy all of the requirements stated above.
The probability that a Reciprocative player cooperates with a peer
is a function of its normalized generosity. Generosity measures the
benefit an entity has provided relative to the benefit it has
consumed. This is important because entities which consume more
services than they provide, even if they provide many services, will
cause cooperation to collapse. For some entity i, let pi and ci be the
services i has provided and consumed, respectively. Entity i's
generosity is simply the ratio of the service it provides to the service it
consumes:
g(i) = pi/ci. (1)
One possibility is to cooperate with a probability equal to the
generosity. Although this is effective in some cases, in other cases, a
Reciprocative player may consume more than she provides (e.g.,
when initially using the Stranger Defect policy in Section 4.3). This will
cause Reciprocative players to defect on each other. To prevent this
situation, a Reciprocative player uses its own generosity as a
measuring stick to judge its peer's generosity. Normalized generosity
measures entity i's generosity relative to entity j's generosity. More
concretely, entity i's normalized generosity as perceived by entity
j is
gj(i) = g(i)/g(j). (2)
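
Both quantities translate directly into code. In this sketch (ours), the cap at 1.0 when turning normalized generosity into a cooperation probability, and the handling of entities that have consumed nothing yet, are our assumptions:

def generosity(provided, consumed):
    # g(i) = p_i / c_i (Equation 1).
    return provided / consumed if consumed > 0 else float('inf')

def cooperation_probability(peer_p, peer_c, own_p, own_c):
    g_peer = generosity(peer_p, peer_c)      # g(i)
    g_own = generosity(own_p, own_c)         # g(j)
    if g_own == 0 or g_own == float('inf'):
        return 0.0                           # our convention for degenerate cases
    return min(g_peer / g_own, 1.0)          # g_j(i), capped to a probability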
In the remainder of this section, we describe our simulation
framework, and use it to demonstrate the benefits of the baseline
Reciprocative decision function.
Parameter                     Nominal value   Section
Population Size               100             2.4
Run Time                      1000 rounds     2.4
Payoff Matrix                 File Sharing    2.3
Ratio using 100% Cooperate    1/3             3
Ratio using 100% Defect       1/3             3
Ratio using Reciprocative     1/3             3
Mutation Probability          0.0             2.4
Learning Probability          0.05            2.4
Turnover Probability          0.0001          2.4
Hit Rate                      1.0             4.1.1
Table 1: Default simulation parameters.
3.1 Simulation Framework
Our simulator implements the model described in Section 2. We use
the asymmetric file sharing payoff matrix (Figure 3) with
untraceable defections because it models transactions in many P2P
systems like file-sharing and packet forwarding in ad-hoc and overlay
networks. Our simulation study is composed of different scenarios
reflecting the challenges of various non-cooperative behaviors.
Table 1 presents the nominal parameter values used in our simulation.
The "Ratio using" rows refer to the initial ratio of the total
population using a particular strategy. In each scenario we vary the value
range of a specific parameter to reflect a particular situation or
attack. We then vary the exact properties of the Reciprocative strategy
to defend against that situation or attack.
3.2 Baseline Results
[Figure 4: two panels, (a) Total Population: 60 and (b) Total Population: 120,
plotting Population (y-axis, 0-120) against Time (x-axis, 0-1000) for the
Defector, Cooperator, and Recip. Private strategies.]
Figure 4: The evolution of strategy populations over time. Time is the
number of elapsed rounds. Population is the number of players using
a strategy.
In this section, we present the dynamics of the game for the
basic scenario presented in Table 1 to familiarize the reader and set
a baseline for more complicated scenarios. Figures 4(a) (60
players) and (b) (120 players) show players switching to higher
scoring strategies over time in two separate runs of the simulator. Each
point in the graph represents the number of players using a
particular strategy at one point in time. Figures 5(a) and (b) show the
corresponding mean overall score per round. This measures the degree
of cooperation in the system: 6 is the maximum possible (achieved
when everybody cooperates) and 0 is the minimum (achieved when
everybody defects). With the file sharing payoff matrix, a mean score of 6
means everyone is able to download a file, and a 0 means that no one
[Figure 5: two panels, (a) Total Population: 60 and (b) Total Population: 120,
plotting Mean Overall Score/Round (y-axis, 0-6) against Time (x-axis, 0-1000).]
Figure 5: The mean overall per round score over time.
is able to do so. We use this metric in all later results to evaluate our
incentive techniques.
Figure 5(a) shows that the Reciprocative strategy using private
history causes a system of 60 players to converge to a cooperation
level of 3.7, but that cooperation drops to 0.5 for 120 players. One would expect the
60 player system to reach the optimal level of cooperation (6)
because all the defectors are eliminated from the system. It does not
because of asymmetry of interest. For example, suppose player B
is using Reciprocative with private history. Player A may happen to
ask for service from player B twice in succession without
providing service to player B in the interim. Player B does not know of the
service player A has provided to others, so player B will refuse
service to player A, even though player A is cooperative. We discuss
solutions to asymmetry of interest and the failure of Reciprocative
in the 120 player system in Section 4.1.
4. RECIPROCATIVE-BASED INCENTIVE
TECHNIQUES
In this section we present our incentives techniques and evaluate
their behavior by simulation. To make the exposition clear we group
our techniques by the challenges they address: large populations
and high turnover (Section 4.1), collusions (Section 4.2), zero-cost
identities (Section 4.3), and traitors (Section 4.4).
4.1 Large Populations and High Turnover
The large populations and high turnover of P2P systems makes it
less likely that repeat interactions will occur with a familiar entity.
Under these conditions, basing decisions only on private history
(records about interactions the peer has been directly involved in)
is not effective. In addition, private history does not deal well with
asymmetry of interest. For example, if player B has cooperated with
others but not with player A himself in the past, player A has no
indication of player B's generosity, and thus may unduly defect on him.
We propose two mechanisms to alleviate the problem of few repeat
transactions: server-selection and shared history.
4.1.1 Server Selection
A natural way to increase the probability of interacting with familiar
peers is by discriminating server selection. However, the
asymmetry of transactions challenges selection mechanisms. Unlike in the
prisoner's dilemma payoff matrix, where players can benefit one
another within a single transaction, transactions in GPD are
asymmetric. As a result, a player who selects her donor for the second
time without contributing to her in the interim may face a defection.
In addition, due to untraceability of defections, it is impossible to
maintain blacklists to avoid interactions with known defectors.
In order to deal with asymmetric transactions, every player holds
(fixed size) lists of both past donors and past recipients, and selects
a server from one of these lists at random with equal
probabilities. This way, users approach their past recipients and give them a
chance to reciprocate.
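
A sketch of this selection rule (ours; the list size and the fallback to a random peer when both lists are empty are our choices):

import random
from collections import deque

class ServerSelector:
    def __init__(self, max_size=20):
        self.past_donors = deque(maxlen=max_size)      # peers who served us
        self.past_recipients = deque(maxlen=max_size)  # peers we served

    def select_server(self, population):
        # Pick one of the two lists with equal probability, so past
        # recipients get a chance to reciprocate.
        pool = self.past_donors if random.random() < 0.5 else self.past_recipients
        return random.choice(list(pool)) if pool else random.choice(population)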
In scenarios with selective users, we drop the complete-availability
assumption to prevent players from being clustered into many
very small groups; instead, we assume that every player can perform
the requested service with probability p (for the results presented in
this section, p = 0.3). In addition, in order to avoid bias in favor of
the selective players, all players (including the non-discriminative
ones) select servers for games.
Figure 6 demonstrates the effectiveness of the proposed selection
mechanism in scenarios with large population sizes. We fix the
initial ratio of Reciprocative in the population (33%) while varying
the population size (between 24 and 1000). (Notice that while in
Figures 4(a) and (b) the data points demonstrate the evolution of the
system over time, each data point in this figure is the result of an
entire simulation for a specific scenario). The figure shows that the
Reciprocative decision function using private history in conjunction
with selective behavior can scale to large populations.
In Figure 7 we fix the population size and vary the turnover rate.
It demonstrates that while selective behavior is effective for low
turnover rates, as turnover gets higher, selective behavior does not
scale. This occurs because selection is only effective as long as
players from the past stay alive long enough that they can
be selected for future games.
4.1.2 Shared history
In order to mitigate asymmetry of interest and scale to higher
turnover rates, shared history is needed. Shared history means
that every peer keeps records about all of the interactions that
occur in the system, regardless of whether he was directly involved
in them or not. It allows players to leverage off of the experiences
of others in cases of few repeat transactions. It only requires that
someone has interacted with a particular player for the entire
population to observe it, and thus scales better to large populations and
high turnover, while also tolerating asymmetry of interest. Some examples
of shared history schemes are [20] [23] [28].
Figure 7 shows the effectiveness of shared history under high
turnover rates. In this figure, we fix the population size and vary the
turnover rate. While selective players with private history can only
tolerate a moderate turnover, shared history scales to turnovers of
up to approximately 0.1. This means that 10% of the players leave
the system at the end of each round. In Figure 6 we fix the turnover
and vary the population size. It shows that shared history causes
the system to converge to optimal cooperation and performance,
regardless of the size of the population.
These results show that shared history addresses all three
challenges of large populations, high turnover, and asymmetry of
transactions. Nevertheless, shared history has two disadvantages. First,
[Figure 6: Mean Overall Score/Round (y-axis, 0-6) against NumPlayers
(x-axis, 0-400) for Shared Non-Sel, Private Non-Sel, and Private Selective.]
Figure 6: Private vs. Shared History as a function of population size.
[Figure 7: Mean Overall Score/Round (y-axis, 0-6) against Turnover (x-axis,
0.0001-0.1, log scale) for Shared Non-Sel, Private Non-Sel, and Private
Selective.]
Figure 7: Performance of selection mechanism under turnover. The
x-axis is the turnover rate. The y-axis is the mean overall per round
score.
while a decentralized implementation of private history is
straightforward, implementation of shared history requires communication
overhead or centralization. A decentralized shared history can be
implemented, for example, on top of a DHT, using a peer-to-peer
storage system [36] or by disseminating information to other
entities in a similar way to routing protocols. Second, and more
fundamental, shared history is vulnerable to collusion. In the next section
we propose a mechanism that addresses this problem.
4.2 Collusion and Other Shared History
Attacks
4.2.1 Collusion
While shared history is scalable, it is vulnerable to collusion.
Collusion can be either positive (e.g. defecting entities claim that other
defecting entities cooperated with them) or negative (e.g. entities
claim that other cooperative entities defected on them). Collusion
subverts any strategy in which everyone in the system agrees on the
reputation of a player (objective reputation). An example of
objective reputation is to use the Reciprocative decision function with
shared history to count the total number of cooperations a player
has given to and received from all entities in the system; another
example is the Image strategy [28]. The effect of collusion is
magnified in systems with zero-cost identities, where users can create
fake identities that report false statements.
Instead, to deal with collusion, entities can compute reputation
subjectively, where player A weighs player B's opinions based on how
much player A trusts player B. Our subjective algorithm is based
on maxflow [24] [32]. Maxflow is a graph-theoretic problem that,
given a directed graph with weighted edges, asks for the greatest
rate at which material can be shipped from the source to the target
without violating any capacity constraints. For example, in Figure 8
each edge is labeled with the amount of traffic that can travel on it.
The maxflow algorithm computes the maximum amount of traffic
that can go from the source (s) to the target (t) without violating the
constraints. In this example, even though there is a loop of high
capacity edges, the maxflow between the source and the target is only
2 (the numbers in brackets represent the actual flow on each edge
in the solution).
[Figure 8: a directed graph from source s to target t; each edge is labeled
with a capacity (e.g., 100, 20, 10, 5, 1) and, in brackets, the flow it
carries in the solution.]
Figure 8: Each edge in the graph is labeled with its capacity and the
actual flow it carries in brackets. The maxflow between the source and
the target in the graph is 2.
[Figure 9: a cooperation graph in which a clique of colluders (C) report
bogus high-capacity (100) edges among themselves, while nodes A and B are
linked to the clique only by zero- or low-capacity (20) edges.]
Figure 9: This graph illustrates the robustness of maxflow in the
presence of colluders who report bogus high reputation values.
We apply the maxflow algorithm by constructing a graph whose
vertices are entities and the edges are the services that entities have
received from each other. This information can be stored using the
same methods as the shared history. A maxflow is the greatest level
of reputation the source can give to the sink without violating
reputation capacity constraints. As a result, nodes who dishonestly
report high reputation values will not be able to subvert the
reputation system.
Figure 9 illustrates a scenario in which all the colluders (labeled
with C) report high reputation values for each other. When node A
computes the subjective reputation of B using the maxflow
algorithm, it will not be affected by the local false reputation values,
rather the maxflow in this case will be 0. This is because no service
has been received from any of the colluders.
In our algorithm, the benefit that entity i has received (indirectly)
from entity j is the maxflow from j to i. Conversely, the benefit that
entity i has provided indirectly to j is the maxflow from i to j. The
subjective reputation of entity j as perceived by i is:
min( maxflow(j to i) / maxflow(i to j), 1 ).    (3)
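
Equation 3 can be computed with an off-the-shelf maxflow routine. The sketch below uses networkx (our library choice, not the authors'), an edge attribute named "service" for the amount of service provided along each edge, and our own conventions for the zero-flow corner cases:

import networkx as nx

def subjective_reputation(G, i, j):
    # Reputation of j as perceived by i (Equation 3). G is a DiGraph whose
    # edge attribute "service" records service provided from one entity to
    # another.
    if not (G.has_node(i) and G.has_node(j)) or i == j:
        return 0.0
    received = nx.maximum_flow_value(G, j, i, capacity="service")  # maxflow(j to i)
    provided = nx.maximum_flow_value(G, i, j, capacity="service")  # maxflow(i to j)
    if provided == 0:
        return 1.0 if received > 0 else 0.0  # our convention, not the paper's
    return min(received / provided, 1.0)

# Example: A served B five times, B served A twice, so A perceives B's
# reputation as min(2/5, 1) = 0.4.
G = nx.DiGraph()
G.add_edge("A", "B", service=5)
G.add_edge("B", "A", service=2)
print(subjective_reputation(G, "A", "B"))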
[Figure 10: Mean Overall Score/Round (y-axis, 0-6) against Population
(x-axis, 0-1000) for Shared, Private, and Subjective history.]
Figure 10: Subjective shared history compared to objective shared
history and private history in the presence of colluders.
Algorithm 1 CONSTANTTIMEMAXFLOW: bound the mean running time
of Maxflow to a constant.
method CTMaxflow(self, src, dst)
1: self.surplus ← self.surplus + self.increment
   {Use the running mean as a prediction.}
2: if random() > (0.5 ∗ self.surplus / self.mean_iterations) then
3:   return None {Not enough surplus to run.}
4: end if
   {Get the flow and number of iterations used from the maxflow algorithm.}
5: flow, iterations ← Maxflow(self.G, src, dst)
6: self.surplus ← self.surplus − iterations
   {Keep a running mean of the number of iterations used.}
7: self.mean_iterations ← self.α ∗ self.mean_iterations + (1 − self.α) ∗ iterations
8: return flow
The cost of maxflow is its long running time. The standard
preflow-push maxflow algorithm has a worst-case running time of O(V³).
Instead, we use Algorithm 1, which has a constant mean running
time, but sometimes returns no flow even though one exists. The
essential idea is to bound the mean number of nodes examined
during the maxflow computation. This bounds the overhead, but also
bounds the effectiveness. Despite this, the results below show that
a maxflow-based Reciprocative decision function scales to higher
populations than one using private history.
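
For reference, a direct Python transcription of Algorithm 1 (ours): Maxflow is assumed to be an external routine returning the flow value and the number of iterations it used, the initial value of the running mean is our guess, and the parameter defaults follow the values used later in this section (increment = 100, α = 0.9):

import random

class ConstantTimeMaxflow:
    def __init__(self, G, increment=100, alpha=0.9):
        self.G = G
        self.increment = increment
        self.alpha = alpha
        self.surplus = 0.0
        self.mean_iterations = 1.0   # initial guess (unspecified in the paper)

    def ct_maxflow(self, src, dst):
        self.surplus += self.increment
        # Use the running mean as a prediction of the next run's cost.
        if random.random() > 0.5 * self.surplus / self.mean_iterations:
            return None              # not enough surplus to run
        flow, iterations = Maxflow(self.G, src, dst)  # assumed external routine
        self.surplus -= iterations
        # Keep a running mean of the number of iterations used.
        self.mean_iterations = (self.alpha * self.mean_iterations
                                + (1 - self.alpha) * iterations)
        return flow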
Figure 10 compares the effectiveness of subjective reputation to
objective reputation in the presence of colluders. In these
scenarios, defectors collude by claiming that other colluders that they
encounter gave them 100 cooperations for that encounter. Also, the
parameters for Algorithm 1 are set as follows: increment = 100,
α = 0.9.
As in previous sections, Reciprocative with private history results
in cooperation up to a point, beyond which it fails. The difference
here is that objective shared history fails for all population sizes.
This is because the Reciprocative players cooperate with the
colluders because of their high reputations. However, subjective
history can reach high levels of cooperation regardless of colluders.
This is because there are no high weight paths in the cooperation
graph from colluders to any non-colluders, so the maxflow from
a colluder to any non-colluder is 0. Therefore, a subjective
Reciprocative player will conclude that that colluder has not provided
any service to her and will reject service to the colluder. Thus, the
maxflow algorithm enables Reciprocative to maintain the
scalability of shared history without being vulnerable to collusion or
requiring centralized trust (e.g., trusted peers). Since we bound the
running time of the maxflow algorithm, cooperation decreases as
the population size increases, but the key point is that the subjective
Reciprocative decision function scales to higher populations than
one using private history. This advantage only increases over time
as CPU power increases and more cycles can be devoted to running
the maxflow algorithm (by increasing the increment parameter).
Despite the robustness of the maxflow algorithm to the simple form
of collusion described previously, it still has vulnerabilities to more
sophisticated attacks. One is for an entity (the mole) to provide
service and then lie positively about other colluders. The other
colluders can then exploit their reputation to receive service. However,
the effectiveness of this attack relies on the amount of service that
the mole provides. Since the mole is paying all of the cost of
providing service and receiving none of the benefit, she has a strong
incentive to stop colluding and try another strategy. This forces the
colluders to use mechanisms to maintain cooperation within their
group, which may drive the cost of collusion to exceed the benefit.
4.2.2 False reports
Another attack is for a defector to lie about receiving or providing
service to another entity. There are four possible actions that can be
lied about: providing service, not providing service, receiving
service, and not receiving service. Falsely claiming to receive service
is the simple collusion attack described above. Falsely claiming not
to have provided service provides no benefit to the attacker.
Falsely claiming to have provided service or not to have received
it allows an attacker to boost her own reputation and/or lower the
reputation of another entity. An entity may want to lower another
entity's reputation in order to discourage others from selecting it,
so that the attacker can use its service exclusively. These false claims are clearly
identifiable in the shared history as inconsistencies where one entity
claims a transaction occurred and another claims it did not. To limit
this attack, we modify the maxflow algorithm so that an entity
always believes the entity that is closer to him in the flow graph. If
both entities are equally distant, then the disputed edge in the flow is
not critical to the evaluation and is ignored. This modification
prevents those cases where the attacker is making false claims about an
entity that is closer than her to the evaluating entity, which prevents
her from boosting her own reputation. The remaining possibilities
are for the attacker to falsely claim to have provided service to or
not to have received it from a victim entity that is farther from the
evaluator than her. In these cases, an attacker can only lower the
reputation of the victim. The effectiveness of doing this is limited
by the number of services provided and received by the attacker,
which makes executing this attack expensive.
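
One concrete way to realize this closer-entity rule (our sketch, not the authors' exact modification) is to resolve each disputed claim by comparing hop distances from the evaluating entity in the flow graph:

import networkx as nx

def resolve_disputes(G, evaluator, disputes):
    # disputes: (claimant, other, amount) triples where the claimant asserts
    # a transaction that the other entity denies. Distances are taken over
    # the undirected graph; this choice is ours.
    dist = nx.single_source_shortest_path_length(G.to_undirected(), evaluator)
    H = G.copy()
    for claimant, other, amount in disputes:
        d_claim = dist.get(claimant, float("inf"))
        d_other = dist.get(other, float("inf"))
        if d_claim < d_other:
            H.add_edge(claimant, other, service=amount)   # believe the claimant
        elif d_claim == d_other and H.has_edge(claimant, other):
            H.remove_edge(claimant, other)                # tie: ignore the edge
        # d_claim > d_other: believe the denial and leave the claim out.
    return H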
4.3 Zero-Cost Identities
History assumes that entities maintain persistent identities.
However, in most P2P systems, identities are zero-cost. This is desirable
for network growth as it encourages newcomers to join the system.
However, this also allows misbehaving users to escape the
consequences of their actions by switching to new identities (i.e.,
whitewashing). Whitewashers can cause the system to collapse if they
are not punished appropriately. Unfortunately, a player cannot tell
if a stranger is a whitewasher or a legitimate newcomer. Always
cooperating with strangers encourages newcomers to join, but at the
same time encourages whitewashing behavior. Always defecting on
strangers prevents whitewashing, but discourages newcomers from
joining and may also initiate unfavorable cycles of defection.
This tension suggests that any stranger policy that has a fixed
probability of cooperating with strangers will fail by either being too
stingy when most strangers are newcomers or too generous when
most strangers are whitewashers. Our solution is the Stranger
Adaptive stranger policy. The idea is to be generous to strangers
when they are being generous and stingy when they are stingy.
Let ps and cs be the number of services that strangers have
provided and consumed, respectively. The probability that a player
using Stranger Adaptive helps a stranger is ps/cs. However, we do
not wish to keep these counts permanently (for reasons described in
Section 4.4). Also, players may not know when strangers defect
because defections are untraceable (as described in Section 2).
Consequently, instead of keeping ps and cs, we assume that k = ps + cs,
where k is a constant and we keep the running ratio r = ps/cs.
When we need to increment ps or cs, we generate the current
values of ps and cs from k and r:
cs = k/(1 + r)
ps = cs ∗ r
We then compute the new r as follows:
r = (ps + 1)/cs,  if the stranger provided service
r = ps/(cs + 1),  if the stranger consumed service
This method allows us to keep a running ratio that reflects the
recent generosity of strangers without knowing when strangers have
defected.
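
This bookkeeping fits in a few lines (sketch; the initial ratio r and the cap on the cooperation probability are our assumptions, while k = 10 matches the value used in the simulations below):

class StrangerAdaptive:
    def __init__(self, k=10, r=1.0):
        self.k = k   # assumed constant k = ps + cs
        self.r = r   # running ratio r = ps / cs

    def prob_cooperate_with_stranger(self):
        return min(self.r, 1.0)

    def update(self, stranger_provided_service):
        # Regenerate ps and cs from k and r, then fold in the new observation.
        cs = self.k / (1 + self.r)
        ps = cs * self.r
        if stranger_provided_service:
            self.r = (ps + 1) / cs
        else:
            self.r = ps / (cs + 1)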
[Figure 11: Mean Overall Score/Round (y-axis, 0-6) against Turnover (x-axis,
0.0001-1, log scale) for the Stranger Cooperate, Stranger Defect, and
Stranger Adaptive policies.]
Figure 11: Different stranger policies for Reciprocative with shared
history. The x-axis is the turnover rate on a log scale. The y-axis is the
mean overall per round score.
Figures 11 and 12 compare the effectiveness of the
Reciprocative strategy using different policies toward strangers. Figure 11
[Figure 12: Mean Overall Score/Round (y-axis, 0-6) against Turnover (x-axis,
0.0001-1, log scale) for the same three stranger policies.]
Figure 12: Different stranger policies for Reciprocative with private
history. The x-axis is the turnover rate on a log scale. The y-axis is the
mean overall per round score.
compares different stranger policies for Reciprocative with shared
history, while Figure 12 is with private history. In both figures,
the players using the 100% Defect strategy change their
identity (whitewash) after every transaction and are indistinguishable
from legitimate newcomers. The Reciprocative players using the
Stranger Cooperate policy completely fail to achieve cooperation.
This stranger policy allows whitewashers to maximize their payoff
and consequently provides a high incentive for users to switch to
whitewashing.
In contrast, Figure 11 shows that the Stranger Defect policy is
effective with shared history. This is because whitewashers always
appear to be strangers and therefore the Reciprocative players will
always defect on them. This is consistent with previous work [13]
showing that punishing strangers deals with whitewashers.
However, Figure 12 shows that Stranger Defect is not effective with
private history. This is because Reciprocative requires some initial
cooperation to bootstrap. In the shared history case, a Reciprocative
player can observe that another player has already cooperated with
others. With private history, the Reciprocative player only knows
about the other players" actions toward her. Therefore, the initial
defection dictated by the Stranger Defect policy will lead to later
defections, which will prevent Reciprocative players from ever
cooperating with each other. In other simulations not shown here,
the Stranger Defect stranger policy fails even with shared history
when there are no initial 100% Cooperate players.
Figure 11 shows that with shared history, the Stranger
Adaptive policy performs as well as the Stranger Defect policy until the
turnover rate is very high (10% of the population turning over after
every transaction). In these scenarios, Stranger Adaptive is using
k = 10 and each player keeps a private r. More importantly, it is
significantly better than the Stranger Defect policy with private
history because it can bootstrap cooperation. Although the Stranger
Defect policy is marginally more effective than Stranger
Adaptive at very high rates of turnover, P2P systems are unlikely to
operate there because other services (e.g., routing) also cannot tolerate
very high turnover.
We conclude that of the stranger policies that we have explored,
Stranger Adaptive is the most effective. By using Stranger
Adaptive, P2P systems with zero-cost identities and a sufficiently
low turnover can sustain cooperation without a centralized
allocation of identities.
4.4 Traitors
Traitors are players who acquire high reputation scores by
cooperating for a while, and then traitorously turn into defectors before
leaving the system. They model both users who turn deliberately
to gain a higher score and cooperators whose identities have been
stolen and exploited by defectors. A strategy that maintains
long-term history without discriminating between old and recent actions
becomes highly vulnerable to exploitation by these traitors.
The top two graphs in Figure 13 demonstrate the effect of traitors
on cooperation in a system where players keep long-term history
(never clear history). In these simulations, we run for 2000 rounds
and allow cooperative players to keep their identities when
switching to the 100% Defector strategy. We use the default values for the
other parameters. Without traitors, the cooperative strategies thrive.
With traitors, the cooperative strategies thrive until a cooperator
turns traitor after 600 rounds. As this cooperator exploits her
reputation to achieve a high score, other cooperative players notice this
and follow suit via learning. Cooperation eventually collapses. On
the other hand, if we maintain short-term history and/or
discounting ancient history vis-a-vis recent history, traitors can be quickly
detected, and the overall cooperation level stays high, as shown in
the bottom two graphs in Figure 13.
[Figure 13: four panels arranged as Long-Term History (top row) vs.
Short-Term History (bottom row) and No Traitors (left column) vs. Traitors
(right column), plotting Population (y-axis, 0-100) against Time (x-axis, up
to 2K rounds) for the Defector, Cooperator, and Recip. Shared strategies.]
Figure 13: Keeping long-term vs. short-term history both with and
without traitors.
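
A minimal sketch of the discounting variant (the decay factor and the history layout are ours):

def decay_history(history, gamma=0.9):
    # Called once per round: old cooperation/consumption counts fade, so a
    # traitor's stale reputation is quickly outweighed by recent defections.
    for record in history.values():
        record["provided"] *= gamma
        record["consumed"] *= gamma
    return history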
5. RELATED WORK
Previous work has examined the incentive problem as applied to
societies in general and more recently to Internet applications and
peer-to-peer systems in particular. A well-known phenomenon in
this context is the tragedy of the commons [18], where resources
are under-provisioned due to selfish users who free-ride on the
system's resources; this problem is especially common in large networks [29]
[3].
The problem has been extensively studied using a game-theoretic
approach. The prisoners' dilemma model provides a
natural framework to study the effectiveness of different strategies in
establishing cooperation among players. In a simulation
environment with many repeated games, persistent identities, and no
collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates.
Our model assumes that strategy growth follows local learning rather than
evolutionary dynamics [14], and also allows for more kinds of attacks.
Nowak and Sigmund [28] introduce the Image strategy and
demonstrate its ability to establish cooperation among players despite few
repeat transactions by the employment of shared history. Players
using Image cooperate with players whose global count of
cooperations minus defections exceeds some threshold. As a result, an
Image player is either vulnerable to partial defectors (if the
threshold is set too low) or does not cooperate with other Image players
(if the threshold is set too high).
In recent years, researchers have used economic mechanism
design theory to tackle the cooperation problem in Internet
applications. Mechanism design is the inverse of game theory. It asks how
to design a game in which the behavior of strategic players results
in the socially desired outcome. Distributed Algorithmic
Mechanism Design seeks solutions within this framework that are both
fully distributed and computationally tractable [12]. [10] and [11]
are examples of applying DAMD to BGP routing and multicast cost
sharing. More recently, DAMD has been also studied in dynamic
environments [38]. In this context, demonstrating the superiority of
a cooperative strategy (as in the case of our work) is consistent with
the objective of incentivizing the desired behavior among selfish
players.
The unique challenges imposed by peer-to-peer systems inspired
additional body of work [5] [37], mainly in the context of packet
forwarding in wireless ad-hoc routing [8] [27] [30] [35], and file
sharing [15] [31]. Friedman and Resnick [13] consider the
problem of zero-cost identities in online environments and find that in
such systems punishing all newcomers is inevitable. Using a
theoretical model, they demonstrate that such a system can converge to
cooperation only for sufficiently low turnover rates, which our
results confirm. [6] and [9] show that whitewashing and collusion can
have dire consequences for peer-to-peer systems and are difficult to
prevent in a fully decentralized system.
Some commercial file sharing clients [1] [2] provide incentive
mechanisms which are enforced by making it difficult for the user
to modify the source code. These mechanisms can be circumvented
by a skilled user or by a competing company releasing a compatible
client without the incentive restrictions. Also, these mechanisms are
still vulnerable to zero-cost identities and collusion. BitTorrent [7]
uses Tit-for-Tat as a method for resource allocation, where a user's
upload rate dictates his download rate.
6. CONCLUSIONS
In this paper we take a game theoretic approach to the
problem of cooperation in peer-to-peer networks. Addressing the
challenges imposed by P2P systems, including large populations, high
turnover, asymmetry of interest and zero-cost identities, we propose
a family of scalable and robust incentive techniques, based upon
the Reciprocative decision function, to support cooperative
behavior and improve overall system performance.
We find that the adoption of shared history and discriminating
server selection techniques can mitigate the challenge of few repeat
transactions that arises due to large population size, high turnover
and asymmetry of interest. Furthermore, cooperation can be
established even in the presence of zero-cost identities through the use of
an adaptive policy towards strangers. Finally, colluders and traitors
can be kept in check via subjective reputations and short-term
history, respectively.
7. ACKNOWLEDGMENTS
We thank Mary Baker, T.J. Giuli, Petros Maniatis, the
anonymous reviewer, and our shepherd, Margo Seltzer, for their useful
comments that helped improve the paper. This work is supported
in part by the National Science Foundation under ITR awards
ANI-0085879 and ANI-0331659, and Career award ANI-0133811.
Views and conclusions contained in this document are those of the
authors and should not be interpreted as representing the official
policies, either expressed or implied, of NSF, or the U.S.
government.
8. REFERENCES
[1] Kazaa. http://www.kazaa.com.
[2] Limewire. http://www.limewire.com.
[3] ADAR, E., AND HUBERMAN, B. A. Free Riding on Gnutella. First
Monday 5, 10 (October 2000).
[4] AXELROD, R. The Evolution of Cooperation. Basic Books, 1984.
[5] BURAGOHAIN, C., AGRAWAL, D., AND SURI, S. A
Game-Theoretic Framework for Incentives in P2P Systems. In
International Conference on Peer-to-Peer Computing (Sep 2003).
[6] CASTRO, M., DRUSCHEL, P., GANESH, A., ROWSTRON, A., AND
WALLACH, D. S. Security for Structured Peer-to-Peer Overlay
Networks. In Proceedings of Multimedia Computing and Networking
2002 (MMCN "02) (2002).
[7] COHEN, B. Incentives build robustness in bittorrent. In 1st Workshop
on Economics of Peer-to-Peer Systems (2003).
[8] CROWCROFT, J., GIBBENS, R., KELLY, F., AND ÖSTRING, S.
Modeling Incentives for Collaboration in Mobile ad-hoc Networks.
In Modeling and Optimization in Mobile, ad-hoc and Wireless
Networks (2003).
[9] DOUCEUR, J. R. The Sybil Attack. In Electronic Proceedings of the
International Workshop on Peer-to-Peer Systems (2002).
[10] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER,
S. A BGP-based Mechanism for Lowest-Cost Routing. In
Proceedings of the ACM Symposium on Principles of Distributed
Computing (2002).
[11] FEIGENBAUM, J., PAPADIMITRIOU, C., AND SHENKER, S. Sharing
the Cost of Multicast Transmissions. In Journal of Computer and
System Sciences (2001), vol. 63, pp. 21-41.
[12] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic
Mechanism Design: Recent Results and Future Directions. In
Proceedings of the International Workshop on Discrete Algorithms
and Methods for Mobile Computing and Communications (2002).
[13] FRIEDMAN, E., AND RESNICK, P. The Social Cost of Cheap
Pseudonyms. Journal of Economics and Management Strategy 10, 2
(1998), 173-199.
[14] FUDENBERG, D., AND LEVINE, D. K. The Theory of Learning in
Games. The MIT Press, 1999.
[15] GOLLE, P., LEYTON-BROWN, K., MIRONOV, I., AND
LILLIBRIDGE, M. Incentives For Sharing in Peer-to-Peer Networks.
In Proceedings of the 3rd ACM conference on Electronic Commerce,
October 2001 (2001).
[16] GROSS, B., AND ACQUISTI, A. Balances of Power on eBay: Peers
or Unequals? In Workshop on Economics of Peer-to-Peer Networks
(2003).
[17] GU, B., AND JARVENPAA, S. Are Contributions to P2P Technical
Forums Private or Public Goods? - An Empirical Investigation. In 1st
Workshop on Economics of Peer-to-Peer Systems (2003).
[18] HARDIN, G. The Tragedy of the Commons. Science 162 (1968),
1243-1248.
[19] JOSEF HOFBAUER AND KARL SIGMUND. Evolutionary Games and
Population Dynamics. Cambridge University Press, 1998.
[20] KAMVAR, S. D., SCHLOSSER, M. T., AND GARCIA-MOLINA, H.
The EigenTrust Algorithm for Reputation Management in P2P
Networks. In Proceedings of the Twelfth International World Wide
Web Conference (May 2003).
[21] KAN, G. Peer-to-Peer: Harnessing the Power of Disruptive
Technologies, 1st ed. O'Reilly & Associates, Inc., March 2001,
ch. Gnutella, pp. 94-122.
[22] KUHN, S. Prisoner's Dilemma. In The Stanford Encyclopedia of
Philosophy, Edward N. Zalta, Ed., Summer ed. 2003.
[23] LEE, S., SHERWOOD, R., AND BHATTACHARJEE, B. Cooperative
Peer Groups in Nice. In Proceedings of the IEEE INFOCOM (2003).
[24] LEVIEN, R., AND AIKEN, A. Attack-Resistant Trust Metrics for
Public Key Certification. In Proceedings of the USENIX Security
Symposium (1998), pp. 229-242.
[25] MANIATIS, P., ROUSSOPOULOS, M., GIULI, T. J., ROSENTHAL,
D. S. H., BAKER, M., AND MULIADI, Y. Preserving Peer Replicas
by Rate-Limited Sampled Voting. In ACM Symposium on Operating
Systems Principles (2003).
[26] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. Mitigating
Routing Misbehavior in Mobile ad-hoc Networks. In Proceedings of
MobiCom (2000), pp. 255-265.
[27] MICHIARDI, P., AND MOLVA, R. A Game Theoretical Approach to
Evaluate Cooperation Enforcement Mechanisms in Mobile ad-hoc
Networks. In Modeling and Optimization in Mobile, ad-hoc and
Wireless Networks (2003).
[28] NOWAK, M. A., AND SIGMUND, K. Evolution of Indirect
Reciprocity by Image Scoring. Nature 393 (1998), 573-577.
[29] OLSON, M. The Logic of Collective Action: Public Goods and the
Theory of Groups. Harvard University Press, 1971.
[30] RAGHAVAN, B., AND SNOEREN, A. Priority Forwarding in ad-hoc
Networks with Self-Interested Parties. In Workshop on Economics of
Peer-to-Peer Systems (June 2003).
[31] RANGANATHAN, K., RIPEANU, M., SARIN, A., AND FOSTER, I.
To Share or Not to Share: An Analysis of Incentives to Contribute in
Collaborative File Sharing Environments. In Workshop on Economics
of Peer-to-Peer Systems (June 2003).
[32] REITER, M. K., AND STUBBLEBINE, S. G. Authentication Metric
Analysis and Design. ACM Transactions on Information and System
Security 2, 2 (1999), 138-158.
[33] SAROIU, S., GUMMADI, P. K., AND GRIBBLE, S. D. A
Measurement Study of Peer-to-Peer File Sharing Systems. In
Proceedings of Multimedia Computing and Networking 2002
(MMCN "02) (2002).
[34] SMITH, J. M. Evolution and the Theory of Games. Cambridge
University Press, 1982.
[35] URPI, A., BONUCCELLI, M., AND GIORDANO, S. Modeling
Cooperation in Mobile ad-hoc Networks: a Formal Description of
Selfishness. In Modeling and Optimization in Mobile, ad-hoc and
Wireless Networks (2003).
[36] VISHNUMURTHY, V., CHANDRAKUMAR, S., AND SIRER, E. G.
KARMA : A Secure Economic Framework for P2P Resource
Sharing. In Workshop on Economics of Peer-to-Peer Networks (2003).
[37] WANG, W., AND LI, B. To Play or to Control: A Game-based
Control-Theoretic Approach to Peer-to-Peer Incentive Engineering.
In International Workshop on Quality of Service (June 2003).
[38] WOODARD, C. J., AND PARKES, D. C. Strategyproof mechanisms
for ad-hoc network formation. In Workshop on Economics of
Peer-to-Peer Systems (June 2003).
| generosity;prisoner dilemma;p2p system;reputation;free-ride;selfinterested user;peer-to-peer;maxflow-based algorithm;stranger adaptive;cheap pseudonym;game-theoretic approach;reciprocative peer;mutual cooperation;parameter nominal value;adaptive stranger policy;incentive for cooperation;collusion;whitewasher;reciprocative decision function;stranger defect;asymmetric payoff;whitewash;incentive |
train_J-70 | Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions∗ | Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach-called automated mechanism design-a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences. We show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy. | 1. INTRODUCTION
In multiagent settings, often an outcome must be
chosen on the basis of the preferences reported by a group of
agents. Such outcomes could be potential presidents, joint
plans, allocations of goods or resources, etc. The preference
aggregator generally does not know the agents' preferences
a priori. Rather, the agents report their preferences to the
coordinator. Unfortunately, an agent may have an incentive
to misreport its preferences in order to mislead the
mechanism into selecting an outcome that is more desirable to
the agent than the outcome that would be selected if the
agent revealed its preferences truthfully. Such manipulation
is undesirable because preference aggregation mechanisms
are tailored to aggregate preferences in a socially desirable
way, and if the agents reveal their preferences insincerely, a
socially undesirable outcome may be chosen.
Manipulability is a pervasive problem across preference
aggregation mechanisms. A seminal negative result, the
Gibbard-Satterthwaite theorem, shows that under any
nondictatorial preference aggregation scheme, if there are at
least 3 possible outcomes, there are preferences under which
an agent is better off reporting untruthfully [10, 23]. (A
preference aggregation scheme is called dictatorial if one of
the agents dictates the outcome no matter what preferences
the other agents report.)
What the aggregator would like to do is design a
preference aggregation mechanism so that 1) the self-interested
agents are motivated to report their preferences truthfully,
and 2) the mechanism chooses an outcome that is desirable
from the perspective of some objective. This is the classic
setting of mechanism design in game theory. In this paper,
we study the case where the designer is self-interested, that
is, the designer does not directly care about how the
outcome relates to the agents' preferences, but is rather
concerned with its own agenda for which outcome should be
chosen, and with maximizing payments to itself. This is the
mechanism design setting most relevant to electronic
commerce.
In the case where the mechanism designer is interested in
maximizing some notion of social welfare, the importance
of collecting the agents' preferences is clear. It is perhaps
less obvious why they should be collected when the designer
is self-interested and hence its objective is not directly
related to the agents" preferences. The reason for this is that
often the agents" preferences impose limits on how the
designer chooses the outcome and payments. The most
common such constraint is that of individual rationality (IR),
which means that the mechanism cannot make any agent
worse off than the agent would have been had it not
participated in the mechanism. For instance, in the setting
of optimal auction design, the designer (auctioneer) is only
concerned with how much revenue is collected, and not per
se with how well the allocation of the good (or goods)
corresponds to the agents' preferences. Nevertheless, the
designer cannot force an agent to pay more than its valuation
for the bundle of goods allocated to it. Therefore, even a
self-interested designer will choose an outcome that makes
the agents reasonably well off. On the other hand, the
designer will not necessarily choose a social welfare
maximizing outcome. For example, if the designer always chooses
an outcome that maximizes social welfare with respect to
the reported preferences, and forces each agent to pay the
difference between the utility it has now and the utility it
would have had if it had not participated in the mechanism,
it is easy to see that agents may have an incentive to
misreport their preferences-and this may actually lead to less
revenue being collected. Indeed, one of the counterintuitive
results of optimal auction design theory is that sometimes
the good is allocated to nobody even when the auctioneer
has a reservation price of 0.
Classical mechanism design provides some general
mechanisms, which, under certain assumptions, satisfy some
notion of nonmanipulability and maximize some objective. The
upside of these mechanisms is that they do not rely on
(even probabilistic) information about the agents'
preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24,
4, 11]), or they can be easily applied to any probability
distribution over the preferences (e.g., the dAGVA
mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley
multi-unit auction [17]). However, the general mechanisms
also have significant downsides:
• The most famous and most broadly applicable general
mechanisms, VCG and dAGVA, only maximize social
welfare. If the designer is self-interested, as is the case
in many electronic commerce settings, these
mechanisms do not maximize the designer's objective.
• The general mechanisms that do focus on a
selfinterested designer are only applicable in very restricted
settings-such as Myerson"s expected revenue
maximizing auction for selling a single item, and Maskin
and Riley"s expected revenue maximizing auction for
selling multiple identical units of an item.
• Even in the restricted settings in which these
mechanisms apply, the mechanisms only allow for payment
maximization. In practice, the designer may also be
interested in the outcome per se. For example, an
auctioneer may care which bidder receives the item.
• It is often assumed that side payments can be used
to tailor the agents' incentives, but this is not always
practical. For example, in barter-based electronic
marketplaces-such as Recipco, firstbarter.com,
BarterOne, and Intagio-side payments are not
allowed. Furthermore, among software agents, it might
be more desirable to construct mechanisms that do not
rely on the ability to make payments, because many
software agents do not have the infrastructure to make
payments.
In contrast, we follow a recent approach where the
mechanism is designed automatically for the specific problem at
hand. This approach addresses all of the downsides listed
above. We formulate the mechanism design problem as an
optimization problem. The input is characterized by the
number of agents, the agents' possible types (preferences),
and the aggregator's prior distributions over the agents'
types. The output is a nonmanipulable mechanism that is
optimal with respect to some objective. This approach is
called automated mechanism design.
The automated mechanism design approach has four
advantages over the classical approach of designing general
mechanisms. First, it can be used even in settings that
do not satisfy the assumptions of the classical mechanisms
(such as availability of side payments or that the objective
is social welfare). Second, it may allow one to circumvent
impossibility results (such as the Gibbard-Satterthwaite
theorem) which state that there is no mechanism that is
desirable across all preferences. When the mechanism is designed
for the setting at hand, it does not matter that it would
not work more generally. Third, it may yield better
mechanisms (in terms of stronger nonmanipulability guarantees
and/or better outcomes) than classical mechanisms because
the mechanism capitalizes on the particulars of the setting
(the probabilistic information that the designer has about
the agents" types). Given the vast amount of information
that parties have about each other today, this approach is
likely to lead to tremendous savings over classical
mechanisms, which largely ignore that information. For example,
imagine a company automatically creating its procurement
mechanism based on statistical knowledge about its
suppliers, rather than using a classical descending procurement
auction. Fourth, the burden of design is shifted from
humans to a machine.
However, automated mechanism design requires the
mechanism design optimization problem to be solved anew for
each setting. Hence its computational complexity becomes
a key issue. Previous research has studied this question for
benevolent designers-that wish to maximize, for example,
social welfare [5, 6]. In this paper we study the
computational complexity of automated mechanism design in the
case of a self-interested designer. This is an important
setting for automated mechanism design due to the shortage
of general mechanisms in this area, and the fact that in
most e-commerce settings the designer is self-interested. We
also show that this problem is closely related to a particular
optimal (revenue-maximizing) combinatorial auction design
problem.
The rest of this paper is organized as follows. In
Section 2, we justify the focus on nonmanipulable mechanisms.
In Section 3, we define the problem we study. In Section 4,
we show that designing an optimal deterministic mechanism
is NP-complete even when the designer only cares about the
payments made to it. In Section 5, we show that designing
an optimal deterministic mechanism is also NP-complete
when payments are not possible and the designer is only
interested in the outcome chosen. In Section 6, we show that
an optimal randomized mechanism can be designed in
polynomial time even in the general case. Finally, in Section 7,
we show that for designing optimal combinatorial auctions
under best-only preferences, our results on AMD imply that
this problem is NP-complete for deterministic auctions, but
easy for randomized auctions.
2. JUSTIFYING THE FOCUS ON
NONMANIPULABLE MECHANISMS
Before we define the computational problem of automated
mechanism design, we should justify our focus on
nonmanipulable mechanisms. After all, it is not immediately
obvious that there are no manipulable mechanisms that, even
when agents report their types strategically and hence
sometimes untruthfully, still reach better outcomes (according to
whatever objective we use) than any nonmanipulable
mechanism. This does, however, turn out to be the case: given
any mechanism, we can construct a nonmanipulable
mechanism whose performance is identical, as follows. We build
an interface layer between the agents and the original
mechanism. The agents report their preferences (or types) to
the interface layer; subsequently, the interface layer inputs
into the original mechanism the types that the agents would
have strategically reported to the original mechanism, if their
types were as declared to the interface layer. The resulting
outcome is the outcome of the new mechanism. Since the
interface layer acts strategically on each agent's behalf,
there is never an incentive to report falsely to the interface
layer; and hence, the types reported by the interface layer
are the strategic types that would have been reported
without the interface layer, so the results are exactly as they
would have been with the original mechanism. This
argument is known in the mechanism design literature as the
revelation principle [16]. (There are computational difficulties
with applying the revelation principle in large combinatorial
outcome and type spaces [7, 22]. However, because here we
focus on flatly represented outcome and type spaces, this is
not a concern here.) Given this, we can focus on truthful
mechanisms in the rest of the paper.
3. DEFINITIONS
We now formalize the automated mechanism design
setting.
Definition 1. In an automated mechanism design
setting, we are given:
• a finite set of outcomes O;
• a finite set of N agents;
• for each agent i,
1. a finite set of types Θi,
2. a probability distribution γi over Θi (in the case of
correlated types, there is a single joint distribution
γ over Θ1 × . . . × ΘN ), and
3. a utility function ui : Θi × O → R; 1
• An objective function whose expectation the designer
wishes to maximize.
There are many possible objective functions the designer
might have, for example, social welfare (where the designer
seeks to maximize the sum of the agents' utilities), or the
minimum utility of any agent (where the designer seeks to
maximize the worst utility had by any agent). In both
of these cases, the designer is benevolent, because the
designer, in some sense, is pursuing the agents" collective
happiness. However, in this paper, we focus on the case of
a self-interested designer. A self-interested designer cares
only about the outcome chosen (that is, the designer does
not care how the outcome relates to the agents' preferences,
but rather has a fixed preference over the outcomes), and
about the net payments made by the agents, which flow to
the designer.
Definition 2. A self-interested designer has an objective
function given by $g(o) + \sum_{i=1}^{N} \pi_i$, where g : O → R indicates
the designer's own preference over the outcomes, and πi is
the payment made by agent i. In the case where g = 0
everywhere, the designer is said to be payment maximizing.
In the case where payments are not possible, g constitutes
the objective function by itself.
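As a tiny numeric illustration of this objective (the outcome labels, g values, and payments below are invented for this sketch, not taken from the paper):

```python
# Hypothetical values illustrating Definition 2's objective g(o) + sum_i pi_i;
# a payment-maximizing designer is the special case g = 0 everywhere.
g = {"o0": 0.0, "o1": 1.0}   # designer's own preference over outcomes
payments = [2.5, 1.0]        # pi_1, pi_2: payments flowing to the designer
print(g["o1"] + sum(payments))  # 4.5 when outcome o1 is chosen
```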
We now define the kinds of mechanisms under study. By
the revelation principle, we can restrict attention to
truthful, direct revelation mechanisms, where agents report their
types directly and never have an incentive to misreport them.
Definition 3. We consider the following kinds of
mechanism:
• A deterministic mechanism without payments consists
of an outcome selection function o : Θ1 × Θ2 × . . . ×
ΘN → O.
• A randomized mechanism without payments consists
of a distribution selection function p : Θ1 × Θ2 × . . . ×
ΘN → P(O), where P(O) is the set of probability
distributions over O.
• A deterministic mechanism with payments consists of
an outcome selection function o : Θ1 ×Θ2 ×. . .×ΘN →
O and for each agent i, a payment selection function
πi : Θ1 × Θ2 × . . . × ΘN → R, where πi(θ1, . . . , θN )
gives the payment made by agent i when the reported
types are θ1, . . . , θN .
1 Though this follows standard game theory notation [16],
the fact that the agent has both a utility function and a type
is perhaps confusing. The types encode the various possible
preferences that the agent may turn out to have, and the
agent's type is not known to the aggregator. The utility
function is common knowledge, but because the agent's type
is a parameter in the agent's utility function, the aggregator
cannot know what the agent's utility is without knowing the
agent's type.
• A randomized mechanism with payments consists of
a distribution selection function p : Θ1 × Θ2 × . . . ×
ΘN → P(O), and for each agent i, a payment selection
function πi : Θ1 × Θ2 × . . . × ΘN → R.2
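For concreteness, one possible encoding of these objects follows; the dictionary-based representation, the type names, and the example entries are assumptions of this sketch, not the paper's notation.

```python
# Encoding Definition 3: a deterministic mechanism with payments maps each
# profile of reported types to an outcome and one payment per agent; a
# randomized mechanism maps each profile to a distribution over outcomes.
from typing import Dict, Tuple

TypeProfile = Tuple[str, ...]  # one reported type per agent

outcome_fn: Dict[TypeProfile, str] = {("low", "high"): "o1",
                                      ("low", "low"): "o0"}
payment_fn: Dict[TypeProfile, Tuple[float, ...]] = {("low", "high"): (2.0, 0.5),
                                                    ("low", "low"): (0.0, 0.0)}
# Randomized variant: per-profile outcome distributions in P(O), summing to 1.
dist_fn: Dict[TypeProfile, Dict[str, float]] = {("low", "high"): {"o0": 0.3,
                                                                  "o1": 0.7}}
```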
There are two types of constraint on the designer in
building the mechanism.
3.1 Individual rationality (IR) constraints
The first type of constraint is the following. The utility of
each agent has to be at least as great as the agent's fallback
utility, that is, the utility that the agent would receive if it
did not participate in the mechanism. Otherwise that agent
would not participate in the mechanism-and no agent's
participation can ever hurt the mechanism designer's
objective because at worst, the mechanism can ignore an agent
by pretending the agent is not there. (Furthermore, if no
such constraint applied, the designer could simply make the
agents pay an infinite amount.) This type of constraint is
called an IR (individual rationality) constraint. There are
three different possible IR constraints: ex ante, ex interim,
and ex post, depending on what the agent knows about its
own type and the others' types when deciding whether to
participate in the mechanism. Ex ante IR means that the
agent would participate if it knew nothing at all (not even
its own type). We will not study this concept in this paper.
Ex interim IR means that the agent would always
participate if it knew only its own type, but not those of the others.
Ex post IR means that the agent would always participate
even if it knew everybody's type. We will define the
latter two notions of IR formally. First, we need to formalize
the concept of the fallback outcome. We assume that each
agent"s fallback utility is zero for each one of its types. This
is without loss of generality because we can add a constant
term to an agent"s utility function (for a given type),
without affecting the decision-making behavior of that expected
utility maximizing agent [16].
Definition 4. In any automated mechanism design
setting with an IR constraint, there is a fallback outcome o0 ∈
O where, for any agent i and any type θi ∈ Θi, we have
ui(θi, o0) = 0. (Additionally, in the case of a self-interested
designer, g(o0) = 0.)
We can now define the notions of individual rationality.
Definition 5. Individual rationality (IR) is defined by:
• A deterministic mechanism is ex interim IR if for any
agent i, and any type θi ∈ Θi, we have
$E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i}[u_i(\theta_i, o(\theta_1,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\theta_N)] \geq 0$.
A randomized mechanism is ex interim IR if for any
agent i, and any type θi ∈ Θi, we have
$E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i} E_{o|\theta_1,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\theta_N)] \geq 0$.
• A deterministic mechanism is ex post IR if for any
agent i, and any type vector (θ1, . . . , θN ) ∈ Θ1 × . . . × ΘN , we have
$u_i(\theta_i, o(\theta_1,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\theta_N) \geq 0$.
A randomized mechanism is ex post IR if for any agent
i, and any type vector (θ1, . . . , θN ) ∈ Θ1 × . . . × ΘN , we have
$E_{o|\theta_1,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\theta_N)] \geq 0$.
The terms involving payments can be left out in the case
where payments are not possible.
2 We do not randomize over payments because as long as
the agents and the designer are risk neutral with respect to
payments, that is, their utility is linear in payments, there
is no reason to randomize over payments.
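As a sanity check, the single-agent special case of this definition is directly computable; the function and argument names in this sketch are mine, and the fallback utility of 0 follows Definition 4.

```python
# Ex post IR (Definition 5) for a single-agent deterministic mechanism with
# payments: truthful participation must never be worse than the fallback.
def is_ex_post_ir(types, u, o, pi):
    # u: (type, outcome) -> utility; o: type -> outcome; pi: type -> payment
    return all(u[(t, o[t])] - pi[t] >= 0 for t in types)
```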
3.2 Incentive compatibility (IC) constraints
The second type of constraint says that the agents should
never have an incentive to misreport their type (as justified
above by the revelation principle). For this type of
constraint, the two most common variants (or solution concepts)
are implementation in dominant strategies, and
implementation in Bayes-Nash equilibrium.
Definition 6. Given an automated mechanism design
setting, a mechanism is said to implement its outcome and
payment functions in dominant strategies if truthtelling is
always optimal even when the types reported by the other
agents are already known. Formally, for any agent i, any
type vector (θ1, . . . , θi, . . . , θN ) ∈ Θ1 × . . . × Θi × . . . × ΘN ,
and any alternative type report $\hat{\theta}_i \in \Theta_i$, in the case of
deterministic mechanisms we have
$u_i(\theta_i, o(\theta_1,\ldots,\theta_i,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\theta_i,\ldots,\theta_N) \geq u_i(\theta_i, o(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)$.
In the case of randomized mechanisms we have
$E_{o|\theta_1,\ldots,\theta_i,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\theta_i,\ldots,\theta_N)] \geq E_{o|\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)]$.
The terms involving payments can be left out in the case
where payments are not possible.
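In the single-agent special case, where the other agents' reports drop out, Definition 6 reduces to finitely many pairwise comparisons; this sketch (the names are my own) checks them directly.

```python
# Dominant-strategy IC (Definition 6), single agent: truthful reporting must
# weakly beat every alternative report for every possible true type.
def is_dominant_strategy_ic(types, u, o, pi):
    return all(u[(t, o[t])] - pi[t] >= u[(t, o[r])] - pi[r]
               for t in types for r in types)
```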
Thus, in dominant strategies implementation, truthtelling
is optimal regardless of what the other agents report. If it
is optimal only given that the other agents are truthful, and
given that one does not know the other agents' types, we
have implementation in Bayes-Nash equilibrium.
Definition 7. Given an automated mechanism design
setting, a mechanism is said to implement its outcome and
payment functions in Bayes-Nash equilibrium if truthtelling
is always optimal to an agent when that agent does not yet
know anything about the other agents' types, and the other
agents are telling the truth. Formally, for any agent i, any
type θi ∈ Θi, and any alternative type report $\hat{\theta}_i \in \Theta_i$, in the
case of deterministic mechanisms we have
$E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i}[u_i(\theta_i, o(\theta_1,\ldots,\theta_i,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\theta_i,\ldots,\theta_N)] \geq E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i}[u_i(\theta_i, o(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)) - \pi_i(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)]$.
In the case of randomized mechanisms we have
$E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i} E_{o|\theta_1,\ldots,\theta_i,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\theta_i,\ldots,\theta_N)] \geq E_{(\theta_1,\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_N)|\theta_i} E_{o|\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N}[u_i(\theta_i, o) - \pi_i(\theta_1,\ldots,\hat{\theta}_i,\ldots,\theta_N)]$.
The terms involving payments can be left out in the case
where payments are not possible.
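For two agents, with agent 2's type drawn from a prior gamma2 that this illustration assumes independent of agent 1's type, the expectation in Definition 7 can be written out directly; the sketch checks the constraint for agent 1 only, and all names are my own.

```python
# Bayes-Nash IC (Definition 7) for agent 1 in a two-agent deterministic
# mechanism with payments, assuming agent 2 reports truthfully.
def is_bayes_nash_ic_agent1(types1, types2, gamma2, u1, o, pi1):
    def expected_utility(true_type, report):
        return sum(gamma2[t2] * (u1[(true_type, o[(report, t2)])]
                                 - pi1[(report, t2)])
                   for t2 in types2)
    return all(expected_utility(t, t) >= expected_utility(t, r)
               for t in types1 for r in types1)
```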
3.3 Automated mechanism design
We can now define the computational problem we study.
Definition 8. (AUTOMATED-MECHANISM-DESIGN
(AMD)) We are given:
• an automated mechanism design setting,
• an IR notion (ex interim, ex post, or none),
• a solution concept (dominant strategies or Bayes-Nash),
• whether payments are possible,
• whether randomization is possible,
• (in the decision variant of the problem) a target value
G.
We are asked whether there exists a mechanism of the
specified kind (in terms of payments and randomization) that
satisfies both the IR notion and the solution concept, and
gives an expected value of at least G for the objective.
An interesting special case is the setting where there is
only one agent. In this case, the reporting agent always
knows everything there is to know about the other agents'
types-because there are no other agents. Since ex post and
ex interim IR only differ on what an agent is assumed to
know about other agents' types, the two IR concepts
coincide here. Also, because implementation in dominant
strategies and implementation in Bayes-Nash equilibrium only
differ on what an agent is assumed to know about other agents'
types, the two solution concepts coincide here. This
observation will prove to be a useful tool in proving hardness
results: if we prove computational hardness in the
single-agent setting, this immediately implies hardness for both
IR concepts, for both solution concepts, for any number of
agents.
4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD
In this section we demonstrate that it is NP-complete
to design a deterministic mechanism that maximizes the
expected sum of the payments collected from the agents.
We show that this problem is hard even in the single-agent
setting, thereby immediately showing it hard for both IR
concepts, for both solution concepts. To demonstrate
NP-hardness, we reduce from the MINSAT problem.
Definition 9 (MINSAT). We are given a formula φ
in conjunctive normal form, represented by a set of Boolean
variables V and a set of clauses C, and an integer K (K <
|C|). We are asked whether there exists an assignment to the
variables in V such that at most K clauses in φ are satisfied.
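For intuition, small MINSAT instances can be decided by brute force (exponential in |V|); the clause encoding below, with literals as (variable, sign) pairs, is an assumption of this sketch.

```python
# Brute-force MINSAT: is there an assignment satisfying at most K clauses?
from itertools import product

def minsat(variables, clauses, K):
    for bits in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        satisfied = sum(any(assignment[v] == sign for (v, sign) in clause)
                        for clause in clauses)
        if satisfied <= K:
            return True
    return False

# (v1 or v2) and (not v1): v1=True, v2=False satisfies only the first clause.
print(minsat(["v1", "v2"], [{("v1", True), ("v2", True)}, {("v1", False)}], 1))
```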
MINSAT was recently shown to be NP-complete [14]. We
can now present our result.
Theorem 1. Payment-maximizing deterministic AMD is
NP-complete, even for a single agent, even with a uniform
distribution over types.
Proof. It is easy to show that the problem is in NP.
To show NP-hardness, we reduce an arbitrary MINSAT
instance to the following single-agent payment-maximizing
deterministic AMD instance. Let the agent's type set be
Θ = {θc : c ∈ C} ∪ {θv : v ∈ V }, where C is the set of
clauses in the MINSAT instance, and V is the set of
variables. Let the probability distribution over these types be
uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈
C} ∪ {ol : l ∈ L}, where L is the set of literals, that is,
L = {+v : v ∈ V } ∪ {−v : v ∈ V }. Let the notation v(l) = v
denote that v is the variable corresponding to the literal l,
that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l
occurs in clause c. Then, let the agent's utility function
be given by u(θc, ol) = |Θ| + 1 for all l ∈ L with l ∈ c;
u(θc, ol) = 0 for all l ∈ L with l ∉ c; u(θc, oc) = |Θ| + 1;
u(θc, oc′) = 0 for all c′ ∈ C with c′ ≠ c; u(θv, ol) = |Θ| for all
l ∈ L with v(l) = v; u(θv, ol) = 0 for all l ∈ L with v(l) ≠ v;
u(θv, oc) = 0 for all c ∈ C. The goal of the AMD instance
is $G = |\Theta| + \frac{|C|-K}{|\Theta|}$, where K is the goal of the MINSAT
instance. We show the instances are equivalent. First, suppose
there is a solution to the MINSAT instance. Let the
assignment of truth values to the variables in this solution be given
by the function f : V → L (where v(f(v)) = v for all v ∈ V ).
Then, for every v ∈ V , let o(θv) = of(v) and π(θv) = |Θ|.
For every c ∈ C, let o(θc) = oc; let π(θc) = |Θ| + 1 if c
is not satisfied in the MINSAT solution, and π(θc) = |Θ|
if c is satisfied. It is straightforward to check that the IR
constraint is satisfied. We now check that the agent has no
incentive to misreport. If the agent's type is some θv, then
any other report will give it an outcome that is no better,
for a payment that is no less, so it has no incentive to
misreport. If the agent's type is some θc where c is a satisfied
clause, again, any other report will give it an outcome that
is no better, for a payment that is no less, so it has no
incentive to misreport. The final case to check is where the
agent's type is some θc where c is an unsatisfied clause. In
this case, we observe that for none of the types, reporting it
leads to an outcome ol for a literal l ∈ c, precisely because
the clause is not satisfied in the MINSAT instance. Because
also, no type besides θc leads to the outcome oc, reporting
any other type will give an outcome with utility 0, while still
forcing a payment of at least |Θ| from the agent. Clearly the
agent is better off reporting truthfully, for a total utility of
0. This establishes that the agent never has an incentive to
misreport. Finally, we show that the goal is reached. If s is
the number of satisfied clauses in the MINSAT solution (so
that s ≤ K), the expected payment from this mechanism
is $\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|} \geq \frac{|V||\Theta|+K|\Theta|+(|C|-K)(|\Theta|+1)}{|\Theta|} = |\Theta| + \frac{|C|-K}{|\Theta|} = G$. So there is a solution to the AMD
instance.
Now suppose there is a solution to the AMD instance,
given by an outcome function o and a payment function
π. First, suppose there is some v ∈ V such that o(θv) ∉
{o+v, o−v}. Then the utility that the agent derives from
the given outcome for this type is 0, and hence, by IR, no
payment can be extracted from the agent for this type.
Because, again by IR, the maximum payment that can be
extracted for any other type is |Θ| + 1, it follows that the
maximum expected payment that could be obtained is at
most $\frac{(|\Theta|-1)(|\Theta|+1)}{|\Theta|} < |\Theta| < G$, contradicting that this is a
solution to the AMD instance. It follows that in the solution
to the AMD instance, for every v ∈ V , o(θv) ∈ {o+v, o−v}.
We can interpret this as an assignment of truth values to
the variables: v is set to true if o(θv) = o+v, and to false
if o(θv) = o−v. We claim this assignment is a solution to
the MINSAT instance. By the IR constraint, the maximum
payment we can extract from any type θv is |Θ|. Because
there can be no incentives for the agent to report falsely, for
any clause c satisfied by the given assignment, the maximum
payment we can extract for the corresponding type θc is |Θ|.
(For if we extracted more from this type, the agent's utility
in this case would be less than 1; and if v is the variable
satisfying c in the assignment, so that o(θv) = ol where l occurs
in c, then the agent would be better off reporting θv instead
of the truthful report θc, to get an outcome worth |Θ|+1 to
it while having to pay at most |Θ|.) Finally, for any
unsatisfied clause c, by the IR constraint, the maximum payment
we can extract for the corresponding type θc is |Θ| + 1. It
follows that the expected payment from our mechanism is
at most $\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|}$, where s is the number of
satisfied clauses. Because our mechanism achieves the goal,
it follows that $\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|} \geq G$, which by simple
algebraic manipulations is equivalent to s ≤ K. So there is
a solution to the MINSAT instance.
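The construction in this proof is mechanical enough to write down directly; in the sketch below, the encodings of types, outcomes, and literals are assumptions of the illustration, not the paper's notation.

```python
# Build the Theorem 1 reduction: types theta_c and theta_v, outcomes o0, o_c,
# and o_l, utilities as in the proof, and goal G = |Theta| + (|C|-K)/|Theta|.
def build_amd_instance(variables, clauses, K):
    literals = [(v, s) for v in variables for s in (True, False)]
    types = [("clause", i) for i in range(len(clauses))] + \
            [("var", v) for v in variables]
    outcomes = ["o0"] + [("cl", i) for i in range(len(clauses))] + \
               [("lit", l) for l in literals]
    big = len(types)  # |Theta| = |C| + |V|
    u = {(t, o): 0 for t in types for o in outcomes}  # default utility 0
    for i, clause in enumerate(clauses):
        u[(("clause", i), ("cl", i))] = big + 1
        for l in clause:
            u[(("clause", i), ("lit", l))] = big + 1
    for v in variables:
        for s in (True, False):
            u[(("var", v), ("lit", (v, s)))] = big
    goal = big + (len(clauses) - K) / big
    return types, outcomes, u, goal
```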
Because payment-maximizing AMD is just the special case
of AMD for a self-interested designer where the designer has
no preferences over the outcome chosen, this immediately
implies hardness for the general case of AMD for a
self-interested designer where payments are possible. However,
it does not yet imply hardness for the special case where
payments are not possible. We will prove hardness in this
case in the next section.
5. SELF-INTERESTED DETERMINISTIC
AMD WITHOUT PAYMENTS IS HARD
In this section we demonstrate that it is NP-complete to
design a deterministic mechanism that maximizes the
expectation of the designer"s objective when payments are not
possible. We show that this problem is hard even in the
single-agent setting, thereby immediately showing it hard
for both IR concepts, for both solution concepts.
Theorem 2. Without payments, deterministic AMD for
a self-interested designer is NP-complete, even for a single
agent, even with a uniform distribution over types.
Proof. It is easy to show that the problem is in NP.
To show NP-hardness, we reduce an arbitrary MINSAT
instance to the following single-agent self-interested
deterministic AMD without payments instance. Let the agent's type
set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V }, where C is the
set of clauses in the MINSAT instance, and V is the set of
variables. Let the probability distribution over these types
be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈
C} ∪ {ol : l ∈ L} ∪ {o∗}, where L is the set of literals, that is,
L = {+v : v ∈ V } ∪ {−v : v ∈ V }. Let the notation v(l) = v
denote that v is the variable corresponding to the literal l,
that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l
occurs in clause c. Then, let the agent's utility function be
given by u(θc, ol) = 2 for all l ∈ L with l ∈ c; u(θc, ol) = −1
for all l ∈ L with l ∉ c; u(θc, oc) = 2; u(θc, oc′) = −1 for
all c′ ∈ C with c′ ≠ c; u(θc, o∗) = 1; u(θv, ol) = 1 for all
l ∈ L with v(l) = v; u(θv, ol) = −1 for all l ∈ L with
v(l) ≠ v; u(θv, oc) = −1 for all c ∈ C; u(θv, o∗) = −1. Let
the designer's objective function be given by g(o∗) = |Θ| + 1;
g(ol) = |Θ| for all l ∈ L; g(oc) = |Θ| for all c ∈ C. The goal
of the AMD instance is $G = |\Theta| + \frac{|C|-K}{|\Theta|}$, where K is the
goal of the MINSAT instance. We show the instances are
equivalent. First, suppose there is a solution to the MINSAT
instance. Let the assignment of truth values to the variables
in this solution be given by the function f : V → L (where
v(f(v)) = v for all v ∈ V ). Then, for every v ∈ V , let
o(θv) = of(v). For every c ∈ C that is satisfied in the
MINSAT solution, let o(θc) = oc; for every unsatisfied c ∈ C,
let o(θc) = o∗. It is straightforward to check that the IR
constraint is satisfied. We now check that the agent has no
incentive to misreport. If the agent's type is some θv, it is
getting the maximum utility for that type, so it has no
incentive to misreport. If the agent's type is some θc where c
is a satisfied clause, again, it is getting the maximum utility
for that type, so it has no incentive to misreport. The final
case to check is where the agent's type is some θc where c is
an unsatisfied clause. In this case, we observe that for none
of the types, reporting it leads to an outcome ol for a literal
l ∈ c, precisely because the clause is not satisfied in the
MINSAT instance. Because also, no type leads to the outcome
oc, there is no outcome that the mechanism ever selects that
would give the agent utility greater than 1 for type θc, and
hence the agent has no incentive to report falsely. This
establishes that the agent never has an incentive to misreport.
Finally, we show that the goal is reached. If s is the number
of satisfied clauses in the MINSAT solution (so that s ≤ K),
then the expected value of the designer's objective function
is $\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|} \geq \frac{|V||\Theta|+K|\Theta|+(|C|-K)(|\Theta|+1)}{|\Theta|} = |\Theta| + \frac{|C|-K}{|\Theta|} = G$. So there is a solution to the AMD
instance.
Now suppose there is a solution to the AMD instance,
given by an outcome function o. First, suppose there is
some v ∈ V such that o(θv) ∉ {o+v, o−v}. The only other
outcome that the mechanism is allowed to choose under the
IR constraint is o0. This has an objective value of 0, and
because the highest value the objective function ever takes
is |Θ| + 1, it follows that the maximum expected value of
the objective function that could be obtained is at most
$\frac{(|\Theta|-1)(|\Theta|+1)}{|\Theta|} < |\Theta| < G$, contradicting that this is a
solution to the AMD instance. It follows that in the solution
to the AMD instance, for every v ∈ V , o(θv) ∈ {o+v, o−v}.
We can interpret this as an assignment of truth values to
the variables: v is set to true if o(θv) = o+v, and to false
if o(θv) = o−v. We claim this assignment is a solution to
the MINSAT instance. By the above, for any type θv, the
value of the objective function in this mechanism will be
|Θ|. For any clause c satisfied by the given assignment,
the value of the objective function in the case where the
agent reports type θc will be at most |Θ|. (This is because
we cannot choose the outcome o∗ for such a type, as in
this case the agent would have an incentive to report θv
instead, where v is the variable satisfying c in the
assignment (so that o(θv) = ol where l occurs in c).) Finally,
for any unsatisfied clause c, the maximum value the
objective function can take in the case where the agent
reports type θc is |Θ| + 1, simply because this is the largest
value the function ever takes. It follows that the expected
value of the objective function for our mechanism is at most
$\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|}$, where s is the number of satisfied
clauses. Because our mechanism achieves the goal, it follows
that $\frac{|V||\Theta|+s|\Theta|+(|C|-s)(|\Theta|+1)}{|\Theta|} \geq G$, which by simple algebraic
manipulations is equivalent to s ≤ K. So there is a solution
to the MINSAT instance.
Both of our hardness results relied on the constraint that
the mechanism should be deterministic. In the next section,
we show that the hardness of design disappears when we
allow for randomization in the mechanism.
6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY
We now show how allowing for randomization over the
outcomes makes the problem of self-interested AMD tractable
through linear programming, for any constant number of
agents.
Theorem 3. Self-interested randomized AMD with a
constant number of agents is solvable in polynomial time by
linear programming, both with and without payments, both for
ex post and ex interim IR, and both for implementation in
dominant strategies and for implementation in Bayes-Nash
equilibrium-even if the types are correlated.
Proof. Because linear programs can be solved in
polynomial time [13], all we need to show is that the number of
variables and equations in our program is polynomial for any
constant number of agents-that is, exponential only in N.
Throughout, for purposes of determining the size of the
linear program, let $T = \max_i\{|\Theta_i|\}$. The variables of our linear
program will be the probabilities (p(θ1, θ2, . . . , θN ))(o) (at
most $T^N |O|$ variables) and the payments πi(θ1, θ2, . . . , θN )
(at most $N T^N$ variables). (We show the linear program for
the case where payments are possible; the case without
payments is easily obtained from this by simply omitting all the
payment variables in the program, or by adding additional
constraints forcing the payments to be 0.)
First, we show the IR constraints. For ex post IR, we add
the following (at most $N T^N$) constraints to the LP:
• For every i ∈ {1, 2, . . . , N}, and for every (θ1, θ2, . . . , θN )
∈ Θ1 × Θ2 × . . . × ΘN , we add
$(\sum_{o \in O} (p(\theta_1, \theta_2, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \theta_2, \ldots, \theta_N) \geq 0$.
For ex interim IR, we add the following (at most NT)
constraints to the LP:
• For every i ∈ {1, 2, . . . , N}, for every θi ∈ Θi, we add
$\sum_{\theta_1,\ldots,\theta_N} \gamma(\theta_1,\ldots,\theta_N | \theta_i)\big((\sum_{o \in O} (p(\theta_1, \theta_2, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \theta_2, \ldots, \theta_N)\big) \geq 0$.
Now, we show the solution concept constraints. For
implementation in dominant strategies, we add the following
(at most $N T^{N+1}$) constraints to the LP:
• For every i ∈ {1, 2, . . . , N}, for every
(θ1, θ2, . . . , θi, . . . , θN ) ∈ Θ1 × Θ2 × . . . × ΘN , and for every
alternative type report $\hat{\theta}_i \in \Theta_i$, we add the constraint
$(\sum_{o \in O} (p(\theta_1, \ldots, \theta_i, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \ldots, \theta_i, \ldots, \theta_N) \geq (\sum_{o \in O} (p(\theta_1, \ldots, \hat{\theta}_i, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \ldots, \hat{\theta}_i, \ldots, \theta_N)$.
Finally, for implementation in Bayes-Nash equilibrium, we
add the following (at most $N T^2$) constraints to the LP:
• For every i ∈ {1, 2, . . . , N}, for every θi ∈ Θi, and for
every alternative type report $\hat{\theta}_i \in \Theta_i$, we add the constraint
$\sum_{\theta_1,\ldots,\theta_N} \gamma(\theta_1,\ldots,\theta_N | \theta_i)\big((\sum_{o \in O} (p(\theta_1, \ldots, \theta_i, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \ldots, \theta_i, \ldots, \theta_N)\big) \geq \sum_{\theta_1,\ldots,\theta_N} \gamma(\theta_1,\ldots,\theta_N | \theta_i)\big((\sum_{o \in O} (p(\theta_1, \ldots, \hat{\theta}_i, \ldots, \theta_N))(o)\, u(\theta_i, o)) - \pi_i(\theta_1, \ldots, \hat{\theta}_i, \ldots, \theta_N)\big)$.
All that is left to do is to give the expression the designer
is seeking to maximize, which is:
• $\sum_{\theta_1,\ldots,\theta_N} \gamma(\theta_1,\ldots,\theta_N)\big((\sum_{o \in O} (p(\theta_1, \theta_2, \ldots, \theta_N))(o)\, g(o)) + \sum_{i=1}^{N} \pi_i(\theta_1, \theta_2, \ldots, \theta_N)\big)$.
As we indicated, the number of variables and constraints
is exponential only in N, and hence the linear program is of
polynomial size for constant numbers of agents. Thus the
problem is solvable in polynomial time.
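To make the construction concrete, here is a runnable single-agent sketch of this linear program (with one agent, the IR notions and solution concepts coincide). The types, outcomes, utilities, and prior are invented; setting g = 0 makes it payment maximizing; the probability-simplex constraints on each p(·), left implicit in the proof, are added explicitly; and numpy and scipy are assumed available.

```python
# Solving single-agent self-interested randomized AMD (Theorem 3) as an LP
# with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

types = ["t1", "t2"]
outcomes = ["o0", "a", "b"]          # o0 is the zero-utility fallback outcome
gamma = {"t1": 0.5, "t2": 0.5}       # uniform prior over types
u = {("t1", "o0"): 0.0, ("t1", "a"): 3.0, ("t1", "b"): 1.0,
     ("t2", "o0"): 0.0, ("t2", "a"): 1.0, ("t2", "b"): 2.0}
g = {o: 0.0 for o in outcomes}       # g = 0: payment-maximizing designer

T, O = len(types), len(outcomes)
n = T * O + T                        # variables: all p(theta)(o), then pi(theta)
def p_idx(ti, oi): return ti * O + oi
def pi_idx(ti): return T * O + ti

# Objective: maximize sum_theta gamma(theta)(sum_o p*g(o) + pi); negate for linprog.
c = np.zeros(n)
for ti, t in enumerate(types):
    for oi, o in enumerate(outcomes):
        c[p_idx(ti, oi)] -= gamma[t] * g[o]
    c[pi_idx(ti)] -= gamma[t]

A_ub, b_ub = [], []
for ti, t in enumerate(types):       # IR: sum_o p(t)(o)u(t,o) - pi(t) >= 0
    row = np.zeros(n)
    for oi, o in enumerate(outcomes):
        row[p_idx(ti, oi)] = -u[(t, o)]
    row[pi_idx(ti)] = 1.0
    A_ub.append(row); b_ub.append(0.0)
for ti, t in enumerate(types):       # IC: truth weakly beats any misreport r
    for ri in range(T):
        if ri == ti:
            continue
        row = np.zeros(n)
        for oi, o in enumerate(outcomes):
            row[p_idx(ti, oi)] -= u[(t, o)]
            row[p_idx(ri, oi)] += u[(t, o)]
        row[pi_idx(ti)] += 1.0
        row[pi_idx(ri)] -= 1.0
        A_ub.append(row); b_ub.append(0.0)

A_eq, b_eq = [], []
for ti in range(T):                  # each p(theta) is a distribution over O
    row = np.zeros(n)
    row[ti * O:(ti + 1) * O] = 1.0
    A_eq.append(row); b_eq.append(1.0)

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub, A_eq=np.vstack(A_eq),
              b_eq=b_eq, bounds=[(0, 1)] * (T * O) + [(None, None)] * T)
print("optimal expected objective:", -res.fun)
```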
7. IMPLICATIONS FOR AN OPTIMAL
COMBINATORIAL AUCTION DESIGN
PROBLEM
In this section, we will demonstrate some interesting
consequences of the problem of automated mechanism design
for a self-interested designer on designing optimal
combinatorial auctions.
Consider a combinatorial auction with a set S of items
for sale. For any bundle B ⊆ S, let ui(θi, B) be bidder
i"s utility for receiving bundle B when the bidder"s type is
θi. The optimal auction design problem is to specify the
rules of the auction so as to maximize expected revenue to
the auctioneer. (By the revelation principle, without loss
of generality, we can assume the auction is truthful.) The
optimal auction design problem is solved for the case of a
single item by the famous Myerson auction [18]. However,
designing optimal auctions in combinatorial auctions is a
recognized open research problem [3, 25]. The problem is
open even if there are only two items for sale. (The
two-item case with a very special form of complementarity and
no substitutability has been solved recently [1].)
Suppose we have free disposal-items can be thrown away
at no cost. Also, suppose that the bidders" preferences have
the following structure: whenever a bidder receives a bundle
of items, the bidder's utility for that bundle is determined
by the best item in the bundle only. (We emphasize that
which item is the best is allowed to depend on the bidder's
type.)
Definition 10. Bidder i is said to have best-only
preferences over bundles of items if there exists a function vi :
Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S,
$u_i(\theta_i, B) = \max_{s \in B} v_i(\theta_i, s)$.
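Definition 10 translates directly into code; the snippet below (names and values are invented) also adopts the convention, natural under free disposal, that an empty bundle is worth 0.

```python
# Best-only preferences: a bundle is worth its single best item.
def best_only_utility(v, theta, bundle):
    # v maps (type, item) -> value; empty bundles are worth 0
    return max((v[(theta, s)] for s in bundle), default=0)

v = {("t", "s1"): 3.0, ("t", "s2"): 5.0}
print(best_only_utility(v, "t", {"s1", "s2"}))  # 5.0
```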
We make the following useful observation in this setting:
there is no sense in awarding a bidder more than one item.
The reason is that if the bidder is reporting truthfully, taking
all but the highest valued item away from the bidder will
not hurt the bidder; and, by free disposal, doing so can only
reduce the incentive for this bidder to falsely report this
type, when the bidder actually has another type.
We now show that the problem of designing a
deterministic optimal auction here is NP-complete, by a reduction
from the payment-maximizing AMD problem!
Theorem 4. Given an optimal combinatorial auction
design problem under best-only preferences (given by a set of
items S and for each bidder i, a finite type space Θi and a
function vi : Θi × S → R such that for any θi ∈ Θi, for any
B ⊆ S, $u_i(\theta_i, B) = \max_{s \in B} v_i(\theta_i, s)$), designing the
optimal deterministic auction is NP-complete, even for a single
bidder with a uniform distribution over types.
Proof. The problem is in NP because we can
nondeterministically generate an allocation rule, and then set the
payments using linear programming.
To show NP-hardness, we reduce an arbitrary
payment-maximizing deterministic AMD instance, with a single agent
and a uniform distribution over types, to the following
optimal combinatorial auction design problem instance with
a single bidder with best-only preferences. For every
outcome o ∈ O in the AMD instance (besides the outcome o0),
let there be one item so ∈ S. Let the type space be the
same, and let v(θi, so) = ui(θi, o) (where u is as specified in
the AMD instance). Let the expected revenue target value
be the same in both instances. We show the instances are
equivalent.
First suppose there exists a solution to the AMD instance,
given by an outcome function and a payment function. Then,
if the AMD solution chooses outcome o for a type, in the
optimal auction solution, allocate {so} to the bidder for this
type. (Unless o = o0, in which case we allocate {} to the
bidder.) Let the payment functions be the same in both
instances. Then, the utility that an agent receives for
reporting a type (given the true type) in either solution is the
same, so we have incentive compatibility in the optimal
auction solution. Moreover, because the type distribution and
the payment function are the same, the expected revenue to
the auctioneer/designer is the same. It follows that there
exists a solution to the optimal auction design instance.
Now suppose there exists a solution to the optimal auction
design instance. By the at-most-one-item observation, we
can assume without loss of generality that the solution never
allocates more than one item. Then, if the optimal auction
solution allocates item so to the bidder for a type, in the
AMD solution, let the mechanism choose outcome o for that
type. If the optimal auction solution allocates nothing to the
bidder for a type, in the AMD solution, let the mechanism
choose outcome o0 for that type. Let the payment functions
be the same. Then, the utility that an agent receives for
reporting a type (given the true type) in either solution is
the same, so we have incentive compatibility in the AMD
solution. Moreover, because the type distribution and the
payment function are the same, the expected revenue to the
designer/auctioneer is the same. It follows that there exists
a solution to the AMD instance.
Fortunately, we can also carry through the easiness result
for randomized mechanisms to this combinatorial auction
setting-giving us one of the few known polynomial-time
algorithms for an optimal combinatorial auction design
problem.
Theorem 5. Given an optimal combinatorial auction
design problem under best-only preferences (given by a set of
items S and for each bidder i, a finite type space Θi and a
function vi : Θi × S → R such that for any θi ∈ Θi, for
any B ⊆ S, $u_i(\theta_i, B) = \max_{s \in B} v_i(\theta_i, s)$), if the number of
bidders is a constant k, then the optimal randomized
auction can be designed in polynomial time. (For any IC and
IR constraints.)
Proof. By the at-most-one-item observation, we can
without loss of generality restrict ourselves to allocations where
each bidder receives at most one item. There are fewer than
$(|S| + 1)^k$ such allocations-that is, a polynomial number
of allocations. Because we can list the outcomes explicitly,
we can simply solve this as a payment-maximizing AMD
instance, with linear programming.
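The enumeration step can be spelled out as follows; in this sketch (my encoding), None stands for a bidder receiving nothing, and no item may be awarded twice.

```python
# Listing the at-most-one-item allocations used in the proof of Theorem 5:
# there are at most (|S| + 1)**k of them, a polynomial number for constant k.
from itertools import product

def one_item_allocations(items, k):
    return [alloc for alloc in product([None] + list(items), repeat=k)
            if all(a is None or alloc.count(a) == 1 for a in alloc)]

print(len(one_item_allocations(["s1", "s2"], 2)))  # 7 feasible profiles out of 9
```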
8. RELATED RESEARCH ON
COMPLEXITY IN MECHANISM DESIGN
There has been considerable recent interest in mechanism
design in computer science. Some of it has focused on
issues of computational complexity, but most of that work has
strived toward designing mechanisms that are easy to
execute (e.g. [20, 15, 19, 9, 12]), rather than studying the
complexity of designing the mechanism. The closest piece of
earlier work studied the complexity of automated mechanism
design by a benevolent designer [5, 6]. Roughgarden has
studied the complexity of designing a good network
topology for agents that selfishly choose the links they use [21].
This is related to mechanism design, but differs significantly
in that the designer only has restricted control over the rules
of the game because there is no party that can impose the
outcome (or side payments). Also, there is no explicit
reporting of preferences.
9. CONCLUSIONS AND FUTURE
RESEARCH
Often, an outcome must be chosen on the basis of the
preferences reported by a group of agents. The key difficulty
is that the agents may report their preferences insincerely
to make the chosen outcome more favorable to themselves.
Mechanism design is the art of designing the rules of the
game so that the agents are motivated to report their
preferences truthfully, and a desirable outcome is chosen. In a
recently emerging approach-called automated mechanism
design-a mechanism is computed for the specific preference
aggregation setting at hand. This has several advantages,
but the downside is that the mechanism design optimization
problem needs to be solved anew each time. Unlike earlier
work on automated mechanism design that studied a
benevolent designer, in this paper we studied automated
mechanism design problems where the designer is
self-interested-a setting much more relevant for electronic commerce. In
this setting, the center cares only about which outcome is
chosen and what payments are made to it. The reason that
the agents" preferences are relevant is that the center is
constrained to making each agent at least as well off as the agent
would have been had it not participated in the mechanism.
In this setting, we showed that designing an optimal
deterministic mechanism is NP-complete in two important
special cases: when the center is interested only in the payments
made to it, and when payments are not possible and the
center is interested only in the outcome chosen. These
hardness results imply hardness in all more general automated
mechanism design settings with a self-interested designer.
The hardness results apply whether the individual
rationality (participation) constraints are applied ex interim or ex
post, and whether the solution concept is dominant
strategies implementation or Bayes-Nash equilibrium
implementation. We then showed that allowing randomization in the
mechanism makes the design problem in all these settings
computationally easy. Finally, we showed that the
payment-maximizing AMD problem is closely related to an interesting
variant of the optimal (revenue-maximizing) combinatorial
auction design problem, where the bidders have best-only
preferences. We showed that here, too, designing an
optimal deterministic mechanism is NP-complete even with one
agent, but designing an optimal randomized mechanism is
easy.
Future research includes studying automated mechanism
design with a self-interested designer in more restricted
settings such as auctions (where the designer's objective may
include preferences about which bidder should receive the
good-as well as payments). We also want to study the
complexity of automated mechanism design in settings where the
outcome and type spaces have special structure so they can
be represented more concisely. Finally, we plan to assemble
a data set of real-world mechanism design problems-both
historical and current-and apply automated mechanism
design to those problems.
10. REFERENCES
[1] M. Armstrong. Optimal multi-object auctions. Review
of Economic Studies, 67:455-481, 2000.
[2] K. Arrow. The property rights doctrine and demand
revelation under incomplete information. In
M. Boskin, editor, Economics and human welfare.
New York Academic Press, 1979.
[3] C. Avery and T. Hendershott. Bundling and optimal
auctions of multiple products. Review of Economic
Studies, 67:483-497, 2000.
[4] E. H. Clarke. Multipart pricing of public goods. Public
Choice, 11:17-33, 1971.
[5] V. Conitzer and T. Sandholm. Complexity of
mechanism design. In Proceedings of the 18th Annual
Conference on Uncertainty in Artificial Intelligence
(UAI-02), pages 103-110, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Automated mechanism
design: Complexity results stemming from the
single-agent setting. In Proceedings of the 5th
International Conference on Electronic Commerce
(ICEC-03), pages 17-24, Pittsburgh, PA, USA, 2003.
[7] V. Conitzer and T. Sandholm. Computational
criticisms of the revelation principle. In Proceedings of
the ACM Conference on Electronic Commerce
(ACM-EC), New York, NY, 2004. Short paper.
Full-length version appeared in the AAMAS-03
workshop on Agent-Mediated Electronic Commerce
(AMEC).
[8] C. d"Aspremont and L. A. G´erard-Varet. Incentives
and incomplete information. Journal of Public
Economics, 11:25-45, 1979.
[9] J. Feigenbaum, C. Papadimitriou, and S. Shenker.
Sharing the cost of multicast transmissions. Journal
of Computer and System Sciences, 63:21-41, 2001.
Early version in Proceedings of the Annual ACM
Symposium on Theory of Computing (STOC), 2000.
[10] A. Gibbard. Manipulation of voting schemes.
Econometrica, 41:587-602, 1973.
[11] T. Groves. Incentives in teams. Econometrica,
41:617-631, 1973.
[12] J. Hershberger and S. Suri. Vickrey prices and
shortest paths: What is an edge worth? In
Proceedings of the Annual Symposium on Foundations
of Computer Science (FOCS), 2001.
[13] L. Khachiyan. A polynomial algorithm in linear
programming. Soviet Math. Doklady, 20:191-194,
1979.
[14] R. Kohli, R. Krishnamurthi, and P. Mirchandani. The
minimum satisfiability problem. SIAM Journal of
Discrete Mathematics, 7(2):275-283, 1994.
[15] D. Lehmann, L. I. O'Callaghan, and Y. Shoham.
Truth revelation in rapid, approximately efficient
combinatorial auctions. Journal of the ACM,
49(5):577-602, 2002. Early version appeared in
Proceedings of the ACM Conference on Electronic
Commerce (ACM-EC), 1999.
[16] A. Mas-Colell, M. Whinston, and J. R. Green.
Microeconomic Theory. Oxford University Press, 1995.
[17] E. S. Maskin and J. Riley. Optimal multi-unit
auctions. In F. Hahn, editor, The Economics of
Missing Markets, Information, and Games,
chapter 14, pages 312-335. Clarendon Press, Oxford,
1989.
[18] R. Myerson. Optimal auction design. Mathematics of
Operations Research, 6:58-73, 1981.
[19] N. Nisan and A. Ronen. Computationally feasible
VCG mechanisms. In Proceedings of the ACM
Conference on Electronic Commerce (ACM-EC),
pages 242-252, Minneapolis, MN, 2000.
[20] N. Nisan and A. Ronen. Algorithmic mechanism
design. Games and Economic Behavior, 35:166-196,
2001. Early version in Proceedings of the Annual ACM
Symposium on Theory of Computing (STOC), 1999.
[21] T. Roughgarden. Designing networks for selfish users
is hard. In Proceedings of the Annual Symposium on
Foundations of Computer Science (FOCS), 2001.
[22] T. Sandholm. Issues in computational Vickrey
auctions. International Journal of Electronic
Commerce, 4(3):107-129, 2000. Special Issue on
Applying Intelligent Agents for Electronic Commerce.
A short, early version appeared at the Second
International Conference on Multi-Agent Systems
(ICMAS), pages 299-306, 1996.
[23] M. A. Satterthwaite. Strategy-proofness and Arrow's
conditions: existence and correspondence theorems for
voting procedures and social welfare functions.
Journal of Economic Theory, 10:187-217, 1975.
[24] W. Vickrey. Counterspeculation, auctions, and
competitive sealed tenders. Journal of Finance,
16:8-37, 1961.
[25] R. V. Vohra. Research problems in combinatorial
auctions. Mimeo, version Oct. 29, 2001.
| preference aggregator;desirable outcome;statistical knowledge;automated mechanism design;revenue maximization;nonmanipulable mechanism;payment maximizing;complementarity;combinatorial auction;minsat;fallback outcome;manipulability;individual rationality;automate mechanism design;self-interested amd;classical mechanism;mechanism design |
train_J-71 | A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation | I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as a hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both. Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an additional variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions. | 1. INTRODUCTION
A wide variety of financial and wagering mechanisms have
been developed to support hedging (i.e., insuring) against
exposure to uncertain events and/or speculative trading on
uncertain events. The dominant mechanism used in
financial circles is the continuous double auction (CDA), or in
some cases the CDA with market maker (CDAwMM). The
primary mechanism used for sports wagering is a bookie or
bookmaker, who essentially acts exactly as a market maker.
Horse racing and jai alai wagering traditionally employ the
pari-mutuel mechanism. Though there is no formal or
logical separation between financial trading and wagering, the
two endeavors are socially considered distinct. Recently,
there has been a move to employ CDAs or CDAwMMs for
all types of wagering, including on sports, horse racing,
political events, world news, and many other uncertain events,
and a simultaneous and opposite trend to use bookie systems
for betting on financial markets. These trends highlight the
interchangeable nature of the mechanisms and further blur
the line between investing and betting. Some companies at
the forefront of these movements are growing exponentially,
with some industry observers declaring the onset of a
revolution in the wagering business.1
1 http://www.wired.com/news/ebiz/0,1272,61051,00.html
Each mechanism has pros and cons for the market
institution and the participating traders. A CDA only matches
willing traders, and so poses no risk whatsoever for the
market institution. But a CDA can suffer from illiquidity in the
form of huge bid-ask spreads or even empty bid-ask queues if
trading is light and thus markets are thin. A successful CDA
must overcome a chicken-and-egg problem: traders are
attracted to liquid markets, but liquid markets require a large
number of traders. A CDAwMM and the similar bookie
mechanism have built-in liquidity, but at a cost: the market
maker itself, usually affiliated with the market institution, is
exposed to significant risk of large monetary losses. Both the
CDA and CDAwMM offer incentives for traders to leverage
information continuously as soon as that information
becomes available. As a result, prices are known to capture
the current state of information exceptionally well.
Pari-mutuel markets effectively have infinite liquidity:
anyone can place a bet on any outcome at any time, without
the need for a matching offer from another bettor or a
market maker. Pari-mutuel markets also involve no risk for the
market institution, since they only redistribute money from
losing wagers to winning wagers. However, pari-mutuel
1 http://www.wired.com/news/ebiz/0,1272,61051,00.html
markets are not suitable for situations where information arrives
over time, since there is a strong disincentive for placing bets
until either (1) all information is revealed, or (2) the market
is about to close. For this reason, pari-mutuel prices prior
to the market's close cannot be considered a reflection of
current information. Pari-mutuel market participants
cannot buy low and sell high: they cannot cash out gains (or
limit losses) before the event outcome is revealed. Because
the process whereby information arrives continuously over
time is the rule rather than the exception, the applicability
of the standard pari-mutuel mechanism is questionable in a
large number of settings.
In this paper, I develop a new mechanism suitable for
hedging, speculating, and wagering, called a dynamic
parimutuel market (DPM). A DPM can be thought of as a
hybrid between a pari-mutuel market and a CDA. A DPM is
indeed pari-mutuel in nature, meaning that it acts only to
redistribute money from some traders to others, and so
exposes the market institution to no volatility (no risk). A
constant, pre-determined subsidy is required to start the
market. The subsidy can in principle be arbitrarily small and
might conceivably come from traders (via antes or
transaction fees) rather than the market institution, though a
nontrivial outside subsidy may actually encourage trading
and information aggregation. A DPM has the infinite
liquidity of a pari-mutuel market: traders can always purchase
shares in any outcome at any time, at some price
automatically set by the market institution. A DPM is also able
to react to and incorporate information arriving over time,
like a CDA. The market institution changes the price for
particular outcomes based on the current state of wagering.
If a particular outcome receives a relatively large number of
wagers, its price increases; if an outcome receives relatively
few wagers, its price decreases. Prices are computed
automatically using a price function, which can differ depending
on what properties are desired. The price function
determines the instantaneous price per share for an infinitesimal
quantity of shares; the total cost for purchasing n shares
is computed as the integral of the price function from 0 to
n. The complexity of the price function can be hidden from
traders by communicating only the ask prices for various lots
of shares (e.g., lots of 100 shares), as is common practice in
CDAs and CDAwMMs. DPM prices do reflect current
information, and traders can cash out in an aftermarket to lock
in gains or limit losses before the event outcome is revealed.
While there is always a market maker willing to accept buy
orders, there is not a market maker accepting sell orders,
and thus no guaranteed liquidity for selling: instead, selling
is accomplished via a standard CDA mechanism. Traders
can always hedge-sell by purchasing the opposite outcome
than they already own.
2. BACKGROUND AND RELATED WORK
2.1 Pari-mutuel markets
Pari-mutuel markets are common at horse races [1, 22, 24,
25, 26], dog races, and jai alai games. In a pari-mutuel
market people place wagers on which of two or more mutually
exclusive and exhaustive outcomes will occur at some time
in the future. After the true outcome becomes known, all
of the money that is lost by those who bet on the incorrect
outcome is redistributed to those who bet on the correct
outcome, in direct proportion to the amount they wagered.
More formally, if there are k mutually exclusive and
exhaustive outcomes (e.g., k horses, exactly one of which will win),
and M1, M2, . . . , Mk dollars are bet on each outcome, and
outcome i occurs, then everyone who bet on an outcome
j ≠ i loses their wager, while everyone who bet on outcome
i receives ∑_{j=1}^{k} Mj/Mi dollars for every dollar they wagered.
That is, every dollar wagered on i receives an equal share of
all money wagered. An equivalent way to think about the
redistribution rule is that every dollar wagered on i is
refunded, then receives an equal share of all remaining money
bet on the losing outcomes, or ∑_{j≠i} Mj/Mi dollars.
In practice, the market institution (e.g., the racetrack)
first takes a certain percent of the total amount wagered,
usually about 20% in the United States, then redistributes
whatever money remains to the winners in proportion to
their amount bet.
Consider a simple example with two outcomes, A and B.
The outcomes are mutually exclusive and exhaustive,
meaning that Pr(A ∧ B) = 0 and Pr(A) + Pr(B) = 1. Suppose
$800 is bet on A and $200 on B. Now suppose that A
occurs (e.g., horse A wins the race). People who wagered on
B lose their money, or $200 in total. People who wagered
on A win and each receives a proportional share of the total
$1000 wagered (ignoring fees). Specifically, each $1 wager
on A entitles its owner to a 1/800 share of the $1000, or $1.25.
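As a quick mechanical check, the redistribution rule is a one-liner to implement. A minimal Python sketch; the function name and structure are my own, for illustration:

def parimutuel_payoff_per_dollar(amounts, winner):
    # Payoff per $1 wagered on the winning outcome, ignoring fees.
    # amounts: total dollars wagered on each outcome.
    # winner: index of the outcome that occurred.
    return sum(amounts) / amounts[winner]

# The example from the text: $800 on A, $200 on B, and A occurs.
assert parimutuel_payoff_per_dollar([800.0, 200.0], winner=0) == 1.25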
Every dollar bet in a pari-mutuel market has an equal
payoff, regardless of when the wager was placed or how much
money was invested in the various outcomes at the time the
wager was placed. The only state that matters is the final
state: the final amounts wagered on all the outcomes when
the market closes, and the identity of the correct outcome.
As a result, there is a disincentive to place a wager early
if there is any chance that new information might become
available. Moreover, there are no guarantees about the
payoff rate of a particular bet, except that it will be nonnegative
if the correct outcome is chosen. Payoff rates can fluctuate
arbitrarily until the market closes. So a second reason not
to bet early is to wait to get a better sense of the final
payout rates. This is in contrast to CDAs and CDAwMMs, like
the stock market, where incentives exist to invest as soon as
new information is revealed.
Pari-mutuel bettors may be allowed to switch their
chosen outcome, or even cancel their bet, prior to the market's
close. However, they cannot cash out of the market early,
to either lock in gains or limit losses, if new information
favors one outcome over another, as is possible in a CDA
or a CDAwMM. If bettors can cancel or change their bets,
then an aftermarket to sell existing wagers is not sensible:
every dollar wagered is worth exactly $1 up until the
market's close - no one would buy at greater than $1 and no one
would sell at less than $1. Pari-mutuel bettors must wait
until the outcome is revealed to realize any profit or loss.
Unlike a CDA, in a pari-mutuel market, anyone can place
a wager of any amount at any time-there is in a sense
infinite liquidity for buying. A CDAwMM also has built-in
liquidity, but at the cost of significant risk for the market
maker. In a pari-mutuel market, since money is only
redistributed among bettors, the market institution itself has no
risk. The main drawback of a pari-mutuel market is that
it is useful only for capturing the value of an uncertain
asset at some instant in time. It is ill-suited for situations
where information arrives over time, continuously updating
the estimated value of the asset-situations common in
almost all trading and wagering scenarios. There is no notion
of buying low and selling high, as occurs in a CDA, where
buying when few others are buying (and the price is low) is
rewarded more than buying when many others are buying
(and the price is high). Perhaps for this reason, in most
dynamic environments, financial mechanisms like the CDA
that can react in real-time to changing information are more
typically employed to facilitate speculating and hedging.
Since a pari-mutuel market can estimate the value of an
asset at a single instant in time, a repeated pari-mutuel
market, where distinct pari-mutuel markets are run at
consecutive intervals, could in principle capture changing
information dynamics. But running multiple consecutive
markets would likely thin out trading in each individual market.
Also, in each individual pari-mutuel market, the incentives
would still be to wait to bet until just before the ending
time of that particular market. This last problem might
be mitigated by instituting a random stopping rule for each
individual pari-mutuel market.
In laboratory experiments, pari-mutuel markets have shown
a remarkable ability to aggregate and disseminate
information dispersed among traders, at least for a single snapshot
in time [17]. A similar ability has been recognized at real
racetracks [1, 22, 24, 25, 26].
2.2 Financial markets
In the financial world, wagering on the outcomes of
uncertain future propositions is also common. The typical market
mechanism used is the continuous double auction (CDA).
The term securities market in economics and finance
generically encompasses a number of markets where speculating
on uncertain events is possible. Examples include stock
markets like NASDAQ, options markets like the CBOE [13],
futures markets like the CME [21], other derivatives markets,
insurance markets, political stock markets [6, 7], idea futures
markets [12], decision markets [10] and even market games
[3, 15, 16]. Securities markets generally have an economic
and social value beyond facilitating speculation or wagering:
they allow traders to hedge risk, or to insure against
undesirable outcomes. So if a particular outcome has disutility
for a trader, he or she can mitigate the risk by wagering
for the outcome, to arrange for compensation in case the
outcome occurs. In this sense, buying automobile insurance
is effectively a bet that an accident or other covered event
will occur. Similarly, buying a put option, which is useful
as a hedge for a stockholder, is a bet that the underlying
stock will go down. In practice, agents engage in a mixture
of hedging and speculating, and there is no clear dividing
line between the two [14]. Like pari-mutuel markets, often
prices in financial markets are excellent information
aggregators, yielding very accurate forecasts of future events [5,
18, 19].
A CDA constantly matches orders to buy an asset with
orders to sell. If at any time one party is willing to buy
one unit of the asset at a bid price of p_bid, while another
party is willing to sell one unit of the asset at an ask price of
p_ask, and p_bid is greater than or equal to p_ask, then the two
parties transact (at some price between p_bid and p_ask). If
the highest bid price is less than the lowest ask price, then no
transactions occur. In a CDA, the bid and ask prices rapidly
change as new information arrives and traders reassess the
value of the asset. Since the auctioneer only matches willing
bidders, the auctioneer takes on no risk. However, buyers
can only buy as many shares as sellers are willing to sell; for
any transaction to occur, there must be a counterparty on
the other side willing to accept the trade.
As a result, when few traders participate in a CDA, it may
become illiquid, meaning that not much trading activity
occurs. The spread between the highest bid price and the
lowest ask price may be very large, or one or both queues may
be completely empty, discouraging trading.2
One way to
induce liquidity is to provide a market maker who is willing
to accept a large number of buy and sell orders at particular
prices. We call this mechanism a CDA with market maker
(CDAwMM).3
Conceptually, the market maker is just like
any other trader, but typically is willing to accept a much
larger volume of trades. The market maker may be a
person, or may be an automated algorithm. Adding a market
maker to the system increases liquidity, but exposes the
market maker to risk. Now, instead of only matching trades, the
system actually takes on risk of its own, and depending on
what happens in the future, may lose considerable amounts
of money.
2.3 Wagering markets
The typical Las Vegas bookmaker or oddsmaker functions
much like a market maker in a CDA. In this case, the
market institution (the book or house) sets the odds,4
initially
according to expert opinion, and later in response to the
relative level of betting on the various outcomes. Unlike in a
pari-mutuel environment, whenever a wager is placed with
a bookmaker, the odds or terms for that bet are fixed at the
time of the bet. The bookmaker profits by offering different
odds for the two sides of the bet, essentially defining a
bid-ask spread. While odds may change in response to changing
information, any bets made at previously set odds remain
in effect according to the odds at the time of the bet; this
is precisely in analogy to a CDAwMM. One difference
between a bookmaker and a market maker is that the former
usually operates in a take it or leave it mode: bettors
cannot place their own limit orders on a common queue, they
can in effect only place market orders at prices defined by
the bookmaker. Still, the bookmaker certainly reacts to
bettor demand. Like a market maker, the bookmaker exposes
itself to significant risk. Sports betting markets have also
been shown to provide high quality aggregate forecasts [4,
9, 23].
2.4 Market scoring rule
Hanson's [11] market scoring rule (MSR) is a new
mechanism for hedging and speculating that shares some
properties in common with a DPM. Like a DPM, an MSR can be
conceptualized as an automated market maker always
willing to accept a trade on any event at some price. An MSR
requires a patron to subsidize the market. The patron's final
loss is variable, and thus technically implies a degree of risk,
though the maximum loss is bounded. An MSR maintains
a probability distribution over all events. At any time any
2 Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example.
3 A very clear example of a CDAwMM is the interactive betting market on http://WSEX.com.
4 Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.
trader who believes the probabilities are wrong can change
any part of the distribution by accepting a lottery ticket that
pays off according to a scoring rule (e.g., the logarithmic
scoring rule) [27], as long as that trader also agrees to pay
off the most recent person to change the distribution. In the
limit of a single trader, the mechanism behaves like a
scoring rule, suitable for polling a single agent for its probability
distribution. In the limit of many traders, it produces a
combined estimate. Since the market essentially always has a
complete set of posted prices for all possible outcomes, the
mechanism avoids the problem of thin markets or illiquidity.
An MSR is not pari-mutuel in nature, as the patron in
general injects a variable amount of money into the system. An
MSR provides a two-sided automated market maker, while
a DPM provides a one-sided automated market maker. In
an MSR, the vector of payoffs across outcomes is fixed at
the time of the trade, while in a DPM, the vector of payoffs
across outcomes depends both on the state of wagering at
the time of the trade and the state of wagering at the
market"s close. While the mechanisms are quite different-and
so trader acceptance and incentives may strongly differ-the
properties and motivations of DPMs and MSRs are quite
similar.
Hanson shows how MSRs are especially well suited for
allowing bets on a combinatorial number of outcomes. The
patron's payment for subsidizing trading on all 2^n possible
combinations of n events is no larger than the sum of
subsidizing the n event marginals independently. The mechanism
was planned for use in the Policy Analysis Market (PAM), a
futures market in Middle East related outcomes and funded
by DARPA [20], until a media firestorm killed the project.5
As of this writing, the founders of PAM were considering
reopening under private control.6
3. A DYNAMIC PARI-MUTUEL MARKET
3.1 High-level description
In contrast to a standard pari-mutuel market, where each
dollar always buys an equal share of the payoff, in a DPM
each dollar buys a variable share in the payoff depending on
the state of wagering at the time of purchase. So a wager
on A at a time when most others are wagering on B offers a
greater possible profit than a wager on A when most others
are also wagering on A.
A natural way to communicate the changing payoff of a
bet is to say that, at any given time, a certain amount of
money will buy a certain number of shares in one outcome
the other. Purchasing a share entitles its owner to an equal
stake in the winning pot should the chosen outcome occur.
The payoff is variable, because when few people are betting
on an outcome, shares will generally be cheaper than at a
time when many people are betting on that outcome. There
is no pre-determined limit on the number of shares: new
shares can be continually generated as trading proceeds.
For simplicity, all analyses in this paper consider the
binary outcome case; generalizing to multiple discrete
outcomes should be straightforward. Denote the two outcomes
A and B. The outcomes are mutually exclusive and exhaustive.
5 See http://hanson.gmu.edu/policyanalysismarket.html for more information, or http://dpennock.com/pam.html for commentary.
6 http://www.policyanalysismarket.com/
Denote the instantaneous price per share of A as
p1 and the price per share of B as p2. Denote the payoffs
per share as P1 and P2, respectively. These four numbers,
p1, p2, P1, P2 are the key numbers that traders must track
and understand. Note that the price is set at the time of the
wager; the payoff per share is finalized only after the event
outcome is revealed.
At any time, a trader can purchase an infinitesimal
quantity of shares of A at price p1 (and similarly for B). However,
since the price changes continuously as shares are purchased,
the cost of buying n shares is computed as the integral of a
price function from 0 to n. The use of continuous functions
and integrals can be hidden from traders by aggregating the
automated market maker's sell orders into discrete lots of,
say, 100 shares each. These ask orders can be
automatically entered into the system by the market institution, so
that traders interact with what looks like a more familiar
CDA; we examine this interface issue in more detail below
in Section 4.2.
For our analysis, we introduce the following additional
notation. Denote M1 as the total amount of money wagered
on A, M2 as the total amount of money wagered on B,
T = M1 + M2 as the total amount of money wagered on
both sides, N1 as the total number of shares purchased of
A, and N2 as the total number of shares purchased of B.
There are many ways to formulate the price function.
Several natural price functions are outlined below; each is
motivated as the unique solution to a particular constraint on
price dynamics.
3.2 Advantages and disadvantages
To my knowledge, a DPM is the only known mechanism
for hedging and speculating that exhibits all three of the
following properties: (1) guaranteed liquidity, (2) no risk
for the market institution, and (3) continuous incorporation
of information. A standard pari-mutuel fails (3). A CDA
fails (1). A CDAwMM, the bookmaker mechanism, and an
MSR all fail (2). Even though technically an MSR exposes
its patron to risk (i.e., a variable future payoff), the
patron's maximum loss is bounded, so the distinction between
a DPM and an MSR in terms of these three properties is
more technical than practical.
DPM traders can cash out of the market early, just like
stock market traders, to lock in a profit or limit a loss, an
action that is simply not possible in a standard pari-mutuel.
A DPM also has some drawbacks. The payoff for a
wager depends both on the price at the time of the trade, and
on the final payoff per share at the market's close. This
contrasts with the CDA variants, where the payoff vector
across possible future outcomes is fixed at the time of the
trade. So a trader's strategic optimization problem is
complicated by the need to predict the final values of P1 and
P2. If P changes according to a random walk, then traders
can take the current P as an unbiased estimate of the
final P, greatly decreasing the complexity of their
optimization. If P does not change according to a random walk,
the mechanism still has utility as a mechanism for hedging
and speculating, though optimization may be difficult, and
determining a measure of the market"s aggregate opinion of
the probabilities of A and B may be difficult. We discuss
the implications of random walk behavior further below in
Section 4.1 in the discussion surrounding Assumption 3.
A second drawback of a DPM is its one-sided nature.
While an automated market maker always stands ready to
accept buy orders, there is no corresponding market maker
to accept sell orders. Traders must sell to each other
using a standard CDA mechanism, for example by posting an
ask order at a price at or below the market maker"s current
ask price. Traders can also always hedge-sell by
purchasing shares in the opposite outcome from the market maker,
thereby hedging their bet if not fully liquidating it.
3.3 Redistribution rule
In a standard pari-mutuel market, payoffs can be
computed in either of two equivalent ways: (1) each winning $1
wager receives a refund of the initial $1 paid, plus an equal
share of all losing wagers, or (2) each winning $1 wager
receives an equal share of all wagers, winning or losing.
Because each dollar always earns an equal share of the payoff,
the two formulations are precisely the same:
$1 + Mlose/Mwin = (Mwin + Mlose)/Mwin.
In a dynamic pari-mutuel market, because each dollar is
not equally weighted, the two formulations are distinct, and
lead to significantly different price functions and
mechanisms, each with different potentially desirable properties.
We consider each case in turn. The next section analyzes
case (1), where only losing money is redistributed. Section 5
examines case (2), where all money is redistributed.
4. DPM I: LOSING MONEY
REDISTRIBUTED
For the case where the initial payments on winning bets
are refunded, and only losing money is redistributed, the
respective payoffs per share are simply:
P1 = M2/N1, P2 = M1/N2.
So, if A occurs, shareholders of A receive all of their
initial payment back, plus P1 dollars per share owned, while
shareholders of B lose all money wagered. Similarly, if B
occurs, shareholders of B receive all of their initial payment
back, plus P2 dollars per share owned, while shareholders of
A lose all money wagered.
Without loss of generality, I will analyze the market from
the perspective of A, deriving prices and payoffs for A only.
The equations for B are symmetric.
The trader's per-share expected value for purchasing an
infinitesimal quantity ε of shares of A is
E[ε shares]/ε = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1
             = Pr(A) · E[M2/N1 | A] − (1 − Pr(A)) · p1,
where ε is an infinitesimal quantity of shares of A, Pr(A)
is the trader's belief in the probability of A, and p1 is the
instantaneous price per share of A for an infinitesimal
quantity of shares. E[P1|A] is the trader's expectation of the
payoff per share of A after the market closes and given that
A occurs. This is a subtle point. The value of P1 does not
matter if B occurs, since in this case shares of A are
worthless, and the current value of P1 does not necessarily matter
as this may change as trading continues. So, in order to
determine the expected value of shares of A, the trader must
estimate what he or she expects the payoff per share to be
in the end (after the market closes) if A occurs.
If E[ε shares]/ε > 0, a risk-neutral trader should purchase
shares of A. How many shares? This depends on the price
function determining p1. In general, p1 increases as more
shares are purchased. The risk-neutral trader should
continue purchasing shares until E[ε shares]/ε = 0. (A
risk-averse trader will generally stop purchasing shares before
driving E[ε shares]/ε all the way to zero.) Assuming
risk-neutrality, the trader's optimization problem is to choose a
number of shares n ≥ 0 of A to purchase, in order to
maximize
E[n shares] = Pr(A) · n · E[P1|A] − (1 − Pr(A)) · ∫_0^n p1(n) dn.   (1)
It's easy to see that the same value of n can be solved for by
finding the number of shares required to drive E[ε shares]/ε
to zero. That is, find n ≥ 0 satisfying
0 = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1(n),
if such an n exists; otherwise n = 0.
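Operationally, once a concrete price function p1(n) is fixed (several are derived in Section 4.2 below), this stopping condition can be solved numerically. A hedged Python sketch, assuming only that the price function is increasing and that 0 < Pr(A) < 1; the bracket n_max and tolerance are arbitrary illustrative choices, not from the paper:

def optimal_shares(pr_A, exp_payoff, price, n_max=1e6, tol=1e-9):
    # Find n >= 0 with pr_A * exp_payoff = (1 - pr_A) * price(n),
    # i.e. where the marginal expected value of one more share is zero.
    target = pr_A * exp_payoff / (1.0 - pr_A)
    if price(0.0) >= target:
        return 0.0  # already unattractive at the current price
    lo, hi = 0.0, n_max  # assumes price(n_max) >= target
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if price(mid) < target:
            lo = mid
        else:
            hi = mid
    return lo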
4.1 Market probability
As traders who believe that E[ε shares of A]/ε > 0
purchase shares of A and traders who believe that E[ε shares
of B]/ε > 0 purchase shares of B, the prices p1 and p2
change according to a price function, as prescribed below.
The current prices in a sense reflect the market's opinion as
a whole of the relative probabilities of A and B.
Assuming an efficient marketplace, the market as a whole
considers E[ε shares]/ε = 0, since the mechanism is a zero-sum
game. For example, if market participants in aggregate felt
that E[ε shares]/ε > 0, then there would be net demand for
A, driving up the price of A until E[ε shares]/ε = 0. Define
MPr(A) to be the market probability of A, or the
probability of A inferred by assuming that E[ε shares]/ε = 0. We
can consider MPr(A) to be the aggregate probability of A
as judged by the market as a whole. MPr(A) is the solution
to
0 = MPr(A) · E[P1|A] − (1 − MPr(A)) · p1.
Solving, we get
MPr(A) = p1/(p1 + E[P1|A]).   (2)
At this point we make a critical assumption in order to
greatly simplify the analysis; we assume that
E[P1|A] = P1. (3)
That is, we assume that the current value for the payoff per
share of A is the same as the expected final value of the
payoff per share of A given that A occurs. This is certainly true
for the last (infinitesimal) wager before the market closes.
It's not obvious, however, that the assumption is true well
before the market"s close. Basically, we are assuming that
the value of P1 moves according to an unbiased random
walk: the current value of P1 is the best expectation of
its future value. I conjecture that there are reasonable
market efficiency conditions under which assumption (3) is true,
though I have not been able to prove that it arises naturally
from rational trading. We examine scenarios below in which
174
assumption (3) seems especially plausible. Nonetheless, the
assumption affects our analysis only. Regardless of whether
(3) is true, each price function derived below implies a
well-defined zero-sum game in which traders can play. If traders
can assume that (3) is true, then their optimization
problem (1) is greatly simplified; however, optimizing (1) does
not depend on the assumption, and traders can still optimize
by strategically projecting the final expected payoff in
whatever complicated way they desire. So, the utility of DPM for
hedging and speculating does not necessarily hinge on the
truth of assumption (3). On the other hand, the ability to
easily infer an aggregate market consensus probability from
market prices does depend on (3).
4.2 Price functions
A variety of price functions seem reasonable, each
exhibiting various properties, and implying differing market
probabilities.
4.2.1 Price function I: Price of A equals payoff of B
One natural price function to consider is to set the price
per share of A equal to the payoff per share of B, and set
the price per share of B equal to the payoff per share of A.
That is,
p1 = P2, p2 = P1.   (4)
Enforcing this relationship reduces the dimensionality of
the system from four to two, simplifying the interface: traders
need only track two numbers instead of four. The
relationship makes sense, since new information supporting A
should encourage purchasing of shares A, driving up both
the price of A and the payoff of B, and driving down the
price of B and the payoff of A. In this setting, assumption
(3) seems especially reasonable, since if an efficient market
hypothesis leads prices to follow a random walk, then payoffs
must also follow a random walk.
The constraints (4) lead to the following derivation of the
market probability:
MPr(A) · P1 = MPr(B) · p1
MPr(A) · P1 = MPr(B) · P2
MPr(A)/MPr(B) = P2/P1 = (M1/N2)/(M2/N1) = M1N1/(M2N2)
MPr(A) = M1N1/(M1N1 + M2N2)   (5)
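Equation (5) reads the market's aggregate probability directly off the four tracked quantities; a one-line Python sketch (the name is illustrative, not from the paper):

def market_probability_A(M1, M2, N1, N2):
    # Equation (5): implied market probability of outcome A.
    return (M1 * N1) / (M1 * N1 + M2 * N2)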
The constraints (4) specify the instantaneous
relationship between payoff and price. From this, we can derive
how prices change when (non-infinitesimal) shares are
purchased. Let n be the number of shares purchased and let m
be the amount of money spent purchasing n shares. Note
that p1 = dm/dn, the instantaneous price per share, and
m = ∫_0^n p1(n) dn. Substituting into equation (4), we get:
p1 = P2
dm/dn = (M1 + m)/N2
dm/(M1 + m) = dn/N2
∫ dm/(M1 + m) = ∫ dn/N2
ln(M1 + m) = n/N2 + C
m = M1 [e^(n/N2) − 1]   (6)
Equation 6 gives the cost of purchasing n shares. The
instantaneous price per share as a function of n is
p1(n) = dm/dn = (M1/N2) e^(n/N2).   (7)
Note that p1(0) = M1/N2 = P2 as required. The derivation
of the price function p2(n) for B is analogous and the results
are symmetric.
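The closed forms (6) and (7) are simple to implement. A Python sketch of a wager on A under this price function; the function names and seed values are my own, for illustration:

import math

def cost_of_shares(n, M1, N2):
    # Equation (6): money required to buy n shares of A.
    return M1 * (math.exp(n / N2) - 1.0)

def price_per_share(n, M1, N2):
    # Equation (7): instantaneous price of A after n shares are bought.
    return (M1 / N2) * math.exp(n / N2)

# A purchase updates the state: the money spent is added to M1 and the
# shares to N1, which also raises the payoff per share of B, as in (4).
M1, M2, N1, N2 = 10.0, 10.0, 10.0, 10.0  # arbitrary seed wagers
m = cost_of_shares(5.0, M1, N2)
M1, N1 = M1 + m, N1 + 5.0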
The notion of buying infinitesimal shares, or
integrating costs over a continuous function, is probably foreign
to most traders. A more standard interface can be
implemented by discretizing the costs into round lots of shares,
for example lots of 100 shares. Then ask orders of 100
shares each at the appropriate price can be automatically
placed by the market institution. For example, the
market institution can place an ask order for 100 shares at
price m(100)/100, another ask order for 100 shares at price
(m(200)−m(100))/100, a third ask for 100 shares at (m(300)−
m(200))/100, etc. In this way, the market looks more
familiar to traders, like a typical CDA with a number of ask
orders at various prices automatically available. A trader
buying less than 100 shares would pay a bit more than if
the true cost were computed using (6), but the discretized
interface would probably be more intuitive and transparent
to the majority of traders.
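A sketch of this discretization, given any cumulative cost function m(n) such as (6); the lot size and number of lots shown are arbitrary illustrative choices:

def ask_ladder(cost, lot=100, num_lots=5):
    # Average per-share ask price for each successive lot of shares,
    # as the market institution would post them on the queue.
    return [(cost((i + 1) * lot) - cost(i * lot)) / lot
            for i in range(num_lots)]

# e.g., ask_ladder(lambda n: cost_of_shares(n, M1=100.0, N2=100.0)),
# reusing cost_of_shares from the previous sketch.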
The above equations assume that all money that comes in
is eventually returned or redistributed. In other words, the
mechanism is a zero sum game, and the market institution
takes no portion of the money. This could be generalized so
that the market institution always takes a certain amount,
or a certain percent, or a certain amount per transaction, or
a certain percent per transaction, before money is returned
or redistributed.
Finally, note that the above price function is undefined
when the amount bet or the number of shares are zero. So
the system must begin with some positive amount on both
sides, and some positive number of shares outstanding on
both sides. These initial amounts can be arbitrarily small in
principle, but the size of the initial subsidy may affect the
incentives of traders to participate. Also, the smaller the
initial amounts, the more each new dollar affects the prices.
The initialization amounts could be funded as a subsidy from
the market institution or a patron, which I'll call a seed
wager, or from a portion of the fees charged, which I'll call
an ante wager.
4.2.2 Price function II: Price of A proportional to
money on A
A second price function can be derived by requiring the
ratio of prices to be equal to the ratio of money wagered.
That is,
p1/p2 = M1/M2.   (8)
In other words, the price of A is proportional to the amount
of money wagered on A, and similarly for B. This seems like
a particularly natural way to set the price, since the more
money that is wagered on one side, the cheaper becomes a
share on the other side, in exactly the same proportion.
Using Equation 8, along with (2) and (3), we can derive
the implied market probability:
M1/M2 = p1/p2 = [MPr(A)/MPr(B) · M2/N1] / [MPr(B)/MPr(A) · M1/N2]
              = (MPr(A))^2/(MPr(B))^2 · M2N2/(M1N1)
(MPr(A))^2/(MPr(B))^2 = (M1)^2 N1/((M2)^2 N2)
MPr(A)/MPr(B) = M1√N1/(M2√N2)
MPr(A) = M1√N1/(M1√N1 + M2√N2)   (9)
We can solve for the instantaneous price as follows:
p1 = MPr(A)/MPr(B) · P1 = [M1√N1/(M2√N2)] · M2/N1 = M1/√(N1N2)   (10)
Working from the above instantaneous price, we can
derive the implied cost function m as a function of the number
n of shares purchased as follows:
dm/dn = (M1 + m)/(√(N1 + n) √N2)
∫ dm/(M1 + m) = ∫ dn/(√(N1 + n) √N2)
ln(M1 + m) = (2/N2) [(N1 + n)N2]^(1/2) + C
m = M1 [e^(2√((N1+n)/N2) − 2√(N1/N2)) − 1].   (11)
From this we get the price function:
p1(n) = dm/dn = (M1/√((N1 + n)N2)) e^(2√((N1+n)/N2) − 2√(N1/N2)).   (12)
Note that, as required, p1(0) = M1/√(N1N2), and p1(0)/p2(0)
= M1/M2. If one uses the above price function, then the
market dynamics will be such that the ratio of the
(instantaneous) prices of A and B always equals the ratio of the
amounts wagered on A and B, which seems fairly natural.
Note that, as before, the mechanism can be modified to
collect transaction fees of some kind. Also note that seed or
ante wagers are required to initialize the system.
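For completeness, a Python sketch implementing the closed forms (11) and (12) directly; the function names are my own:

import math

def cost_of_shares_ii(n, M1, N1, N2):
    # Equation (11): cost of n shares of A under price function II.
    expo = 2.0 * math.sqrt((N1 + n) / N2) - 2.0 * math.sqrt(N1 / N2)
    return M1 * (math.exp(expo) - 1.0)

def price_per_share_ii(n, M1, N1, N2):
    # Equation (12): instantaneous price of A after n shares are bought.
    expo = 2.0 * math.sqrt((N1 + n) / N2) - 2.0 * math.sqrt(N1 / N2)
    return M1 / math.sqrt((N1 + n) * N2) * math.exp(expo)

# Sanity check against the text: price_per_share_ii(0, M1, N1, N2)
# equals M1 / sqrt(N1 * N2), as required.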
5. DPM II: ALL MONEY REDISTRIBUTED
Above we examined the policy of refunding winning
wagers and redistributing only losing wagers. In this section
we consider the second policy mentioned in Section 3.3: all
money from all wagers is collected and redistributed to
winning wagers.
For the case where all money is redistributed, the
respective payoffs per share are:
P1 = (M1 + M2)/N1 = T/N1, P2 = (M1 + M2)/N2 = T/N2,
where T = M1 + M2 is the total amount of money wagered
on both sides. So, if A occurs, shareholders of A lose their
initial price paid, but receive P1 dollars per share owned;
shareholders of B simply lose all money wagered. Similarly,
if B occurs, shareholders of B lose their initial price paid,
but receive P2 dollars per share owned; shareholders of A
lose all money wagered.
In this case, the trader's per-share expected value for
purchasing an infinitesimal quantity ε of shares of A is
E[ε shares]/ε = Pr(A) · E[P1|A] − p1.   (13)
A risk-neutral trader optimizes by choosing a number of
shares n ≥ 0 of A to purchase, in order to maximize
E[n shares] = Pr(A) · n · E[P1|A] − ∫_0^n p1(n) dn
            = Pr(A) · n · E[P1|A] − m.   (14)
The same value of n can be solved for by finding the number
of shares required to drive E[ε shares]/ε to zero. That is,
find n ≥ 0 satisfying
0 = Pr(A) · E[P1|A] − p1(n),
if such an n exists; otherwise n = 0.
5.1 Market probability
In this case MPr(A), the aggregate probability of A as
judged by the market as a whole, is the solution to
0 = MPr(A) · E[P1|A] − p1.
Solving, we get
MPr(A) = p1/E[P1|A].   (15)
As before, we make the simplifying assumption (3) that
the expected final payoff per share equals the current payoff
per share. The assumption is critical for our analysis, but
may not be required for a practical implementation.
5.2 Price functions
For the case where all money is distributed, the
constraints (4) that keep the price of A equal to the payoff of B,
and vice versa, do not lead to the derivation of a coherent
price function.
A reasonable price function can be derived from the
constraint (8) employed in Section 4.2.2, where we require that
the ratio of prices to be equal to the ratio of money wagered.
That is, p1/p2 = M1/M2. In other words, the price of A is
proportional to the amount of money wagered on A, and
similarly for B.
Using Equations 3, 8, and 15 we can derive the implied
market probability:
M1/M2 = p1/p2 = MPr(A)/MPr(B) · (T/N1) · (N2/T) = MPr(A)/MPr(B) · N2/N1
MPr(A)/MPr(B) = M1N1/(M2N2)
MPr(A) = M1N1/(M1N1 + M2N2)   (16)
Interestingly, this is the same market probability derived in
Section 4.2.1 for the case of losing-money redistribution with
the constraints that the price of A equal the payoff of B and
vice versa.
The instantaneous price per share for an infinitesimal
quantity of shares is:
p1 = ((M1)^2 + M1M2)/(M1N1 + M2N2) = (M1 + M2)/(N1 + (M2/M1)N2).
Working from the above instantaneous price, we can
derive the number of shares n that can be purchased for m
dollars, as follows:
dm/dn = (M1 + M2 + m)/(N1 + n + (M2/(M1 + m))N2)
dn/dm = (N1 + n + (M2/(M1 + m))N2)/(M1 + M2 + m)   (17)
· · ·
n = m(N1 − N2)/T + (N2(T + m)/M2) ln[T(M1 + m)/(M1(T + m))].
Note that we solved for n(m) rather than m(n). I could not
find a closed-form solution for m(n), as was derived for the
two other cases above. Still, n(m) can be used to determine
how many shares can be purchased for m dollars, and the
inverse function can be approximated to any degree of precision
numerically. From n(m) we can also compute the price function:
p1(m) = dm/dn = (M1 + m)M2T/denom,   (18)
where
denom = (M1 + m)M2N1 + (M2 − m)M2N2 + T(M1 + m)N2 ln[T(M1 + m)/(M1(T + m))].
Note that, as required, p1(0)/p2(0) = M1/M2. If one uses
the above price function, then the market dynamics will be
such that the ratio of the (instantaneous) prices of A and B
always equals the ratio of the amounts wagered on A and
B.
This price function has another desirable property: it acts
such that the expected value of wagering $1 on A and
simultaneously wagering $1 on B equals zero, assuming (3). That
is, E[$1 of A + $1 of B] = 0. The derivation is omitted.
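Since only n(m) is available in closed form here, the inverse m(n) can be recovered numerically; n(m) is increasing in m, so bisection suffices. A Python sketch (the search bracket and tolerance are arbitrary illustrative choices):

import math

def shares_for_money(m, M1, M2, N1, N2):
    # Closed-form n(m) for DPM II with p1/p2 = M1/M2.
    T = M1 + M2
    return (m * (N1 - N2) / T
            + (N2 * (T + m) / M2)
            * math.log(T * (M1 + m) / (M1 * (T + m))))

def money_for_shares(n, M1, M2, N1, N2, hi=1e9, tol=1e-9):
    # Invert n(m) by bisection; assumes the answer lies below hi.
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if shares_for_money(mid, M1, M2, N1, N2) < n:
            lo = mid
        else:
            hi = mid
    return lo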
5.3 Comparing DPM I and II
The main advantage of refunding winning wagers (DPM
I) is that every bet on the winning outcome is guaranteed
to at least break even. The main disadvantage of
refunding winning wagers is that shares are not homogeneous: each
share of A, for example, is actually composed of two distinct
parts: (1) the refund, or a lottery ticket that pays $p if A
occurs, where p is the price paid per share, and (2) one share
of the final payoff ($P1) if A occurs. This complicates the
implementation of an aftermarket to cash out of the market
early, which we will examine below in Section 7. When all
money is redistributed (DPM II), shares are homogeneous:
each share entitles its owner to an equal slice of the final
payoff. Because shares are homogeneous, the
implementation of an aftermarket is straightforward, as we shall see in
Section 7. On the other hand, because initial prices paid
are not refunded for winning bets, there is a chance that, if
prices swing wildly enough, a wager on the correct outcome
might actually lose money. Traders must be aware that if
they buy in at an excessively high price that later tumbles
allowing many others to get in at a much lower price, they
may lose money in the end regardless of the outcome. From
informal experiments, I don't believe this eventuality would
be common, but nonetheless it requires care in
communicating to traders the possible risks. One potential fix would be
for the market maker to keep track of when the price is going
too low, endangering an investor on the correct outcome. At
this point, the market maker could artificially stop lowering
the price. Sell orders in the aftermarket might still come in
below the market maker"s price, but in this way the system
could ensure that every wager on the correct outcome at
least breaks even.
6. OTHER VARIATIONS
A simple ascending price function would set p1 = αM1
and p2 = αM2, where α > 0. In this case, prices would only
go up. For the case of all money being redistributed, this
would eliminate the possibility of losing money on a wager on
the correct outcome. Even though the market maker's price
only rises, the going price may fall well below the market
maker's price, as ask orders are placed in the aftermarket.
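Following the same integration method used above, the constraint p1 = αM1 implies dm/dn = α(M1 + m), giving the cost function below. This derivation is mine, following the paper's method; it is not stated in the text:

import math

def ascending_cost(n, M1, alpha):
    # Cost of n shares of A when p1 = alpha * (money currently on A),
    # obtained by integrating dm/dn = alpha * (M1 + m).
    return M1 * (math.exp(alpha * n) - 1.0)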
I have derived price functions for several other cases, using
the same methodology above. Each price function may have
its own desirable properties, but it's not clear which is best,
or even that a single best method exists. Further analyses
and, more importantly, empirical investigations are required
to answer these questions.
7. AFTERMARKETS
A key advantage of DPM over a standard pari-mutuel
market is the ability to cash out of the market before it
closes, in order to take a profit or limit a loss. This is
accomplished by allowing traders to place ask orders on the
same queue as the market maker. So traders can sell the
shares that they purchased at or below the price set by the
market maker. Or traders can place a limit sell order at any
price. Buyers will purchase any existing shares for sale at
the lower prices first, before purchasing new shares from the
market maker.
7.1 Aftermarket for DPM II
For the second main case explored above, where all money
is redistributed, allowing an aftermarket is simple. In fact,
aftermarket may be a poor descriptor: buying and selling
are both fully integrated into the same mechanism. Every
share is worth precisely the same amount, so traders can
simply place ask orders on the same queue as the market maker
in order to sell their shares. New buyers will accept the
lowest ask price, whether it comes from the market maker or
another trader. In this way, traders can cash out early and
walk away with their current profit or loss, assuming they
can find a willing buyer.
7.2 Aftermarket for DPM I
When winning wagers are refunded and only losing wagers
are redistributed, each share is potentially worth a different
amount, depending on how much was paid for it, so it is not
as simple a matter to set up an aftermarket. However, an
aftermarket is still possible. In fact, much of the complexity
can be hidden from traders, so it looks nearly as simple as
placing a sell order on the queue.
In this case shares are not homogeneous: each share of A
is actually composed of two distinct parts: (1) the refund of
p · 1A dollars, and (2) the payoff of P1 · 1A dollars, where p
is the per-share price paid and 1A is the indicator function
equalling 1 if A occurs, and 0 otherwise. One can
imagine running two separate aftermarkets where people can sell
these two respective components. However, it is possible to
automate the two aftermarkets, by automatically bundling
them together in the correct ratio and selling them in the
central DPM. In this way, traders can cash out by placing
sell orders on the same queue as the DPM market maker,
effectively hiding the complexity of explicitly having two
separate aftermarkets. The bundling mechanism works as
follows. Suppose the current price for 1 share of A is p1. A
buyer agrees to purchase the share at p1. The buyer pays
p1 dollars and receives p1 · 1A + P1 · 1A dollars. If there is
enough inventory in the aftermarkets, the buyer's share is
constructed by bundling together p1 · 1A from the first
aftermarket, and P1 · 1A from the second aftermarket. The seller
in the first aftermarket receives p1 · MPr(A) dollars, and the
seller in the second aftermarket receives p1 · MPr(B) dollars.
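A small Python sketch of the split just described, showing how a buyer's payment p1 divides between the two aftermarket sellers; the names are my own, and the split of the payoff component relies on assumption (3) together with (2):

def split_bundle_payment(p1, mpr_A):
    # The refund component (p1 * 1_A) is worth p1 * MPr(A); by (2),
    # under assumption (3), the payoff component (P1 * 1_A) is worth
    # the remainder, p1 * MPr(B) = p1 * (1 - MPr(A)).
    refund_seller = p1 * mpr_A
    payoff_seller = p1 * (1.0 - mpr_A)
    return refund_seller, payoff_seller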
7.3 Pseudo aftermarket for DPM I
There is an alternative pseudo aftermarket that's
possible for the case of DPM I that does not require bundling.
Consider a share of A purchased for $5. The share is
composed of $5·1A and $P1 ·1A. Now suppose the current price
has moved from $5 to $10 per share and the trader wants to
cash out at a profit. The trader can sell 1/2 share at market
price (1/2 share for $5), receiving all of the initial $5
investment back, and retaining 1/2 share of A. The 1/2 share is
worth either some positive amount, or nothing, depending
on the outcome and the final payoff. So the trader is left
with shares worth a positive expected value and all of his or
her initial investment. The trader has essentially cashed out
and locked in his or her gains. Now suppose instead that
the price moves downward, from $5 to $2 per share. The
trader decides to limit his or her loss by selling the share for
$2. The buyer gets the 1 share plus $2·1A (the buyer's price
refunded). The trader (seller) gets the $2 plus what remains
of the original price refunded, or $3 · 1A. The trader's loss
is now limited to $3 at most instead of $5. If A occurs, the
trader breaks even; if B occurs, the trader loses $3.
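The arithmetic of this example is easy to check mechanically; a tiny Python sketch of the cash-out-at-a-profit case (illustrative names):

def lock_in_gains(shares, price_paid, price_now):
    # Sell just enough at price_now to recoup the full initial outlay,
    # keeping the remainder as a riskless position.
    sold = shares * price_paid / price_now
    return sold, shares - sold  # (shares sold, shares retained)

# The text's example: 1 share bought at $5, price now $10.
assert lock_in_gains(1.0, 5.0, 10.0) == (0.5, 0.5)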
Also note that, in either DPM formulation, traders can
always hedge sell by buying the opposite outcome without
the need for any type of aftermarket.
8. CONCLUSIONS
I have presented a new market mechanism for wagering
on, or hedging against, a future uncertain event, called a
dynamic pari-mutuel market (DPM). The mechanism
combines the infinite liquidity and risk-free nature of a
parimutuel market with the dynamic nature of a CDA, making
it suitable for continuous information aggregation. To my
knowledge, all existing mechanisms-including the standard
pari-mutuel market, the CDA, the CDAwMM, the bookie
mechanism, and the MSR-exhibit at most two of the three
properties. An MSR is the closest to a DPM in terms of
these properties, if not in terms of mechanics. Given some
natural constraints on price dynamics, I have derived in
closed form the implied price functions, which encode how
prices change continuously as shares are purchased. The
interface for traders looks much like the familiar CDA, with
the system acting as an automated market maker willing
to accept an infinite number of buy orders at some price.
I have explored two main variations of a DPM: one where
only losing money is redistributed, and one where all money
is redistributed. Each has its own pros and cons, and each
supports several reasonable price functions. I have described
the workings of an aftermarket, so that traders can cash out
of the market early, like in a CDA, to lock in their gains
or limit their losses, an operation that is not possible in a
standard pari-mutuel setting.
9. FUTURE WORK
This paper reports the results of an initial investigation of
the concept of a dynamic pari-mutuel market. Many avenues
for future work present themselves, including the following:
• Random walk conjecture. The most important
question mark in my mind is whether the random walk
assumption (3) can be proven under reasonable market
efficiency conditions and, if not, how severely it affects
the practicality of the system.
• Incentive analysis. Formally, what are the
incentives for traders to act on new information and when?
How does the level of initial subsidy affect trader
incentives?
• Laboratory experiments and field tests. This
paper concentrated on the mathematics and algorithmics
of the mechanism. However, the true test of the
mechanism"s ability to serve as an instrument for hedging,
wagering, or information aggregation is to test it with
real traders in a realistic environment. In reality, how
do people behave when faced with a DPM mechanism?
• DPM call market. I have derived the price functions
to react to wagers on one outcome at a time. The
mechanism could be generalized to accept orders on
both sides, then update the prices holistically, rather
than by assuming a particular sequence on the wagers.
• Real-valued variables. I believe the mechanisms in
this paper can easily be generalized to multiple discrete
outcomes, and multiple real-valued outcomes that
always sum to some constant value (e.g., multiple
percentage values that must sum to 100). However, the
generalization to real-valued variables with arbitrary
range is less clear, and open for future development.
• Compound/combinatorial betting. I believe that
DPM may be well suited for compound [8, 11] or
combinatorial [2] betting, for many of the same reasons
that market scoring rules [11] are well suited for the
task. DPM may also have some computational
advantages over MSR, though this remains to be seen.
Acknowledgments
I thank Dan Fain, Gary Flake, Lance Fortnow, and Robin
Hanson.
10. REFERENCES
[1] Mukhtar M. Ali. Probability and utility estimates for
racetrack bettors. Journal of Political Economy,
85(4):803-816, 1977.
[2] Peter Bossaerts, Leslie Fine, and John Ledyard.
Inducing liquidity in thin financial markets through
combined-value trading mechanisms. European
Economic Review, 46:1671-1695, 2002.
[3] Kay-Yut Chen, Leslie R. Fine, and Bernardo A.
Huberman. Forecasting uncertain events with small
groups. In Third ACM Conference on Electronic
Commerce (EC"01), pages 58-64, 2001.
[4] Sandip Debnath, David M. Pennock, C. Lee Giles, and
Steve Lawrence. Information incorporation in online
in-game sports betting markets. In Fourth ACM
Conference on Electronic Commerce (EC"03), 2003.
[5] Robert Forsythe and Russell Lundholm. Information
aggregation in an experimental market. Econometrica,
58(2):309-347, 1990.
[6] Robert Forsythe, Forrest Nelson, George R. Neumann,
and Jack Wright. Anatomy of an experimental
political stock market. American Economic Review,
82(5):1142-1161, 1992.
[7] Robert Forsythe, Thomas A. Rietz, and Thomas W.
Ross. Wishes, expectations, and actions: A survey on
price formation in election stock markets. Journal of
Economic Behavior and Organization, 39:83-110,
1999.
[8] Lance Fortnow, Joe Kilian, David M. Pennock, and
Michael P. Wellman. Betting boolean-style: A
framework for trading in securities based on logical
formulas. In Proceedings of the Fourth Annual ACM
Conference on Electronic Commerce, pages 144-155,
2003.
[9] John M. Gandar, William H. Dare, Craig R. Brown,
and Richard A. Zuber. Informed traders and price
variations in the betting market for professional
basketball games. Journal of Finance,
LIII(1):385-401, 1998.
[10] Robin Hanson. Decision markets. IEEE Intelligent
Systems, 14(3):16-19, 1999.
[11] Robin Hanson. Combinatorial information market
design. Information Systems Frontiers, 5(1), 2002.
[12] Robin D. Hanson. Could gambling save science?
Encouraging an honest consensus. Social
Epistemology, 9(1):3-33, 1995.
[13] Jens Carsten Jackwerth and Mark Rubinstein.
Recovering probability distributions from options
prices. Journal of Finance, 51(5):1611-1631, 1996.
[14] Joseph B. Kadane and Robert L. Winkler. Separating
probability elicitation from utilities. Journal of the
American Statistical Association, 83(402):357-363,
1988.
[15] David M. Pennock, Steve Lawrence, C. Lee Giles, and
Finn Årup Nielsen. The real power of artificial
markets. Science, 291:987-988, February 9 2001.
[16] David M. Pennock, Steve Lawrence, Finn Årup
Nielsen, and C. Lee Giles. Extracting collective
probabilistic forecasts from web games. In Seventh
International Conference on Knowledge Discovery and
Data Mining, pages 174-183, 2001.
[17] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel
betting markets as information aggregation devices:
Experimental results. Technical Report Social Science
Working Paper 986, California Institute of
Technology, April 1997.
[18] Charles R. Plott. Markets as information gathering
tools. Southern Economic Journal, 67(1):1-15, 2000.
[19] Charles R. Plott and Shyam Sunder. Rational
expectations and the aggregation of diverse
information in laboratory security markets.
Econometrica, 56(5):1085-1118, 1988.
[20] Charles Polk, Robin Hanson, John Ledyard, and
Takashi Ishikida. Policy analysis market: An
electronic commerce application of a combinatorial
information market. In Proceedings of the Fourth
Annual ACM Conference on Electronic Commerce,
pages 272-273, 2003.
[21] R. Roll. Orange juice and weather. American
Economic Review, 74(5):861-880, 1984.
[22] Richard N. Rosett. Gambling and rationality. Journal
of Political Economy, 73(6):595-607, 1965.
[23] Carsten Schmidt and Axel Werwatz. How accurate do
markets predict the outcome of an event? The Euro
2000 soccer championships experiment. Technical
Report 09-2002, Max Planck Institute for Research
into Economic Systems, 2002.
[24] Wayne W. Snyder. Horse racing: Testing the efficient
markets model. Journal of Finance, 33(4):1109-1118,
1978.
[25] Richard H. Thaler and William T. Ziemba. Anomalies:
Parimutuel betting markets: Racetracks and lotteries.
Journal of Economic Perspectives, 2(2):161-174, 1988.
[26] Martin Weitzman. Utility analysis and group
behavior: An empirical study. Journal of Political
Economy, 73(1):18-26, 1965.
[27] Robert L. Winkler and Allan H. Murphy. Good
probability assessors. J. Applied Meteorology,
7:751-758, 1968.
| trader interface;information speculation;bet;dynamic pari-mutuel market;pari-mutuel market;gamble;dpm;demand;double auction format;risk allocation;hedge;bid-ask queue;price;continuous double auction;combinatorial bet;selling;price function;market institution;loss;gain;event resolution;automated market maker;information aggregation;trade;payoff per share;hybrid;cda;speculate;automate market maker;compound security market;infinite buy-in liquidity;wager;zero risk |
train_J-72 | Applying Learning Algorithms to Preference Elicitation | We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication. | 1. INTRODUCTION
In a combinatorial auction, agents may bid on bundles of
goods rather than individual goods alone. Since there are
an exponential number of bundles (in the number of goods),
communicating values over these bundles can be
problematic. Communicating valuations in a one-shot fashion can
be prohibitively expensive if the number of goods is only
moderately large. Furthermore, it might even be hard for
agents to determine their valuations for single bundles [14].
It is in the interest of such agents to have auction protocols
which require them to bid on as few bundles as possible.
Even if agents can efficiently compute their valuations, they
might still be reluctant to reveal them entirely in the course
of an auction, because such information may be valuable to
their competitors. These considerations motivate the need
for auction protocols that minimize the communication and
information revelation required to determine an optimal
allocation of goods.
There has been recent work exploring the links between
the preference elicitation problem in combinatorial auctions
and the problem of learning an unknown function from
computational learning theory [5, 19]. In learning theory, the
goal is to learn a function via various types of queries, such
as What is the function's value on these inputs? In
preference elicitation, the goal is to elicit enough partial
information about preferences to be able to compute an optimal
allocation. Though the goals of learning and preference
elicitation differ somewhat, it is clear that these problems share
similar structure, and it should come as no surprise that
techniques from one field should be relevant to the other.
We show that any exact learning algorithm with
membership and equivalence queries can be converted into a
preference elicitation algorithm with value and demand queries.
The resulting elicitation algorithm guarantees elicitation in
a polynomial number of value and demand queries. Here
we mean polynomial in the number of goods, agents, and
the sizes of the agents' valuation functions in a given
encoding scheme. Preference elicitation schemes have not
traditionally considered this last parameter. We argue that
complexity guarantees for elicitation schemes should allow
dependence on this parameter. Introducing this parameter
also allows us to guarantee polynomial worst-case
communication, which usually cannot be achieved in the number
of goods and agents alone. Finally, we use our conversion
procedure to generate combinatorial auction protocols from
learning algorithms for polynomials, monotone DNF, and
linear-threshold functions.
Of course, a one-shot combinatorial auction where agents
provide their entire valuation functions at once would also
have polynomial communication in the size of the agents'
valuations, and only require one query. The advantage of
our scheme is that agents can be viewed as black-boxes
that provide incremental information about their valuations.
There is no burden on the agents to formulate their
valuations in an encoding scheme of the auctioneer"s choosing.
We expect this to be an important consideration in practice.
Also, with our scheme entire revelation only happens in the
worst-case.
For now, we leave the issue of incentives aside when
deriving elicitation algorithms. Our focus is on the time and
communication complexity of preference elicitation
regardless of incentive constraints, and on the relationship between
the complexities of learning and preference elicitation.
Related work. Zinkevich et al. [19] consider the problem
of learning restricted classes of valuation functions which can
be represented using read-once formulas and Toolbox DNF.
Read-once formulas can represent certain substitutabilities,
but no complementarities, whereas the opposite holds for
Toolbox DNF. Since their work is also grounded in learning
theory, they allow dependence on the size of the target
valuation as we do (though read-once valuations can always be
succinctly represented anyway). Their work only makes use
of value queries, which are quite limited in power. Because
we allow ourselves demand queries, we are able to derive an
elicitation scheme for general valuation functions.
Blum et al. [5] provide results relating the complexities
of query learning and preference elicitation. They consider
models with membership and equivalence queries in query
learning, and value and demand queries in preference
elicitation. They show that certain classes of functions can be
efficiently learned yet not efficiently elicited, and vice-versa.
In contrast, our work shows that given a more general (yet
still quite standard) version of demand query than the type
they consider, the complexity of preference elicitation is no
greater than the complexity of learning. We will show that
demand queries can simulate equivalence queries until we
have enough information about valuations to imply a
solution to the elicitation problem.
Nisan and Segal [12] study the communication
complexity of preference elicitation. They show that for many rich
classes of valuations, the worst-case communication
complexity of computing an optimal allocation is exponential.
Their results apply to the black-box model of
computational complexity. In this model algorithms are allowed to
ask questions about agent valuations and receive honest
responses, without any insight into how the agents internally
compute their valuations. This is in fact the basic
framework of learning theory. Our work also addresses the issue
of communication complexity, and we are able to derive
algorithms that provide significant communication guarantees
despite Nisan and Segal's negative results. Their work
motivates the need to rely on the sizes of agents' valuation
functions in stating worst-case results.
2. THE MODELS
2.1 Query Learning
The query learning model we consider here is called exact
learning from membership and equivalence queries,
introduced by Angluin [2]. In this model the learning
algorithm's objective is to exactly identify an unknown target
function f : X → Y via queries to an oracle. The target
function is drawn from a function class C that is known to
the algorithm. Typically the domain X is some subset of
{0, 1}^m, and the range Y is either {0, 1} or some subset
of the real numbers ℝ. As the algorithm progresses, it
constructs a manifest hypothesis ˜f which is its current estimate
of the target function. Upon termination, the manifest
hypothesis of a correct learning algorithm satisfies ˜f(x) = f(x)
for all x ∈ X.
It is important to specify the representation that will be
used to encode functions from C. For example, consider the
following function from {0, 1}^m to ℝ: f(x) = 2 if x
consists of m 1's, and f(x) = 0 otherwise. This function may
simply be represented as a list of 2^m values. Or it may be
encoded as the polynomial 2x1 · · · xm, which is much more
succinct. The choice of encoding may thus have a significant
impact on the time and space requirements of the learning
algorithm. Let size(f) be the size of the encoding of f with
respect to the given representation class. Most
representation classes have a natural measure of encoding size. The
size of a polynomial can be defined as the number of non-zero
coefficients in the polynomial, for example. We will usually
only refer to representation classes; the corresponding
function classes will be implied. For example, the representation
class of monotone DNF formulae implies the function class
of monotone Boolean functions.
Two types of queries are commonly used for exact
learning: membership and equivalence queries. On a membership
query, the learner presents some x ∈ X and the oracle replies
with f(x). On an equivalence query, the learner presents
its manifest hypothesis ˜f. The oracle either replies 'YES' if
˜f = f, or returns a counterexample x such that ˜f(x) ≠ f(x).
An equivalence query is proper if size( ˜f) ≤ size(f) at the
time the manifest hypothesis is presented.
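To make the two query types concrete, the following minimal Python sketch shows one way a membership and an equivalence oracle could be realized; the brute-force check over {0, 1}^m is purely illustrative (a real oracle is a black box), and all names here are our own assumptions, not code from the literature.

from itertools import product

def membership_query(target, x):
    # Membership query: return f(x) for a queried instance x.
    return target(x)

def equivalence_query(target, hypothesis, m):
    # Equivalence query: reply 'YES' if the manifest hypothesis agrees with
    # the target everywhere, else return a counterexample x with
    # hypothesis(x) != target(x). Brute force over {0, 1}^m for illustration.
    for x in product([0, 1], repeat=m):
        if hypothesis(x) != target(x):
            return ('NO', x)
    return ('YES', None)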
We are interested in efficient learning algorithms. The
following definitions are adapted from Kearns and Vazirani [9]:
Definition 1. The representation class C is
polynomial-query exactly learnable from membership and
equivalence queries if there is a fixed polynomial p(·, ·) and an
algorithm L with access to membership and equivalence
queries of an oracle such that for any target function f ∈ C, L
outputs after at most p(size(f), m) queries a function ˜f ∈ C
such that ˜f(x) = f(x) for all instances x.
Similarly, the representation class C is efficiently
exactly learnable from membership and equivalence
queries if the algorithm L outputs a correct hypothesis in
time p(size(f), m), for some fixed polynomial p(·, ·).
Here m is the dimension of the domain. Since the target
function must be reconstructed, we also necessarily allow
polynomial dependence on size(f).
2.2 Preference Elicitation
In a combinatorial auction, a set of goods M is to be
allocated among a set of agents N so as to maximize the sum of
the agents" valuations. Such an allocation is called efficient
in the economics literature, but we will refer to it as optimal
and reserve the term efficient to refer to computational
efficiency. We let n = |N| and m = |M|. An allocation
is a partition of the objects into bundles (S1, . . . , Sn), such
that Si ∩ Sj = ∅ for all distinct i, j ∈ N. Let Γ be the set
of possible allocations. Each agent i ∈ N has a valuation
function vi : 2^M → ℝ over the space of possible bundles.
Each valuation vi is drawn from a known class of valuations
Vi. The valuation classes do not need to coincide.
We will assume that all the valuations considered are
normalized, meaning v(∅) = 0, and that there are no
externalities, meaning vi(S1, ..., Sn) = vi(Si), for all agents i ∈ N,
for any allocation (S1, ..., Sn) ∈ Γ (that is, an agent cares
only about the bundle allocated to her). Valuations
satisfying these conditions are called general valuations.1 We
also assume that agents have quasi-linear utility functions,
meaning that agents" utilities can be divided into monetary
and non-monetary components. If an agent i is allocated
bundle S at price p, it derives utility ui(S, p) = vi(S) − p.
A valuation function may be viewed as a vector of 2^m − 1
non-negative real-values. Of course there may also be more
succinct representations for certain valuation classes, and
there has been much research into concise bidding languages
for various types of valuations [11]. A classic example which
we will refer to again later is the XOR bidding language.
In this language, the agent provides a list of atomic bids,
which consist of a bundle together with its value. To
determine the value of a bundle S given these bids, one searches
for the bundle S′ of highest value listed in the atomic bids
such that S′ ⊆ S. It is then the case that v(S) = v(S′).
As in the learning theory setting, we will usually only refer
to bidding languages rather than valuation classes, because
the corresponding valuation classes will then be implied. For
example, the XOR bidding language implies the class of
valuations satisfying free-disposal, which is the condition that
A ⊆ B ⇒ v(A) ≤ v(B).
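As an illustration of how XOR bids are interpreted, the following minimal Python sketch evaluates a bundle against a list of atomic bids; representing bundles as frozensets is an assumption of this sketch, not a convention from the paper.

def xor_value(atomic_bids, S):
    # atomic_bids: dict mapping frozenset bundle -> non-negative value.
    # The value of S is the largest atomic bid value among bundles inside S.
    S = frozenset(S)
    return max((w for B, w in atomic_bids.items() if B <= S), default=0.0)

# Example: with atomic bids {1}:3, {2}:4, {1,2}:5, the bundle {1,2} has value 5.
bids = {frozenset({1}): 3.0, frozenset({2}): 4.0, frozenset({1, 2}): 5.0}
assert xor_value(bids, {1, 2}) == 5.0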
We let size(v1, . . . , vn) = Σ_{i=1}^n size(vi). That is, the size
of a vector of valuations is the size of the concatenation of
the valuations" representations in their respective encoding
schemes (bidding languages).
To make an analogy to computational learning theory, we
assume that all representation classes considered are
polynomially interpretable [11], meaning that the value of a bundle
may be computed in polynomial time given the valuation
function's representation. More formally, a representation
class (bidding language) C is polynomially interpretable if
there exists an algorithm that given as input some v ∈ C
and an instance x ∈ X computes the value v(x) in time
q(size(v), m), for some fixed polynomial q(·, ·).2
In the intermediate rounds of an (iterative) auction, the
auctioneer will have elicited information about the agents'
valuation functions via various types of queries. She will
thus have constructed a set of manifest valuations, denoted
˜v1, . . . , ˜vn.3
The values of these functions may correspond
exactly to the true agent values, or they may for example
be upper or lower bounds on the true values, depending
on the types of queries made. They may also simply be
default or random values if no information has been acquired
about certain bundles. The goal in the preference elicitation
problem is to construct a set of manifest valuations such
that:
arg max_{(S1,...,Sn)∈Γ} Σ_{i∈N} ˜vi(Si) ⊆ arg max_{(S1,...,Sn)∈Γ} Σ_{i∈N} vi(Si)
That is, the manifest valuations provide enough information
to compute an allocation that is optimal with respect to the
true valuations. Note that we only require one such optimal
allocation.
1 Often general valuations are made to satisfy the additional
condition of free-disposal (monotonicity), but we do not
need it at this point.
2 This excludes OR*, assuming P ≠ NP, because
interpreting bids from this language is NP-hard by reduction from
weighted set-packing, and there is no well-studied
representation class in learning theory that is clearly analogous to
OR*.
3 This view of iterative auctions is meant to parallel the
learning setting. In many combinatorial auctions, manifest
valuations are not explicitly maintained but rather simply
implied by the history of bids.
Two typical queries used in preference elicitation are value
and demand queries. On a value query, the auctioneer
presents a bundle S ⊆ M and the agent responds with
her (exact) value for the bundle v(S) [8]. On a demand
query, the auctioneer presents a vector of non-negative prices
p ∈ ℝ^(2^m) over the bundles together with a bundle S. The
agent responds 'YES' if it is the case that
S ∈ arg max_{S′⊆M} (v(S′) − p(S′))
or otherwise presents a bundle S′ such that
v(S′) − p(S′) > v(S) − p(S)
That is, the agent either confirms that the presented bundle
is most preferred at the quoted prices, or indicates a
better one [15].4
Note that we include ∅ as a bundle, so the
agent will only respond 'YES' if its utility for the proposed
bundle is non-negative. Note also that communicating
nonlinear prices does not necessarily entail quoting a price for
every possible bundle. There may be more succinct ways of
communicating this vector, as we show in section 5.
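For concreteness, a minimal sketch of an agent's demand-query response under quasi-linear utility follows; the brute-force enumeration of all bundles (including ∅) is for illustration only, and the function names are our own.

from itertools import chain, combinations

def demand_response(v, p, S, goods):
    # v, p: callables from frozenset bundles to values and prices.
    # Reply 'YES' if S maximizes v(S') - p(S'); else report a better bundle.
    bundles = [frozenset(B) for B in chain.from_iterable(
        combinations(goods, r) for r in range(len(goods) + 1))]
    best = max(bundles, key=lambda B: v(B) - p(B))
    S = frozenset(S)
    if v(S) - p(S) >= v(best) - p(best):
        return 'YES'
    return best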
We make the following definitions to parallel the query
learning setting and to simplify the statements of later
results:
Definition 2. The representation classes V1, . . . , Vn can
be polynomial-query elicited from value and demand
queries if there is a fixed polynomial p(·, ·) and an
algorithm L with access to value and demand queries of the
agents such that for any (v1, . . . , vn) ∈ V1 × . . . × Vn, L
outputs after at most p(size(v1, . . . , vn), m) queries an
allocation (S1, . . . , Sn) ∈ arg max_{(S1,...,Sn)∈Γ} Σ_{i∈N} vi(Si).
Similarly, the representation class C can be efficiently
elicited from value and demand queries if the
algorithm L outputs an optimal allocation with communication
p(size(v1, . . . , vn), m), for some fixed polynomial p(·, ·).
There are some key differences here with the query
learning definition. We have dropped the term exactly since
the valuation functions need not be determined exactly in
order to compute an optimal allocation. Also, an efficient
elicitation algorithm is polynomial communication, rather
than polynomial time. This reflects the fact that
communication rather than runtime is the bottleneck in elicitation.
Computing an optimal allocation of goods even when given
the true valuations is NP-hard for a wide range of
valuation classes. It is thus unreasonable to require polynomial
time in the definition of an efficient preference elicitation
algorithm. We are happy to focus on the communication
complexity of elicitation because this problem is widely
believed to be more significant in practice than that of winner
determination [11].5
4 This differs slightly from the definition provided by Blum
et al. [5]. Their demand queries are restricted to linear prices
over the goods, where the price of a bundle is the sum of
the prices of its underlying goods. In contrast our demand
queries allow for nonlinear prices, i.e. a distinct price for
every possible bundle. This is why the lower bound in their
Theorem 2 does not contradict our result that follows.
5 Though the winner determination problem is NP-hard for
general valuations, there exist many algorithms that solve
it efficiently in practice. These range from special
purpose algorithms [7, 16] to approaches using off-the-shelf IP
solvers [1].
Since the valuations need not be elicited exactly it is
initially less clear whether the polynomial dependence on
size(v1, . . . , vn) is justified in this setting. Intuitively, this
parameter is justified because we must learn valuations
exactly when performing elicitation, in the worst-case. We
address this in the next section.
3. PARALLELS BETWEEN EQUIVALENCE
AND DEMAND QUERIES
We have described the query learning and preference
elicitation settings in a manner that highlights their similarities.
Value and membership queries are clear analogs. Slightly
less obvious is the fact that equivalence and demand queries
are also analogs. To see this, we need the concept of Lindahl
prices. Lindahl prices are nonlinear and non-anonymous
prices over the bundles. They are nonlinear in the sense
that each bundle is assigned a price, and this price is not
necessarily the sum of prices over its underlying goods. They
are non-anonymous in the sense that two agents may face
different prices for the same bundle of goods. Thus Lindahl
prices are of the form pi(S), for all S ⊆ M, for all i ∈ N.
Lindahl prices are presented to the agents in demand
queries.
When agents have normalized quasi-linear utility
functions, Bikhchandani and Ostroy [4] show that there always
exist Lindahl prices such that (S1, . . . , Sn) is an optimal
allocation if and only if
Si ∈ arg max_{Si′⊆M} (vi(Si′) − pi(Si′))   ∀i ∈ N (1)
(S1, . . . , Sn) ∈ arg max_{(S1′,...,Sn′)∈Γ} Σ_{i∈N} pi(Si′) (2)
Condition (1) states that each agent is allocated a bundle
that maximizes its utility at the given prices. Condition (2)
states that the allocation maximizes the auctioneer's
revenue at the given prices. The scenario in which these
conditions hold is called a Lindahl equilibrium, or often a
competitive equilibrium. We say that the Lindahl prices support
the optimal allocation. It is therefore sufficient to announce
supporting Lindahl prices to verify an optimal allocation.
Once we have found an allocation with supporting Lindahl
prices, the elicitation problem is solved.
The problem of finding an optimal allocation (with respect
to the manifest valuations) can be formulated as a linear
program whose solutions are guaranteed to be integral [4].
The dual variables to this linear program are supporting
Lindahl prices for the resulting allocation. The objective
function of the dual program is:
min_{pi(S)} ( π^s + Σ_{i∈N} πi ) (3)
with πi = max_{S⊆M} (˜vi(S) − pi(S))
π^s = max_{(S1,...,Sn)∈Γ} Σ_{i∈N} pi(Si)
The optimal values of πi and π^s correspond to the maximal
utility to agent i with respect to its manifest valuation and
the maximal revenue to the seller.
There is usually a range of possible Lindahl prices
supporting a given optimal allocation. The agents' manifest
valuations are in fact valid Lindahl prices, and we refer to
them as maximal Lindahl prices. Out of all possible
vectors of Lindahl prices, maximal Lindahl prices maximize the
utility of the auctioneer, in fact giving her the entire
social welfare. Conversely, prices that maximize the Σ_{i∈N} πi
component of the objective (the sum of the agents' utilities)
are minimal Lindahl prices. Any Lindahl prices will do for
our results, but some may have better elicitation
properties than others. Note that a demand query with maximal
Lindahl prices is almost identical to an equivalence query,
since in both cases we communicate the manifest valuation
to the agent. We leave for future work the question of which
Lindahl prices to choose to minimize preference elicitation.
Considering now why demand and equivalence queries are
direct analogs, first note that given the πi in some Lindahl
equilibrium, setting
pi(S) = max{0, ˜vi(S) − πi} (4)
for all i ∈ N and S ⊆ M yields valid Lindahl prices. These
prices leave every agent indifferent across all bundles with
positive price, and satisfy condition (1). Thus demand
queries can also implicitly communicate manifest valuations,
since Lindahl prices will typically be an additive constant
away from these by equality (4). In the following lemma we
show how to obtain counterexamples to equivalence queries
through demand queries.
Lemma 1. Suppose an agent replies with a preferred
bundle S′ when proposed a bundle S and supporting Lindahl
prices p(S) (supporting with respect to the agent's
manifest valuation). Then either ˜v(S) ≠ v(S) or ˜v(S′) ≠ v(S′).
Proof. We have the following inequalities:
˜v(S) − p(S) ≥ ˜v(S′) − p(S′)
⇒ ˜v(S′) − ˜v(S) ≤ p(S′) − p(S) (5)
v(S′) − p(S′) > v(S) − p(S)
⇒ v(S′) − v(S) > p(S′) − p(S) (6)
Inequality (5) holds because the prices support the proposed
allocation with respect to the manifest valuation. Inequality
(6) holds because the agent in fact prefers S′ to S given the
prices, according to its response to the demand query. If it
were the case that ˜v(S) = v(S) and ˜v(S′) = v(S′), these
inequalities would represent a contradiction. Thus at least
one of S and S′ is a counterexample to the agent's manifest
valuation.
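A minimal sketch tying equality (4) to Lemma 1: construct Lindahl prices from a manifest valuation and a surplus πi, and locate a counterexample with two value queries when an agent reports a preferred bundle. The data representation and names are our own assumptions.

def lindahl_prices(manifest_value, bundles, surplus):
    # Equality (4): p_i(S) = max{0, manifest_value(S) - surplus}.
    return {B: max(0.0, manifest_value(B) - surplus) for B in bundles}

def find_counterexample(manifest_value, value_query, S, S_prime):
    # By Lemma 1, at least one of S, S_prime disagrees with the manifest valuation.
    for B in (S, S_prime):
        if manifest_value(B) != value_query(B):
            return B, value_query(B)
    raise AssertionError("Lemma 1: one of the two bundles must disagree")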
Finally, we justify dependence on size(v1, . . . , vn) in
elicitation problems. Nisan and Segal (Proposition 1, [12])
and Parkes (Theorem 1, [13]) show that supporting Lindahl
prices must necessarily be revealed in the course of any
preference elicitation protocol which terminates with an optimal
allocation. Furthermore, Nisan and Segal (Lemma 1, [12])
state that in the worst-case agents' prices must coincide with
their valuations (up to a constant), when the valuation class
is rich enough to contain dual valuations (as will be the
case with most interesting classes). Since revealing Lindahl
prices is a necessary condition for establishing an optimal
allocation, and since Lindahl prices contain the same
information as valuation functions (in the worst-case), allowing
for dependence on size(v1, . . . , vn) in elicitation problems is
entirely natural.
4. FROM LEARNING TO PREFERENCE
ELICITATION
The key to converting a learning algorithm to an
elicitation algorithm is to simulate equivalence queries with
demand and value queries until an optimal allocation is found.
Because of our Lindahl price construction, when all agents
reply 'YES' to a demand query, we have found an optimal
allocation, analogous to the case where an agent replies 'YES'
to an equivalence query when the target function has been
exactly learned. Otherwise, we can obtain a
counterexample to an equivalence query given an agent's response to a
demand query.
Theorem 1. The representation classes V1, . . . , Vn can
be polynomial-query elicited from value and demand queries
if they can each be polynomial-query exactly learned from
membership and equivalence queries.
Proof. Consider the elicitation algorithm in Figure 1.
Each membership query in step 1 is simulated with a value
query since these are in fact identical. Consider step 4. If all
agents reply 'YES', condition (1) holds. Condition (2) holds
because the computed allocation is revenue-maximizing for
the auctioneer, regardless of the agents' true valuations.
Thus an optimal allocation has been found. Otherwise, at
least one of Si or Si′ is a counterexample to ˜vi, by Lemma 1.
We identify a counterexample by performing value queries
on both these bundles, and provide it to Ai as a response to
its equivalence query.
This procedure will halt, since in the worst-case all agent
valuations will be learned exactly, in which case the
optimal allocation and Lindahl prices will be accepted by all
agents. The procedure performs a polynomial number of
queries, since A1, . . . , An are all polynomial-query learning
algorithms.
Note that the conversion procedure results in a
preference elicitation algorithm, not a learning algorithm. That
is, the resulting algorithm does not simply learn the
valuations exactly, then compute an optimal allocation. Rather,
it elicits partial information about the valuations through
value queries, and periodically tests whether enough
information has been gathered by proposing an allocation to the
agents through demand queries. It is possible to generate a
Lindahl equilibrium for valuations v1, . . . , vn using an
allocation and prices derived using manifest valuations ˜v1, . . . , ˜vn,
and finding an optimal allocation does not imply that the
agents' valuations have been exactly learned. The use of
demand queries to simulate equivalence queries enables this
early halting. We would not obtain this property with
equivalence queries based on manifest valuations.
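A high-level Python sketch of this conversion loop (the algorithm of Figure 1) follows; the learner and agent interfaces shown are illustrative assumptions, not an API from the paper.

def elicit(learners, agents, solve_allocation):
    while True:
        # Step 1: advance each learner, answering its membership queries with
        # value queries, until it pauses at an equivalence query or halts.
        manifests = [l.run_until_equivalence_query(agent.value_query)
                     for l, agent in zip(learners, agents)]
        # Step 2: optimal allocation and Lindahl prices for the manifests.
        allocation, prices = solve_allocation(manifests)
        # Steps 3-4: pose demand queries; halt when every agent accepts.
        done = True
        for i, agent in enumerate(agents):
            reply = agent.demand_query(allocation[i], prices[i])
            if reply != 'YES':
                preferred = reply
                # By Lemma 1, one of the two bundles is a counterexample.
                for B in (allocation[i], preferred):
                    if manifests[i](B) != agent.value_query(B):
                        learners[i].give_counterexample(B, agent.value_query(B))
                        break
                done = False
        if done:
            return allocation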
5. COMMUNICATION COMPLEXITY
In this section, we turn to the issue of the
communication complexity of elicitation. Nisan and Segal [12] show
that for a variety of rich valuation spaces (such as general
and submodular valuations), the worst-case communication
burden of determining Lindahl prices is exponential in the
number of goods, m. The communication burden is
measured in terms of the number of bits transmitted between
agents and auctioneer in the case of discrete communication,
or in terms of the number of real numbers transmitted in the
case of continuous communication.
Converting efficient learning algorithms to an elicitation
algorithm produces an algorithm whose queries have sizes
polynomial in the parameters m and size(v1, . . . , vn).
Theorem 2. The representation classes V1, . . . , Vn can
be efficiently elicited from value and demand queries if they
can each be efficiently exactly learned from membership and
equivalence queries.
Proof. The size of any value query is O(m): the
message consists solely of the queried bundle. To communicate
Lindahl prices to agent i, it is sufficient to communicate
the agent's manifest valuation function and the value πi, by
equality (4). Note that an efficient learning algorithm never
builds up a manifest hypothesis of superpolynomial size,
because the algorithm's runtime would then also be
superpolynomial, contradicting efficiency. Thus communicating the
manifest valuation requires size at most p(size(vi), m), for
some polynomial p that upper-bounds the runtime of the
efficient learning algorithm. Representing the surplus πi to
agent i cannot require space greater than q(size(˜vi), m) for
some fixed polynomial q, because we assume that the chosen
representation is polynomially interpretable, and thus any
value generated will be of polynomial size. We must also
communicate to i its allocated bundle, so the total
message size for a demand query is at most p(size(vi), m) +
q(p(size(vi), m), m) + O(m). Clearly, an agent's response to
a value or demand query has size at most q(size(vi), m) +
O(m). Thus the value and demand queries, and the
responses to these queries, are always of polynomial size. An
efficient learning algorithm performs a polynomial number of
queries, so the total communication of the resulting
elicitation algorithm is polynomial in the relevant parameters.
There will often be explicit bounds on the number of
membership and equivalence queries performed by a learning
algorithm, with constants that are not masked by big-O
notation. These bounds can be translated to explicit bounds
on the number of value and demand queries made by the
resulting elicitation algorithm. We upper-bounded the size
of the manifest hypothesis with the runtime of the learning
algorithm in Theorem 2. We are likely to be able to do
much better than this in practice. Recall that an
equivalence query is proper if size( ˜f) ≤ size(f) at the time the
query is made. If the learning algorithm's equivalence
queries are all proper, it may then also be possible to provide
tight bounds on the communication requirements of the
resulting elicitation algorithm.
Theorem 2 shows that elicitation algorithms that depend
on the size(v1, . . . , vn) parameter sidestep Nisan and
Segal's [12] negative results on the worst-case communication
complexity of efficient allocation problems. They provide
guarantees with respect to the sizes of the instances of
valuation functions faced at any run of the algorithm. These
algorithms will fare well if the chosen representation class
provides succinct representations for the simplest and most
common of valuations, and thus the focus moves back to one
of compact yet expressive bidding languages. We consider
these issues below.
6. APPLICATIONS
In this section, we demonstrate the application of our
methods to particular representation classes for
combinatorial valuations. We have shown that the preference
elicitation problem for valuation classes V1, . . . , Vn can be reduced
to the problem of finding an efficient learning algorithm for
each of these classes separately. This is significant because
there already exist learning algorithms for a wealth of
function classes, and because it may often be simpler to solve
each learning subproblem separately than to attack the
preference elicitation problem directly. We can develop an
elicitation algorithm that is tailored to each agent's valuation,
with the underlying learning algorithms linked together at
the demand query stages in an algorithm-independent way.
Given: exact learning algorithms A1, . . . , An for valuation classes V1, . . . , Vn respectively.
Loop until there is a signal to halt:
1. Run A1, . . . , An in parallel on their respective agents until each requires a response to an
equivalence query, or has halted with the agent's exact valuation.
2. Compute an optimal allocation (S1, . . . , Sn) and corresponding Lindahl prices with respect to
the manifest valuations ˜v1, . . . , ˜vn determined so far.
3. Present the allocation and prices to the agents in the form of a demand query.
4. If they all reply 'YES', output the allocation and halt. Otherwise there is some agent i that
has replied with some preferred bundle Si′. Perform value queries on Si and Si′ to find a
counterexample to ˜vi, and provide it to Ai.
Figure 1: Converting learning algorithms to an elicitation algorithm.
We show that existing learning algorithms for
polynomials, monotone DNF formulae, and linear-threshold functions
can be converted into preference elicitation algorithms for
general valuations, valuations with free-disposal, and
valuations with substitutabilities, respectively. We focus on
representations that are polynomially interpretable, because
the computational learning theory literature places a heavy
emphasis on computational tractability [18].
In interpreting the methods we emphasize the
expressiveness and succinctness of each representation class. The
representation class, which in combinatorial auction terms
defines a bidding language, must necessarily be expressive
enough to represent all possible valuations of interest, and
should also succinctly represent the simplest and most
common functions in the class.
6.1 Polynomial Representations
Schapire and Sellie [17] give a learning algorithm for sparse
multivariate polynomials that can be used as the basis for
a combinatorial auction protocol. The equivalence queries
made by this algorithm are all proper. Specifically, their
algorithm learns the representation class of t-sparse
multivariate polynomials over the real numbers, where the variables
may take on values either 0 or 1. A t-sparse polynomial
has at most t terms, where a term is a product of variables,
e.g. x1x3x4. A polynomial over the real numbers has
coefficients drawn from the real numbers. Polynomials are
expressive: every valuation function v : 2^M → ℝ+ can be
uniquely written as a polynomial [17].
To get an idea of the succinctness of polynomials as a
bidding language, consider the additive and single-item
valuations presented by Nisan [11]. In the additive valuation,
the value of a bundle is the number of goods the bundle
contains. In the single-item valuation, all bundles have value
1, except ∅ which has value 0 (i.e. the agent is satisfied as
soon as it has acquired a single item). It is not hard to show
that the single-item valuation requires polynomials of size
2^m − 1, while polynomials of size m suffice for the additive
valuation. Polynomials are thus appropriate for valuations
that are mostly additive, with a few substitutabilities and
complementarities that can be introduced by adjusting
coefficients.
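A minimal sketch of evaluating a sparse polynomial valuation over {0, 1}^m; a term is encoded as a coefficient together with the set of item indices it multiplies (our representation, not the paper's).

def poly_value(terms, bundle):
    # A term (c, items) contributes c exactly when all its items are in the bundle.
    return sum(c for c, items in terms if items <= bundle)

# The additive valuation over 3 goods is the polynomial x1 + x2 + x3 (size m = 3).
additive = [(1.0, frozenset({i})) for i in range(3)]
assert poly_value(additive, frozenset({0, 2})) == 2.0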
The learning algorithm for polynomials makes at most
mti + 2 equivalence queries and at most (mti + 1)(ti^2 + 3ti)/2
membership queries to an agent i, where ti is the sparsity of
the polynomial representing vi [17]. We therefore obtain an
algorithm that elicits general valuations with a polynomial
number of queries and polynomial communication.6
6.2 XOR Representations
The XOR bidding language is standard in the
combinatorial auctions literature. Recall that an XOR bid is
characterized by a set of bundles B ⊆ 2^M and a value function
w : B → ℝ+ defined on those bundles, which induces the
valuation function:
v(S) = max_{B∈B | B⊆S} w(B) (7)
XOR bids can represent valuations that satisfy free-disposal
(and only such valuations), which again is the property that
A ⊆ B ⇒ v(A) ≤ v(B).
The XOR bidding language is slightly less expressive than
polynomials, because polynomials can represent valuations
that do not satisfy free-disposal. However, XOR is as
expressive as required in most economic settings. Nisan [11] notes
that XOR bids can represent the single-item valuation with
m atomic bids, but 2^m − 1 atomic bids are needed to
represent the additive valuation. Since the opposite holds for
polynomials, these two languages are incomparable in
succinctness, and somewhat complementary for practical use.
Blum et al. [5] note that monotone DNF formulae are the
analogs of XOR bids in the learning theory literature. A
monotone DNF formula is a disjunction of conjunctions in
which the variables appear unnegated, for example x1x2 ∨
x3 ∨ x2x4x5. Note that such formulae can be represented
as XOR bids where each atomic bid has value 1; thus XOR
bids generalize monotone DNF formulae from Boolean to
real-valued functions. These insights allow us to generalize
a classic learning algorithm for monotone DNF ([3] Theorem
1, [18] Theorem B) to a learning algorithm for XOR bids.7
6 Note that Theorem 1 applies even if valuations do not
satisfy free-disposal.
Lemma 2. An XOR bid containing t atomic bids can be
exactly learned with t + 1 equivalence queries and at most
tm membership queries.
Proof. The algorithm will identify each atomic bid in
the target XOR bid in turn. Initialize the manifest valuation
˜v to the bid that is identically zero on all bundles (this is an
XOR bid containing 0 atomic bids). Present ˜v as an
equivalence query. If the response is 'YES', we are done. Otherwise
we obtain a bundle S for which v(S) ≠ ˜v(S). Create a
bundle T as follows. First initialize T = S. For each item i in T,
check via a membership query whether v(T) = v(T − {i}).
If so set T = T − {i}. Otherwise leave T as is and proceed
to the next item.
We claim that (T, v(T)) is an atomic bid of the target
XOR bid. For each item i in T, we have v(T) ≠ v(T − {i}).
To see this, note that at some point when generating T, we
had a ¯T such that T ⊆ ¯T ⊆ S and v( ¯T) > v( ¯T − {i}), so that
i was kept in ¯T. Note that v(S) = v( ¯T) = v(T) because the
value of the bundle S is maintained throughout the process
of deleting items. Now assume v(T) = v(T − {i}). Then
v( ¯T) = v(T) = v(T − {i}) > v( ¯T − {i})
which contradicts free-disposal, since T − {i} ⊆ ¯T − {i}.
Thus v(T) > v(T − {i}) for all items i in T. This implies
that (T, v(T)) is an atomic bid of v. If this were not the case,
T would take on the maximum value of its strict subsets, by
the definition of an XOR bid, and we would have
v(T) = max_{i∈T} { max_{T′⊆T−{i}} v(T′) } = max_{i∈T} {v(T − {i})} < v(T)
which is a contradiction.
We now show that v(T) ≠ ˜v(T), which will imply that
(T, v(T)) is not an atomic bid of our manifest hypothesis by
induction. Assume that every atomic bid (R, ˜v(R))
identified so far is indeed an atomic bid of v (meaning R is indeed
listed in an atomic bid of v as having value v(R) = ˜v(R)).
This assumption holds vacuously when the manifest
valuation is initialized. Using the notation from (7), let ( ˜B, ˜w) be
our hypothesis, and (B, w) be the target function. We have
˜B ⊆ B, and ˜w(B) = w(B) for B ∈ ˜B by assumption. Thus,
˜v(S) = max_{B∈ ˜B | B⊆S} ˜w(B) = max_{B∈ ˜B | B⊆S} w(B) ≤ max_{B∈B | B⊆S} w(B) = v(S) (8)
Now assume v(T) = ˜v(T). Then,
˜v(T) = v(T) = v(S) ≠ ˜v(S) (9)
The second equality follows from the fact that the value
remains constant when we derive T from S. The last
inequality holds because S is a counterexample to the manifest
valuation. From equation (9) and free-disposal, we have
˜v(T) < ˜v(S). Then again from equation (9) it follows
that v(S) < ˜v(S). This contradicts (8), so we in fact have
v(T) ≠ ˜v(T). Thus (T, v(T)) is not currently in our
hypothesis as an atomic bid, or we would correctly have
˜v(T) = v(T) by the induction hypothesis. We add (T, v(T))
to our hypothesis and repeat the process above, performing
additional equivalence queries until all atomic bids have
been identified.
After each equivalence query, an atomic bid is identified
with at most m membership queries. Each counterexample
leads to the discovery of a new atomic bid. Thus we make at
most tm membership queries and exactly t + 1 equivalence
queries.
7 The cited algorithm was also used as the basis for Zinkevich
et al.'s [19] elicitation algorithm for Toolbox DNF. Recall
that Toolbox DNF are polynomials with non-negative
coefficients. For these representations, an equivalence query can
be simulated with a value query on the bundle containing
all goods.
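The bundle-shrinking step of this proof is easy to state as code; a minimal sketch, assuming free-disposal and using value queries in place of membership queries (names are ours):

def extract_atomic_bid(value_query, S):
    # Starting from a counterexample S, drop any item whose removal leaves
    # the value unchanged; by the argument above, the surviving bundle T
    # together with its value is an atomic bid of the target XOR bid.
    T = set(S)
    vT = value_query(frozenset(T))  # the value is preserved by every deletion
    for i in list(T):
        if value_query(frozenset(T - {i})) == vT:
            T.discard(i)
    return frozenset(T), vT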
The number of time steps required by this algorithm is
essentially the same as the number of queries performed, so
the algorithm is efficient. Applying Theorem 2, we therefore
obtain the following corollary:
Theorem 3. The representation class of XOR bids can
be efficiently elicited from value and demand queries.
This contrasts with Blum et al.'s negative results ([5],
Theorem 2) stating that monotone DNF (and hence XOR bids)
cannot be efficiently elicited when the demand queries are
restricted to linear and anonymous prices over the goods.
6.3 Linear-Threshold Representations
Polynomials, XOR bids, and all languages based on the
OR bidding language (such as XOR-of-OR, OR-of-XOR,
and OR*) fail to succinctly represent the majority
valuation [11]. In this valuation, bundles have value 1 if they
contain at least m/2 items, and value 0 otherwise. More
generally, consider the r-of-S family of valuations where bundles
have value 1 if they contain at least r items from a specified
set of items S ⊆ M, and value 0 otherwise. The
majority valuation is a special case of the r-of-S valuation with
r = m/2 and S = M. These valuations are appropriate for
representing substitutabilities: once a required set of items
has been obtained, no other items can add value.
Letting k = |S|, such valuations are succinctly represented
by r-of-k threshold functions. These functions take the form
of linear inequalities:
xi1 + . . . + xik ≥ r
where the function has value 1 if the inequality holds, and
0 otherwise. Here i1, . . . , ik are the items in S. Littlestone's
WINNOW 2 algorithm can learn such functions using
equivalence queries only, using at most 8r^2 + 5k + 14kr ln m + 1
queries [10]. To provide this guarantee, r must be known to
the algorithm, but S (and k) are unknown. The elicitation
algorithm that results from WINNOW 2 uses demand
queries only (value queries are not necessary here because the
values of counterexamples are implied when there are only
two possible values).
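For concreteness, a minimal sketch of a WINNOW 2-style mistake-driven update; the interface and parameter choices follow common presentations of Littlestone [10] and should be treated as assumptions here, not code from this paper.

def winnow2_predict(w, x, theta):
    # Predict 1 when the weighted count of active variables reaches the threshold.
    return 1 if sum(wi for wi, xi in zip(w, x) if xi == 1) >= theta else 0

def winnow2_update(w, x, predicted, actual, alpha):
    # On a mistake, promote (false negative) or demote (false positive) the
    # weights of the active variables by the multiplier alpha; others are untouched.
    if predicted == actual:
        return w
    factor = alpha if actual == 1 else 1.0 / alpha
    return [wi * factor if xi == 1 else wi for wi, xi in zip(w, x)]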
Note that r-of-k threshold functions can always be
succinctly represented in O(m) space. Thus we obtain an
algorithm that can elicit such functions with a polynomial
number of queries and polynomial communication, in the
parameters n and m alone.
7. CONCLUSIONS AND FUTURE WORK
We have shown that exact learning algorithms with
membership and equivalence queries can be used as a basis for
preference elicitation algorithms with value and demand
queries. At the heart of this result is the fact that demand
queries may be viewed as modified equivalence queries,
specialized to the problem of preference elicitation. Our result
allows us to apply the wealth of available learning algorithms
to the problem of preference elicitation.
A learning approach to elicitation also motivates a
different approach to designing elicitation algorithms that
decomposes neatly across agent types. If the designer knows
beforehand what types of preferences each agent is likely to
exhibit (mostly additive, many substitutes, etc.), she can
design learning algorithms tailored to each agent's
valuations and integrate them into an elicitation scheme. The
resulting elicitation algorithm makes a polynomial number
of queries, and requires polynomial communication if the
original learning algorithms are efficient.
We do not require that agent valuations can be learned
with value and demand queries. Equivalence queries can
only be, and need only be, simulated up to the point where
an optimal allocation has been computed. This is the
preference elicitation problem. Theorem 1 implies that elicitation
with value and demand queries is no harder than learning
with membership and equivalence queries, but it does not
provide any asymptotic improvements over the learning
algorithms" complexity. It would be interesting to find
examples of valuation classes for which elicitation is easier than
learning. Blum et al. [5] provide such an example when
considering membership/value queries only (Theorem 4).
In future work we plan to address the issue of
incentives when converting learning algorithms to elicitation
algorithms. In the learning setting, we usually assume that
oracles will provide honest responses to queries; in the
elicitation setting, agents are usually selfish and will provide
possibly dishonest responses so as to maximize their utility.
We also plan to implement the algorithms for learning
polynomials and XOR bids as elicitation algorithms, and test
their performance against other established combinatorial
auction protocols [6, 15]. An interesting question here is:
which Lindahl prices in the maximal to minimal range are
best to quote in order to minimize information revelation?
We conjecture that information revelation is reduced when
moving from maximal to minimal Lindahl prices, namely as
we move demand queries further away from equivalence
queries. Finally, it would be useful to determine whether the
OR* bidding language [11] can be efficiently learned (and
hence elicited), given this language's expressiveness and
succinctness for a wide variety of valuation classes.
Acknowledgements
We would like to thank Debasis Mishra for helpful
discussions. This work is supported in part by NSF grant
IIS0238147.
8. REFERENCES
[1] A. Andersson, M. Tenhunen, and F. Ygge. Integer
programming for combinatorial auction winner
determination. In Proceedings of the Fourth
International Conference on Multiagent Systems
(ICMAS-00), 2000.
[2] D. Angluin. Learning regular sets from queries and
counterexamples. Information and Computation,
75:87-106, November 1987.
[3] D. Angluin. Queries and concept learning. Machine
Learning, 2:319-342, 1987.
[4] S. Bikhchandani and J. Ostroy. The Package
Assignment Model. Journal of Economic Theory,
107(2), December 2002.
[5] A. Blum, J. Jackson, T. Sandholm, and M. Zinkevich.
Preference elicitation and query learning. In Proc.
16th Annual Conference on Computational Learning
Theory (COLT), Washington DC, 2003.
[6] W. Conen and T. Sandholm. Partial-revelation VCG
mechanism for combinatorial auctions. In Proc. the
18th National Conference on Artificial Intelligence
(AAAI), 2002.
[7] Y. Fujishima, K. Leyton-Brown, and Y. Shoham.
Taming the computational complexity of
combinatorial auctions: Optimal and approximate
approaches. In Proc. the 16th International Joint
Conference on Artificial Intelligence (IJCAI), pages
548-553, 1999.
[8] B. Hudson and T. Sandholm. Using value queries in
combinatorial auctions. In Proc. 4th ACM Conference
on Electronic Commerce (ACM-EC), San Diego, CA,
June 2003.
[9] M. J. Kearns and U. V. Vazirani. An Introduction to
Computational Learning Theory. MIT Press, 1994.
[10] N. Littlestone. Learning quickly when irrelevant
attributes abound: A new linear-threshold algorithm.
Machine Learning, 2:285-318, 1988.
[11] N. Nisan. Bidding and allocation in combinatorial
auctions. In Proc. the ACM Conference on Electronic
Commerce, pages 1-12, 2000.
[12] N. Nisan and I. Segal. The communication
requirements of efficient allocations and supporting
Lindahl prices. Working Paper, Hebrew University,
2003.
[13] D. C. Parkes. Price-based information certificates for
minimal-revelation combinatorial auctions. In
Padget et al., editor, Agent-Mediated Electronic
Commerce IV, LNAI 2531, pages 103-122.
Springer-Verlag, 2002.
[14] D. C. Parkes. Auction design with costly preference
elicitation. In Special Issues of Annals of Mathematics
and AI on the Foundations of Electronic Commerce,
Forthcoming (2003).
[15] D. C. Parkes and L. H. Ungar. Iterative combinatorial
auctions: Theory and practice. In Proc. 17th National
Conference on Artificial Intelligence (AAAI-00), pages
74-81, 2000.
[16] T. Sandholm, S. Suri, A. Gilpin, and D. Levine.
CABOB: A fast optimal algorithm for combinatorial
auctions. In Proc. the 17th International Joint
Conference on Artificial Intelligence (IJCAI), pages
1102-1108, 2001.
[17] R. Schapire and L. Sellie. Learning sparse multivariate
polynomials over a field with queries and
counterexamples. In Proceedings of the Sixth Annual
ACM Workshop on Computational Learning Theory,
pages 17-26. ACM Press, 1993.
[18] L. Valiant. A theory of the learnable. Commun. ACM,
27(11):1134-1142, Nov. 1984.
[19] M. Zinkevich, A. Blum, and T. Sandholm. On
polynomial-time preference elicitation with
value-queries. In Proc. 4th ACM Conference on
Electronic Commerce (ACM-EC), San Diego, CA,
June 2003.
| xor bid;learning theory;polynomial communication;elicitation algorithm;learning;linear-threshold function;learn;preference elicitation problem;combinatorial auction;monotone dnf;learning algorithm;conversion procedure;preference elicitation algorithm;polynomial;polynomial number of query;parallel;resulting algorithm;preference elicitation;combinatorial auction protocol;query polynomial number
train_J-73 | Competitive Algorithms for VWAP and Limit Order Trading | We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading. | 1. INTRODUCTION
While popular images of Wall Street often depict
swashbuckling traders boldly making large gambles on just their
market intuitions, the vast majority of trading is actually
considerably more technical and constrained. The constraints
often derive from a complex combination of business,
regulatory and institutional issues, and result in certain kinds
of standard trading strategies or criteria that invite
algorithmic analysis.
One of the most common activities in modern financial
markets is known as Volume Weighted Average Price, or
VWAP, trading. Informally, the VWAP of a stock over a
specified market period is simply the average price paid per
share during that period, so the price of each transaction in
the market is weighted by its volume. In VWAP trading,
one attempts to buy or sell a fixed number of shares at a
price that closely tracks the VWAP.
Very large institutional trades constitute one of the main
motivations behind VWAP activity. A typical scenario goes
as follows. Suppose a very large mutual fund holds 3% of
the outstanding shares of a large, publicly traded company
- a huge fraction of the shares - and that this fund"s
manager decides he would like to reduce this holding to 2% over
a 1-month period. (Such a decision might be forced by the
fund"s own regulations or other considerations.) Typically,
such a fund manager would be unqualified to sell such a large
number of shares in the open market - it requires a
professional broker to intelligently break the trade up over time,
and possibly over multiple exchanges, in order to minimize
the market impact of such a sizable transaction. Thus, the
fund manager would approach brokerages for help in selling
the 1%.
The brokerage will typically alleviate the fund manager's
problem immediately by simply buying the shares directly
from the fund manager, and then selling them off
later - but what price should the brokerage pay the fund manager?
Paying the price on the day of the sale is too risky for the
brokerage, as they need to sell the shares themselves over an
extended period, and events beyond their control (such as
wars) could cause the price to fall dramatically. The usual
answer is that the brokerage offers to buy the shares from
the fund manager at a per-share price tied to the VWAP
over some future period - in our example, the brokerage
might offer to buy the 1% at a per-share price of the
coming month's VWAP minus 1 cent. The brokerage now has a
very clean challenge: by selling the shares themselves over
the next month in a way that exactly matches the VWAP,
a penny per share is earned in profits. If they can beat the
VWAP by a penny, they make two cents per share. Such
small-margin, high-volume profits can be extremely
lucrative for a large brokerage. The importance of the VWAP
has led to many automated VWAP trading algorithms -
indeed, every major brokerage has at least one VWAP box,
        Price-Volume Model       Order Book Model                 Macroscopic Distribution Model
OWT     Θ(log(R)) (from [3])     O(log(R) log(N))                 2E(P^bins_maxprice);
                                                                  2(1 + ε)E(P^bins_maxprice) for ε-approx of P^bins_maxprice
VWAP    Θ(log(R));               O(log(R) log(N)) (from above);   (same as above, plus)
        Θ(log(Q));               O(log(Q)) for large N            2E(P^bins_vol);
        Ω(Q) fixed schedule;                                      (1 + ε)2E(P^bins_vol) for ε-approx of P^bins_vol
        1 for volume in [N, QN]
Figure 1: The table summarizes the results presented in this paper. The rows represent results for either the OWT
or VWAP criterion. The columns represent which model we are working in. The entry in the table is the competitive
ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better. The parameter R
represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the
maximum to minimum volume fluctuation in the respective model. (See Section 4 for a description of the Macroscopic
Distribution Model.) All the results for the OWT trading criterion (which is a stronger criterion) directly translate
to the VWAP criterion. However, in the VWAP setting, considering a restriction on the maximum to the minimum
volume fluctuation Q, leads to an additional class of results which depends on Q.
and some small companies focus exclusively on proprietary
VWAP trading technology.
In this paper, we provide the first study of VWAP trading
algorithms in an online, competitive ratio setting. We first
formalize the VWAP trading problem in a basic online model
we call the price-volume model, which can be viewed as a
generalization of previous theoretical online trading models
incorporating market volume information. In this model, we
provide VWAP algorithms and competitive ratios, and
compare this setting with the one-way trading (OWT) problem
studied in [3].
Our most interesting results, however, examine the VWAP
trading problem in a new online trading model capturing the
important recent phenomenon of limit order books in
financial markets. Briefly, a limit buy or sell order specifies both
the number of shares and the desired price, and will only
be executed if there is a matching party on the opposing
side, according to a well-defined matching procedure used
by all the major exchanges. While limit order books (the
list of limit orders awaiting possible future execution) have
existed since the dawn of equity exchanges, only very
recently have these books become visible to traders in real
time, thus opening the way to trading algorithms of all
varieties that attempt to exploit this rich market microstructure
data. Such data and algorithms are a topic of great current
interest on Wall Street [4].
We thus introduce a new online trading model
incorporating limit order books, and examine both the one-way and
VWAP trading problems in it. Our results are summarized
in Figure 1 (see the caption for a summary).
2. THE PRICE-VOLUME TRADING MODEL
We now present a trading model which includes both price
and volume information about the sequence of trades. While
this model is a generalization of previous formalisms for
online trading, it makes an infinite liquidity assumption which
fails to model the negative market impact that trading a
large number of shares typically has. This will be addressed
in the order book model studied in the next section.
A note on terminology: throughout the paper (unless
otherwise specified), we shall use the term market to describe
all activity or orders other than those of the algorithm
under consideration. The setting we consider can be viewed as
a game between our algorithm and the market.
2.1 The Model
In the price-volume trading model, we assume that the
intraday trading activity in a given stock is summarized by
a discrete sequence of price and volume pairs (pt, vt) for
t = 1, . . . , T. Here t = 0 corresponds to the day's
market open, and t = T to the close. While there is nothing
technically special about the time horizon of a single day, it
is particularly consistent with limit order book trading on
Wall Street. The pair (pt, vt) represents the fact that a total
of vt shares were traded at an (average) price per share pt
in the market between time t − 1 and t. Realistically, we
should imagine the number of intervals T being reasonably
large, so that it is sensible to assign a common approximate
price to all shares traded within an interval.
In the price-volume model, we shall make an infinite
liquidity assumption for our trading algorithms. More
precisely, in this online model, we see the price-volume sequence
one pair at a time. Following the observation of (pt, vt),
we are permitted to sell any (possibly fractional) number
of shares nt at the price pt. Let us assume that our goal
is to sell N shares over the course of the day. Hence, at
each time, we must select a (possibly fractional) number of
shares nt to sell at price pt, subject to the global constraint
Σ_{t=1}^T nt = N. It is thus assumed that if we have left over
shares to sell after time T − 1, we are forced to sell them at
the closing price of the market - that is, nT = N − Σ_{t=1}^{T−1} nt
is sold at pT . In this way we are certain to sell exactly N
shares over the course of the day; the only thing an
algorithm must do is determine the schedule of selling based on
the incoming market price-volume stream.
Any algorithm which sells fractional volumes can be
converted to a randomized algorithm which only sells integral
volumes with the same expected number of shares sold. If
we keep the hard constraint of selling exactly N shares, we
might incur an additional slight loss in the conversion. (Note
that we only allow fractional volumes in the price-volume
model, where liquidity is not an issue. In the order book
model to follow, we do not allow fractional volumes.)
In VWAP trading, the goal of an online algorithm A which
sells exactly N shares is not to maximize profits per se, but
to track the market VWAP. The market VWAP for an
intraday trading sequence S = (p1, v1), . . . , (pT , vT ) is simply the
average price paid per share over the course of the trading
day, i.e.,
VWAPM (S) = ( Σ_{t=1}^T pt vt ) / V
where V is the total daily volume, i.e., V = Σ_{t=1}^T vt. If on
the sequence S, the algorithm A sells its N stocks using the
volume sequence n1, . . . nT , then we analogously define the
VWAP of A on market sequence S by
VWAPA(S) = ( Σ_{t=1}^T pt nt ) / N.
Note that the market VWAP does not include the shares
that the algorithm sells.
The VWAP competitive ratio of A with respect to a set
of sequences Σ is then
RVWAP(A) = max_{S∈Σ} {VWAPM (S)/VWAPA(S)}
In the case that A is randomized, we generalize the definition
above by taking an expectation over VWAPA(S) inside the
max. We note that unlike on Wall Street, our definition of
VWAPM does not take our own trading into account. It is
easy to see that this makes it a more challenging criterion
to track.
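For concreteness, both quantities are straightforward to compute from a price-volume sequence and a selling schedule; a minimal sketch with our own function names.

def market_vwap(seq):
    # seq: list of (price, volume) pairs for the trading day.
    total_volume = sum(v for _, v in seq)
    return sum(p * v for p, v in seq) / total_volume

def algo_vwap(seq, schedule, N):
    # schedule: shares n_t sold at each step, with sum(schedule) == N.
    return sum(p * n for (p, _), n in zip(seq, schedule)) / N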
In contrast to the VWAP, another common measure of
the performance of an online selling algorithm would be its
one-way trading (OWT) competitive ratio [3] with respect
to a set of sequences Σ:
ROWT(A) = max_{S∈Σ} max_{1≤t≤T} {pt/VWAPA(S)}
where the algorithm's performance is compared to the largest
individual price appearing in the sequence S.
In both VWAP and OWT, we are comparing the average
price per share received by a selling algorithm to some
measure of market performance. In the case of OWT, we
compare to the rather ambitious benchmark of the high price of
the day, ignoring volumes entirely. In VWAP trading, we
have the more modest goal of comparing favorably to the
overall market average of the day. As we shall see, there are
some important commonalities and differences to these two
approaches. For now we note one simple fact: on any specific
sequence S, VWAPA(S) may be larger than VWAPM (S).
However, RVWAP(A) cannot be smaller than 1, since on any
sequence S in which all prices pt are identical, it is impossible
to get a better average price per share. Thus, for all
algorithms A, both RVWAP(A) and ROWT(A) are larger than
1, and the closer to 1 they are, the better A is tracking its
respective performance measure.
2.2 VWAP Results in the Price-Volume Model
As in previous work on online trading, it is generally not
possible to obtain finite bounds on competitive ratios with
absolutely no assumptions on the set of sequences
Σbounds on the maximum variation in price or volume are
required, depending on the exact setting. We thus introduce
the following two assumptions.
2.2.0.1 Volume Variability Assumption.
Let 0 < Vmin ≤ Vmax be known positive constants, and
define Q = Vmax /Vmin . For all intraday trading sequences
S ∈ Σ, the total daily volume V ∈ [Vmin , Vmax ].
2.2.0.2 Price Variability Assumption.
Let 0 < pmin ≤ pmax be known positive constants, and
define R = pmax/pmin. For all intraday trading sequences S ∈
Σ, the prices satisfy pt ∈ [pmin, pmax], for all t = 1, . . . , T.
Competitive ratios are generally taken over all sets Σ
consistent with at least one of these assumptions. To gain some
intuition consider the two trivial cases of R = 1 and Q = 1.
In the case of R = 1 (where there is no fluctuation in price),
any schedule is optimal. In the case of Q = 1 (where the
total volume V over the trading period is known), we can
gain a competitive ratio of 1 by selling (vt/V )N shares after each
time period.
For the OWT problem in the price-volume model,
volumes are irrelevant for the performance criterion, but for
the VWAP criterion they are central. For the OWT problem
under the price variability assumption, the results of [3]
established that the optimal competitive ratio was Θ(log(R)).
Our first result establishes that the optimal competitive
ratio for VWAP under the volume variability assumption is
Θ(log(Q)) and is achieved by an algorithm that ignores the
price data.
Theorem 1. In the price-volume model under the volume
variability assumption, there exists an online algorithm A
for selling N shares achieving competitive ratio RVWAP(A) ≤
2 log(Q). In addition, if only the volume variability (and not
the price variability) assumption holds, any online algorithm
A for selling N shares has RVWAP(A) = Ω(log(Q)).
Proof. (Sketch) For the upper bound, the idea is
similar to the price reservation algorithm of [3] for the OWT
problem, and similar in spirit to the general technique of
classify and select [1]. Consider algorithms which use a parameter V̂, which is interpreted as an estimate for the total volume for the day. Then at each time t, if the market price and volume is (pt, vt), the algorithm sells a fraction vt/V̂ of its shares. We consider a family of log(Q) such algorithms, where algorithm Ai uses V̂ = Vmin·2^(i−1). Clearly, one of the Ai has a competitive ratio of 2. We can derive an O(log(Q)) VWAP competitive ratio by running these algorithms in parallel, and letting each algorithm sell N/log(Q) shares. (Alternatively, we can randomly select one Ai and guarantee the same expected competitive ratio.)
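To make the construction concrete, the following is a minimal Python sketch of the parallel volume-guessing strategy; the function name, input format, and the convention of crediting only the shares actually sold are our own illustrative assumptions, not part of the paper.

import math

def vwap_sell_parallel(day, N, v_min, v_max):
    """Run ~log(Q) volume-guessing sub-strategies in parallel (proof of
    Theorem 1). Sub-strategy i assumes total daily volume
    V_hat = v_min * 2**i and sells the fraction v_t / V_hat of its
    N / K share allotment after each (price, volume) step."""
    K = max(1, math.ceil(math.log2(v_max / v_min)))  # number of guesses, ~log(Q)
    remaining = [N / K] * K                          # shares left per sub-strategy
    revenue = 0.0
    for price, volume in day:                        # day: list of (p_t, v_t)
        for i in range(K):
            v_hat = v_min * 2 ** i                   # hypothetical total volume
            sell = min(remaining[i], (N / K) * volume / v_hat)
            remaining[i] -= sell
            revenue += sell * price
    sold = N - sum(remaining)
    # Average price per share actually sold; the sub-strategy whose guess
    # satisfies V <= V_hat <= 2V tracks the market VWAP up to a factor of 2.
    return revenue / sold if sold > 0 else 0.0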
We now sketch the proof of the lower bound, which
relates performance in the VWAP and OWT problems. Any
algorithm that is c-competitive in the VWAP setting
(under fixed Q) is 3c-competitive in the OWT setting with
R = Q/2. To show this, we take any sequence S of prices
for the OWT problem, and convert it into a price-volume
sequence for the VWAP problem. The prices in the VWAP
sequence are the same as in S. To construct the volumes in the
VWAP sequence, we segment the prices in S into log(R) intervals [2^(i−1)·pmin, 2^i·pmin). Suppose pt ∈ [2^(i−1)·pmin, 2^i·pmin), and this is the first time in S that a price has fallen in this interval. Then in the VWAP sequence we set the volume vt = 2^(i−1). If this is not the first visit to the interval containing pt, we set vt = 0. Assume that the maximum price
in S is pmax . The VWAP of our sequence is at least pmax /3.
Since we had a c competitive algorithm, its average sell is at
least pmax /3c. The lower bound now follows using the lower
bound in [3].
An alternative approach to VWAP is to ignore the
volumes in favor of prices, and apply an algorithm for the OWT
problem. Note that the lower bound in this theorem, unlike
in the previous one, only assumes a price variation bound.
Theorem 2. In the price-volume model under the price
variability assumption, there exists an online algorithm A
for selling N shares achieving competitive ratio RVWAP(A) =
O(log(R)). In addition, if only the price variability (and not the volume variability) assumption holds, any online algorithm A for selling N shares has RVWAP(A) = Ω(log(R)).
Proof. (Sketch) Follows immediately from the results of [3] for OWT: the upper bound from the simple fact that for any sequence S, VWAPM(S) is less than max_{1≤t≤T}{pt}, and the lower bound from a reduction to OWT.
Theorems 1 and 2 demonstrate that one can achieve
logarithmic VWAP competitive ratios under the assumption of
either bounded variability of total volume or bounded
variability of maximum price. If both assumptions hold, it is
possible to give an algorithm accomplishing the minimum
of log(Q) and log(R). This flexibility of approach derives
from the fact that the VWAP is a quantity in which both
prices and volumes matter, as opposed to OWT.
2.3 Related Results in the Price-Volume Model
All of the VWAP algorithms we have discussed so far
make some use of the daily data (pt, vt) as it unfolds,
using either the price or volume information. In contrast, a
fixed schedule VWAP algorithm has a predetermined
distribution {f1, f2, . . . fT }, and simply sells ftN shares at time t,
independent of (pt, vt). Fixed schedule VWAP algorithms,
or slight variants of them, are surprisingly common on Wall
Street, and the schedule is usually derived from historical
intraday volume data. Our next result demonstrates that
such algorithms can perform considerably worse than
dynamically adaptive algorithms in terms of the worst case
competitive ratio.
Theorem 3. In the price-volume model under both the
volume and price variability assumptions, any fixed schedule
VWAP algorithm A for selling N shares has sell VWAP
competitive ratio RVWAP(A) = Ω(min(T, R)).
The proofs of all the results in this subsection are in the
Appendix.
So far our emphasis has been on VWAP algorithms that
must sell exactly N shares. In many realistic circumstances,
however, there is actually some flexibility in the precise
number of shares to be sold. For instance, this is true at
large brokerages, where many separate VWAP trades may
be pooled and executed by a common algorithm, and the
firm would be quite willing to carry a small position of
unsold shares overnight if it resulted in better execution prices.
The following theorem (which interestingly has no analogue
for the OWT problem) demonstrates that this trade-off in
shares sold and performance can be realized dramatically in
our model. It states that if we are willing to let the number
of shares sold vary with Q, we can in fact achieve a VWAP
competitive ratio of 1.
Theorem 4. In the price-volume model under the volume variability assumption, there exists an algorithm A that always sells between N and QN shares and for which the average price per sold share is exactly VWAPM(S).
In many online problems, there is a clear distinction
between benefit problems and cost problems [2]. In the
VWAP setting, selling shares is a benefit problem, and
buying shares is a cost problem. The definitions of the competitive ratios, R^buy_VWAP(A) and R^buy_OWT(A), for algorithms which buy exactly N shares are max_{S∈Σ}{VWAPA(S)/VWAPM(S)} and max_{S∈Σ} max_{t}{VWAPA(S)/pt}, respectively. Even though Theorem 4 also holds for buying, in general, the competitive ratio of the buy (cost) problem is much higher, as stated in the following theorem.

[Figure 2: Sample Island order books for MSFT.]
Theorem 5. In the price-volume model under the volume and price variability assumptions, there exists an online algorithm A for buying N shares achieving buy VWAP competitive ratio R^buy_VWAP(A) = O(min{Q, √R}). In addition, any online algorithm A for buying N shares has buy VWAP competitive ratio R^buy_VWAP(A) = Ω(min{Q, √R}).
3. A LIMIT ORDER BOOK TRADING
MODEL
Before we can define our online trading model based on
limit order books, we give some necessary background on
the detailed mechanics of financial markets, which are
sometimes referred to as market microstructure. We then provide
results and algorithms for both the OWT and VWAP
problems.
3.1 Background on Limit Order Books and
Market Microstructure
A fundamental distinction in stock trading is that between
a limit order and a market order. Suppose we wish to
purchase 1000 shares of Microsoft (MSFT) stock. In a limit
order, we specify not only the desired volume (1000 shares),
but also the desired price. Suppose that MSFT is currently
trading at roughly $24.07 a share (see Figure 2, which shows
an actual snapshot of a recent MSFT order book on
Island (www.island.com), a well-known electronic exchange
for NASDAQ stocks), but we are only willing to buy the
1000 shares at $24.04 a share or lower. We can choose to
submit a limit order with this specification, and our order
will be placed in a queue called the buy order book, which
is ordered by price, with the highest offered unexecuted buy
price at the top (often referred to as the bid). If there are
multiple limit orders at the same price, they are ordered by
time of arrival (with older orders higher in the book). In the
example provided by Figure 2, our order would be placed
immediately after the extant order for 5,503 shares at $24.04;
though we offer the same price, this order has arrived before
ours. Similarly, a sell order book for sell limit orders (for
instance, we might want to sell 500 shares of MSFT at $24.10
or higher) is maintained, this time with the lowest sell price
offered (often referred to as the ask).
Thus, the order books are sorted from the most
competitive limit orders at the top (high buy prices and low sell
prices) down to less competitive limit orders. The bid and
ask prices (which again, are simply the prices in the limit
orders at the top of the buy and sell books, respectively)
together are sometimes referred to as the inside market, and
the difference between them as the spread. By definition,
the order books always consist exclusively of unexecuted
orders - they are queues of orders hopefully waiting for the
price to move in their direction.
How then do orders get executed? There are two
methods. First, any time a market order arrives, it is immediately
matched with the most competitive limit orders on the
opposing book. Thus, a market order to buy 2000 shares is
matched with enough volume on the sell order book to fill
the 2000 shares. For instance, in the example of Figure 2,
such an order would be filled by the two limit sell orders
for 500 shares at $24.069, the 500 shares at $24.07, the 200
shares at $24.08, and then 300 of the 1981 shares at $24.09.
The remaining 1681 shares of this last limit order would
remain as the new top of the sell limit order book. Second,
if a buy (sell, respectively) limit order comes in above the
ask (below the bid, respectively) price, then the order is
matched with orders on the opposing books. It is important
to note that the prices of executions are the prices specified
in the limit orders already in the books, not the prices of the
incoming order that is immediately executed.
Every market or limit order arrives atomically and
instantaneously - there is a strict temporal sequence in which
orders arrive, and two orders can never arrive simultaneously.
This gives rise to the definition of the last price of the
exchange, which is simply the last price at which the exchange
executed an order. It is this quantity that is usually meant
when people casually refer to the (ticker) price of a stock.
Note that a limit buy (sell, respectively) order with a
price of infinity (0, respectively) is effectively a market
order. We shall thus assume without loss of generality that all orders are placed as limit orders. Although limit orders
which are unexecuted may be removed by the party which
placed them, for simplicity, we assume that limit orders are
never removed from the books.
We refer the reader to [4] for further discussion of modern
electronic exchanges and market microstructure.
3.2 The Model
The online order book trading model is intended to capture
the realistic details of market microstructure just discussed
in a competitive ratio setting. In this refined model, a day's
market activity is described by a sequence of limit orders
(pt, vt, bt). Here bt is a bit indicating whether the order is
a buy or sell order, while pt is the limit order price and vt
the number of shares desired. Following the arrival of each
such limit order, an online trading algorithm is permitted
to place its own limit order. These two interleaved sources
(market and algorithm) of limit orders are then simply
operated on according to the matching process described in
Section 3.1. Any limit order that is not immediately
executable according to this process is placed in the appropriate
(buy or sell) book for possible future execution; arriving
orders that can be partially or fully executed are so executed,
with any residual shares remaining on the respective book.
The goal of a VWAP or OWT selling algorithm is
essentially the same as in the price-volume model, but the
context has changed in the following two fundamental ways.
First, the assumption of infinite liquidity in the price-volume
model is eliminated entirely. The number of shares available
at any given price is restricted to the total volume of limit
orders offering that price. Second, all incoming orders, and
therefore the complete limit order books, are assumed to
be visible to the algorithm. This is consistent with modern
electronic financial exchanges, and indeed is the source of
much current interest on Wall Street [4].
In general, the definition of competitive ratios in the order
book model is complicated by the fact that now our
algorithm"s activity influences the sequence of executed prices
and volumes. We thus first define the execution sequence
determined by a limit order sequence (placed by the
market and our algorithm). Let S = (p1, v1, b1), . . . , (pT, vT, bT) be a limit order sequence placed by the market, and let S′ = (p′1, v′1, b′1), . . . , (p′T, v′T, b′T) be a limit order sequence placed by our algorithm (unless otherwise specified, all b′t are of the sell type). Let merge(S, S′) be the merged sequence (p1, v1, b1), (p′1, v′1, b′1), . . . , (pT, vT, bT), (p′T, v′T, b′T), which is the time sequence of orders placed by the market and algorithm. Note that the algorithm has the option of not placing an order, which we can view as a zero volume order.
If we conduct the order book maintenance and order execution process described in Section 3.1 on the sequence merge(S, S′), then at irregular intervals a trade occurs for some number of shares and some price. In each executed trade,
the selling party is either the market or the algorithm. Let
execM(S, S′) = (q1, w1), (q2, w2), . . . be the sequence of executions where the market (that is, a party other than the algorithm) was the selling party, where the qt are the execution prices and wt the execution volumes. Similarly, we define execA(S, S′) = (r1, x1), (r2, x2), . . . to be the sequence of executions in which the algorithm was the selling party. Thus, execA(S, S′) ∪ execM(S, S′) is the set of all executions. We generally expect the number of executions to be (possibly much) smaller than T.
The revenue of the algorithm and the market are defined as:

REVM(S, S′) ≡ Σ_{t} qt·wt ,   REVA(S, S′) ≡ Σ_{t} rt·xt
Note that both these quantities are solely determined by the
execution sequences execM(S, S′) and execA(S, S′), respectively.
For an algorithm A which is constrained to sell exactly N shares, we define the OWT competitive ratio of A, ROWT(A), as the maximum ratio (under any S ∈ Σ) of the revenue obtained by A, as compared to the revenue obtained by an optimal offline algorithm A*. More formally, for A* which is constrained to sell exactly N shares, we define

ROWT(A) = max_{S∈Σ} max_{A*} { REVA*(S, S*) / REVA(S, S′) }

where S* is the limit order sequence placed by A* on S. If the algorithm A is randomized then we take the appropriate expectation with respect to S′ ∼ A.
We define the VWAP competitive ratio, RVWAP(A), as the maximum ratio (under any S ∈ Σ) between the market and algorithm VWAPs. More formally, define VWAPM(S, S′) as REVM(S, S′) / Σ_{t} wt, where the denominator is just the total executed volume of orders placed by the market. Similarly, we define VWAPA(S, S′) as REVA(S, S′)/N, since we assume the algorithm sells no more than N shares (this definition implicitly assumes that A gets a 0 price for unsold shares). The VWAP competitive ratio of A is then:

RVWAP(A) = max_{S∈Σ} { VWAPM(S, S′) / VWAPA(S, S′) }

where S′ is the online sequence of limit orders generated by A in response to the sequence S.
3.3 OWT Results in the Order Book Model
For the OWT problem in the order book model, we
introduce a more subtle version of the price variability
assumption. This is due to the fact that our algorithm's trading
can impact the high and low prices of the day. For the
assumption below, note that execM (S, ∅) is the sequence of
executions without the interaction of our algorithm.
Order Book Price Variability Assumption. Let 0 < pmin ≤ pmax be known positive constants, and define R = pmax/pmin. For all intraday trading sequences S ∈ Σ, the prices pt in the sequence execM(S, ∅) satisfy pt ∈ [pmin, pmax], for all t = 1, . . . , T.
Note that this assumption does not imply that the ratios of high to low prices under the sequences execM(S, S′) or execA(S, S′) are bounded by R. In fact, the ratio in the sequence execA(S, S′) could be infinite if the algorithm ends up selling some stocks at a 0 price.
Theorem 6. In the order book model under the order
book price variability assumption, there exists an online
algorithm A for selling N shares achieving sell OWT competitive
ratio ROWT(A) = 2 log(R) log(N).
Proof. The algorithm A works by guessing a price p in the set {pmin·2^i : 1 ≤ i ≤ log(R)} and placing a sell limit order for all N shares at the price p at the beginning of the day. (Alternatively, algorithm A can place log(R) sell limit orders, where the i-th one has price 2^i·pmin and volume N/log(R).) By placing an order at the beginning of the day, the algorithm undercuts all sell orders that will be placed during the day for a price of p or higher, meaning the algorithm's N shares must be filled first at this price. Hence, if there were k shares that would have been sold at price p or higher without our activity, then A would obtain revenue of at least kp.
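The order placement itself is simple enough to state in a few lines; the sketch below (our own naming) returns the alternative form of the strategy, one order per price level, and its output could be fed directly into the OrderBook sketch above.

import math

def opening_sell_orders(N, p_min, p_max):
    """Sketch of the Theorem 6 strategy: at the start of the day, place
    log(R) sell limit orders, the i-th at price 2**i * p_min with
    volume N / log(R)."""
    K = max(1, math.ceil(math.log2(p_max / p_min)))   # log(R) price levels
    return [(p_min * 2 ** i, N / K) for i in range(1, K + 1)]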
We define {pj} to be the multiset of prices of
individual shares that are either executed or are buy limit order
shares that remained unexecuted, excluding the activity of
our algorithm (that is, assuming our algorithm places no
orders). Assume without loss of generality that p1 ≥ p2 ≥ . . ..
Consider guessing the k-th highest such price, pk. If an order for N shares is placed at the day's start at price pk, then we are guaranteed to obtain a return of k·pk. Let k* = argmax_{k}{k·pk}. We can view our algorithm as attempting to guess pk*, and succeeding if the guess p satisfies p ∈ [pk*/2, pk*]. Hence, we are 2log(R)-competitive with the quantity max_{1≤k≤N} k·pk. Note that

ρ ≡ Σ_{i=1}^{N} pi = Σ_{i=1}^{N} (1/i)·(i·pi) ≤ ( max_{1≤k≤N} k·pk ) · Σ_{i=1}^{N} (1/i) ≤ log(N) · max_{1≤k≤N} k·pk,

where ρ is defined as the sum of the top N prices pi without A's involvement.
Similarly, let {p′j} be the multiset of prices of individual executed shares, or the prices of unexecuted buy order shares, but now including the orders placed by some selling algorithm A′. We now wish to show that for all algorithms A′ which sell N shares, REVA′ ≤ Σ_{i=1}^{N} p′i ≤ ρ.
Essentially, this inequality states the intuitive idea that a selling
algorithm can only lower executed or unmatched buy
order share prices. To prove this, we use induction to show
that the removal of the activity of a selling algorithm causes
these prices to increase. First, remove the last share in the
last sell order placed by either A′ or the market on an arbitrary sequence merge(S, S′); by this we mean, take the last sell order placed by A′ or the market and decrease its volume by one share. After this modification, the top N
prices p1 . . . pN will not decrease. This is because either this
sell order share was not executed, in which case the claim
is trivially true, or, if it was executed, the removal of this
sell order share leaves an additional unexecuted buy order
share of equal or higher price. For induction, assume that if
we remove a share from any sell order that was placed, by A′ or the market, at or after time t, then the top N prices do not decrease. We now show that if we remove a share from the last sell order that was placed by A′ or the market before time t, then the top N prices do not decrease. If this
sell order share was not executed, then the claim is trivially
true. Else, if the sell order share was executed, then the claim
is true because by removing this executed share from the
sell order either: i) the corresponding buy order share (of
equal or higher value) is unmatched on the remainder of the
sequence, in which case the claim is true; or ii) this buy
order matches some sell order share at an equal or higher
price, which has the effect of removing a share from a sell
order on the remainder of the sequence, and, by the
inductive assumption, this can only increase prices. Hence, we have proven that for all A′ which sell N shares, REVA′ ≤ ρ.
We have now established that our revenue satisfies

2·log(R)·E_{S′∼A}[REVA(S, S′)] ≥ max_{1≤k≤N} {k·pk} ≥ ρ/log(N) ≥ max_{A′} {REVA′} / log(N),

where A′ performs an arbitrary sequence of N sell limit orders.
3.4 VWAP Results in the Order Book Model
The OWT algorithm from Theorem 6 can be applied to
obtain the following VWAP result:
Corollary 7. In the order book model under the order
book price variability assumption, there exists an online
algorithm A for selling N shares achieving sell VWAP
competitive ratio RVWAP(A) = O(log(R) log(N)).
We now make a rather different assumption on the
sequences S.
Bounded Order Volume and Max Price Assumption.
The set of sequences Σ satisfies the following two
properties. First, we assume that each order placed by the market
is of volume less than γ, which we view as a mild assumption
since typically single orders on the market are not of high
volume (due to liquidity issues). This assumption allows our
algorithm to place at least one limit order at a time
interleaved with approximately γ market executions. Second, we
assume that there is large volume in the sell order books
below the price pmax , which means that no orders placed by
the market will be executed above the price pmax . The
simplest way to instantiate this latter assumption in the order
book model is to assume that each sequence S ∈ Σ starts
by placing a huge number of sell orders (more than Vmax )
at price pmax .
Although this assumption has a maximum price
parameter, it does not imply that the price ratio R is finite, since
it does not imply any lower bound on the prices of buy or
executed shares (aside from the trivial one of 0).
Theorem 8. Consider the order book model under the bounded order volume and max price assumption. There exists an algorithm A such that, after exactly γN market executions have occurred, A has sold at most N shares and

REVA(S, S′)/N = VWAPA(S, S′) ≥ (1 − ε)·VWAPM(S, S′) − pmax/(εN),

where S′ is a sequence of N sell limit orders generated by A when observing S.
Proof. The algorithm divides the trading day into
volume intervals whose real-time duration may vary. For each
period i in which γ shares have been executed in the
market, the algorithm computes the market VWAP of only those
shares traded in period i; let us denote this by VWAPi.
Following this ith volume interval, the algorithm places a limit
order to sell exactly one share at a price close to VWAPi.
More precisely, the algorithm only places orders at the discrete prices (1−ε)·pmax, (1−ε)^2·pmax, . . .. Following volume interval i, the algorithm places a limit order to sell one share at the discretized price that is closest to VWAPi, but which is strictly smaller.
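The following Python sketch (our own simplification, which treats each block of roughly γ executed market shares as one volume interval) shows the order placement rule just described.

import math

def vwap_tracking_orders(executions, gamma, eps, p_max):
    """After each volume interval of ~gamma executed market shares, emit a
    one-share sell limit order at the largest discretized price
    (1 - eps)**k * p_max strictly below that interval's VWAP."""
    orders, cost, vol = [], 0.0, 0
    for price, shares in executions:        # market executions (price, volume)
        cost += price * shares
        vol += shares
        if vol >= gamma:                    # one volume interval completed
            interval_vwap = cost / vol
            # smallest integer k with (1 - eps)**k * p_max < interval_vwap
            k = math.floor(math.log(interval_vwap / p_max) / math.log(1 - eps)) + 1
            orders.append((1 - eps) ** k * p_max)
            cost, vol = 0.0, 0
    return orders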
For the analysis, we begin by noting that if all of the algorithm's limit orders are executed during the day, the total revenue received by the algorithm would be at least (1 − ε)·VWAPM(S, S′)·N. To see this, it suffices to note that VWAPM(S, S′) is a uniform mixture of the VWAPi (since by definition they each cover the same amount of market volume); and if all the algorithm's limit orders were executed, they each received more than (1 − ε)·VWAPi dollars for the interval i they followed.
We now count the potential lost revenue of the algorithm due to unexecuted limit orders. By the assumption that individual orders are placed with volume less than γ, our algorithm is able to place a limit order during every block of γ traded shares. Hence, after γN market orders have been executed, A has placed N orders in the market.
Note that there can be at most one limit order (and thus,
at most one share) left unexecuted at each level of the
discretized price ladder defined above. This is because
following interval i, the algorithm places its limit order strictly
below VWAPi, so if VWAPj ≥ VWAPi for j > i, this limit
order must have been executed. Thus unexecuted limit
orders bound the VWAPs of the remainder of the day,
resulting in at most one unexecuted order per price level.
A bound on the lost revenue is thus the sum of the discretized prices: Σ_{i=1}^{∞} (1−ε)^i·pmax ≤ pmax/ε. Clearly our algorithm has sold at most N shares.
Note that as N becomes large, VWAPA approaches (1 − ε) times the market VWAP. If we knew that the final total volume of the market executions is V, then we can set γ = V/N, assuming that γ >> 1. If we have only an upper and lower bound on V, we should be able to guess γ and incur only a logarithmic loss. The following assumption tries to capture the market volume variability.
Order Book Volume Variability Assumption.
We now assume that the total volume (which includes
the shares executed by both our algorithm and the market)
is variable within some known region and that the market
volume will be greater than our algorithm's volume. More formally, for all S ∈ Σ, assume that the total volume V of shares traded in execM(S, S′), for any sequence S′ of N sell limit orders, satisfies 2N ≤ Vmin ≤ V ≤ Vmax. Let Q = Vmax/Vmin.
The following corollary is derived using a constant ε = 1/2 and observing that if we set γ such that V ≤ γN ≤ 2V, then our algorithm will place between N/2 and N limit orders.
Corollary 9. In the order book model, if the bounded order volume and max price assumption and the order book volume variability assumption hold, there exists an online algorithm A for selling at most N shares such that

VWAPA(S, S′) ≥ (1/(4·log(Q)))·VWAPM(S, S′) − 2·pmax/N.
[Figure 3 panels: QQQ (log(Q)=4.71, E=3.77); JNPR (log(Q)=5.66, E=3.97); MCHP (log(Q)=5.28, E=3.86); CHKP (log(Q)=6.56, E=4.50).]
Figure 3: Here we present bounds from Section 4 based on the empirical volume distributions for four real stocks: QQQ, MCHP, JNPR, and CHKP. The plots show histograms for the total daily volumes transacted on Island for these stocks, in the last year and a half, along with the corresponding values of log(Q) and E(P^bins_vol) (denoted by "E"). We assume that the minimum and maximum daily volumes in the data correspond to Vmin and Vmax, respectively. The worst-case competitive ratio bounds (which are twice log(Q)) of our algorithm for those stocks are 9.42, 10.56, 11.32, and 13.20, respectively. The corresponding bounds on the competitive ratio performance of our algorithm under the volume distribution model (which are twice E(P^bins_vol)) are better: 7.54, 7.72, 7.94, and 9.00, respectively (a 20-40% relative improvement). Using a finer volume binning along with a slightly more refined bound on the competitive ratio, we can construct algorithms that, using the empirical volume distribution given as correct, guarantee even better competitive ratios of 2.76, 2.73, 2.75, and 3.17, respectively, for those stocks (details omitted).
4. MACROSCOPIC DISTRIBUTION
MODELS
We conclude our results with a return to the price-volume
model, where we shall introduce some refined methods of
analysis for online trading algorithms. We leave the
generalization of these methods to the order book model for
future work.
The competitive ratios defined so far measure performance
relative to some baseline criterion in the worst case over all
market sequences S ∈ Σ. It has been observed in many
online settings that such worst-case metrics can yield
pessimistic results, and various relaxations have been
considered, such as permitting a probability distribution over the
input sequence.
We now consider distributional models that are
considerably weaker than assuming a distribution over complete
market sequences S ∈ Σ. In the volume distribution model,
we assume only that there exists a distribution Pvol over the
total volume V traded in the market for the day, and then
examine the worst-case competitive ratio over sequences
consistent with the randomly chosen volume. More precisely,
we define
RVWAP(A, Pvol) = E_{V∼Pvol}[ max_{S∈seq(V)} VWAPM(S) / VWAPA(S) ].
Here V ∼ Pvol denotes that V is chosen with respect to distribution Pvol, and seq(V) ⊂ Σ is the set of all market sequences (p1, v1), . . . , (pT, vT) satisfying Σ_{t=1}^{T} vt = V.
Similarly, for OWT, we can define

ROWT(A, Pmaxprice) = E_{p∼Pmaxprice}[ max_{S∈seq(p)} p / VWAPA(S) ].
Here Pmaxprice is a distribution over just the maximum price
of the day, and we then examine worst-case sequences
consistent with this price (seq(p) ⊂ Σ is the set of all market
sequences satisfying max1≤t≤T pt = p). Analogous buy-side
definitions can be given.
We emphasize that in these models, only the distribution
of maximum volume and price is known to the algorithm.
We also note that our probabilistic assumptions on S are
considerably weaker than typical statistical finance
models, which would posit a detailed stochastic model for the
step-by-step evolution of (pt, vt). Here we instead permit
only a distribution over crude, macroscopic measures of the
entire day"s market activity, such as the total volume and
high price, and analyze the worst-case performance
consistent with these crude measures. For this reason, we refer to
such settings as the macroscopic distribution model.
The work of El-Yaniv et al. [3] examines distributional
assumptions similar to ours, but they emphasize the
worst-case choices for the distributions as well, and show that this
leads to results no better than the original worst-case
analysis over all sequences. In contrast, we feel that the analysis
of specific distributions Pvol and Pmaxprice is natural in many
financial contexts and our preliminary experimental results
show significant improvements when this rather crude
distributional information is taken into account (see Figure 3).
Our results in the VWAP setting examine the cases where
these distributions are known exactly or only approximately.
Similar results can be obtained for macroscopic distributions
of maximum daily price for the one-way trading setting.
4.1 Results in the Macroscopic Distribution
Model
We begin by noting that the algorithms examined so far
work by binning total volumes or maximum prices into bins
of exponentially increasing size, and then guessing the
index of the bin in which the actual quantity falls. It is
thus natural that the macroscopic distribution model
performance of such algorithms (which are common in
competitive analysis) might depend on the distribution of the true
bin index.
In the remainder, we assume that Q is a power of 2 and the base of the logarithm is 2. Let Pvol denote the distribution of total daily market volume. We define the related distribution P^bins_vol over bin indices i as follows: for all i = 1, . . . , log(Q) − 1, P^bins_vol(i) is equal to the probability, under Pvol, that the daily volume falls in the interval [Vmin·2^(i−1), Vmin·2^i), and P^bins_vol(log(Q)) is the probability for the last interval [Vmax/2, Vmax].
We define E as

E(P^bins_vol) ≡ ( E_{i∼P^bins_vol}[ √(1/P^bins_vol(i)) ] )^2 = ( Σ_{i=1}^{log(Q)} √(P^bins_vol(i)) )^2.
Since the support of P^bins_vol has only log(Q) elements, E(P^bins_vol) can vary from 1 (for distributions Pvol that place all of their weight in only one of the log(Q) intervals between Vmin, 2Vmin, 4Vmin, . . . , Vmax) to log(Q) (for distributions Pvol in which the total daily volume is equally likely to fall in any one of these intervals). Note that distributions Pvol of this latter type are far from uniform over the entire range [Vmin, Vmax].
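E(P^bins_vol) is easy to estimate from historical data; the helper below (our own) bins a sample of total daily volumes exactly as above and evaluates the squared sum of square roots. Applied to daily volume samples like those behind Figure 3, this is the statistic reported there as E.

import math

def estimate_E(volumes):
    """Estimate E(P^bins_vol) = (sum_i sqrt(P^bins_vol(i)))**2 from a
    sample of total daily volumes, using bins [Vmin*2**(i-1), Vmin*2**i)."""
    v_min, v_max = min(volumes), max(volumes)
    K = max(1, math.ceil(math.log2(v_max / v_min)))   # log(Q) bins
    counts = [0] * K
    for v in volumes:
        i = min(K - 1, int(math.log2(v / v_min)))      # bin index of volume v
        counts[i] += 1
    probs = [c / len(volumes) for c in counts]
    return sum(math.sqrt(p) for p in probs) ** 2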
Theorem 10. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of the total volume distribution Pvol, achieves RVWAP(A, Pvol) ≤ 2E(P^bins_vol).
All proofs in this section are provided in the appendix.
As a concrete example, consider the case in which Pvol is the uniform distribution over [Vmin, Vmax]. In that case, P^bins_vol is exponentially increasing and peaks at the last bin, which, having the largest width, also has the largest weight. In this case E(P^bins_vol) is a constant (i.e., independent of Q), leading to a constant competitive ratio. On the other hand, if Pvol is exponential, then P^bins_vol is uniform, leading to an O(log(Q)) competitive ratio, just as in the more adversarial price-volume setting discussed earlier. In Figure 3, we provide additional specific bounds obtained for empirical total daily volume distributions computed for some real stocks.
We now examine the setting in which Pvol is unknown, but an approximation P̃vol is available. Let us define

C(P^bins_vol, P̃^bins_vol) = ( Σ_{j=1}^{log(Q)} √(P̃^bins_vol(j)) ) · ( Σ_{i=1}^{log(Q)} P^bins_vol(i) / √(P̃^bins_vol(i)) ).

C is minimized at C(P^bins_vol, P^bins_vol) = E(P^bins_vol), and C may be infinite if P̃^bins_vol(i) is 0 when P^bins_vol(i) > 0.
Theorem 11. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of an approximation P̃vol of Pvol, achieves RVWAP(A, Pvol) ≤ 2C(P^bins_vol, P̃^bins_vol).
As an example of this result, suppose our approximation obeys (1/α)·P^bins_vol(i) ≤ P̃^bins_vol(i) ≤ α·P^bins_vol(i) for all i, for some α > 1. Thus our estimated bin index probabilities are all within a factor of α of the truth. Then it is easy to show that C(P^bins_vol, P̃^bins_vol) ≤ α·E(P^bins_vol), so according to Theorems 10 and 11 our penalty for using the approximate distribution is a factor of α in the competitive ratio.
5. REFERENCES
[1] B. Awerbuch, Y. Bartal, A. Fiat, and A. Rosén. Competitive non-preemptive call control. In Proc. 5th ACM-SIAM Symp. on Discrete Algorithms, pages 312-320, 1994.
[2] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[3] R. El-Yaniv, A. Fiat, R. M. Karp, and G. Turpin. Optimal search and one-way trading online algorithms. Algorithmica, 30:101-139, 2001.
[4] M. Kearns and L. Ortiz. The Penn-Lehman automated trading project. IEEE Intelligent Systems, 2003. To appear.
6. APPENDIX
6.1 Proofs from Subsection 2.3
Proof. (Sketch of Theorem 3) W.l.o.g., assume that Q = 1 and the total volume is V. Consider the time t where the fixed schedule f sells the least; then ft ≤ 1/T, i.e., the schedule sells at most N/T shares at time t. Consider the sequences where at time t we have pt = pmax and vt = V, and for all times t′ ≠ t we have pt′ = pmin and vt′ = 0. The market VWAP is pmax, while the fixed schedule's revenue is at most (N/T)·pmax + (N − N/T)·pmin, an average price per share of (1/T)·pmax + (1 − 1/T)·pmin, which yields the Ω(min(T, R)) ratio.
Proof. (Sketch of Theorem 4) The algorithm simply sells ut = (vt/Vmin)·N shares at time t. The total number of shares sold, U, is clearly more than N, and

U = Σ_{t} ut = Σ_{t} (vt/Vmin)·N = (V/Vmin)·N ≤ QN.

The average price is

VWAPA(S) = (Σ_{t} pt·ut)/U = Σ_{t} pt·(vt/V) = VWAPM(S),

where we used the fact that ut/U = vt/V.
Proof. (of Theorem 5) We start with the proof of the lower bound. Consider the following scenario. In an initial phase of the day we have a price of √R·pmin and a total volume of Vmin. We observe how many shares the online algorithm has bought. If it has bought more than half of the shares, the remaining time steps have price pmin and volume Vmax − Vmin. Otherwise, the remaining time steps have price pmax and negligible volume.
In the first case the online algorithm has paid at least √R·pmin/2 per share, while the VWAP is at most √R·pmin/Q + pmin. Therefore, in this case the competitive ratio is Ω(Q). In the second case the online algorithm has to buy at least half of the shares at pmax, so its average cost is at least pmax/2. The market VWAP is √R·pmin = pmax/√R, hence the competitive ratio is Ω(√R).
For the upper bound, we can get a √R competitive ratio by buying all the shares once the price drops below √R·pmin. The Q upper bound is derived by running an algorithm that assumes the volume is Vmin. The online algorithm pays a cost of p, while the VWAP will be at least p/Q.
6.2 Proofs from Section 4
Proof. (Sketch of Theorem 10) We use the idea of guessing the total volume from Theorem 1, but now allow for the possibility of an arbitrary (but known) distribution over the total volume. In particular, consider constructing a distribution G^bins_vol over a set of volume values using Pvol and use it to guess the total volume V. Let the algorithm guess V̂ = Vmin·2^i with probability G^bins_vol(i). Then note that, for any price-volume sequence S, if V ∈ [Vmin·2^(i−1), Vmin·2^i], then VWAPA(S) ≥ G^bins_vol(i)·VWAPM(S)/2. This implies an upper bound on RVWAP(A, Pvol) in terms of G^bins_vol. We then get that G^bins_vol(i) ∝ √(P^bins_vol(i)) minimizes the upper bound, which leads to the upper bound stated in the theorem.
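A sketch of the resulting randomized guess follows (our own helper; p_bins is assumed to hold the vector P^bins_vol(1), . . . , P^bins_vol(log Q)):

import math, random

def guess_total_volume(p_bins, v_min):
    """Draw bin index i with probability proportional to
    sqrt(P^bins_vol(i)) and guess V_hat = v_min * 2**i, as in the proof
    sketch of Theorem 10."""
    weights = [math.sqrt(p) for p in p_bins]
    i = random.choices(range(1, len(p_bins) + 1), weights=weights)[0]
    return v_min * 2 ** i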
Proof. (Sketch of Theorem 11) Replace Pvol with P̃vol in the expression for G^bins_vol in the proof sketch for the last result.
| share;modern financial market;online algorithm;online trade;sequence of trade;trade sequence;competitive analysis;market order;volume weighted average price trading model;vwap;competitive algorithm;stock trading;online model;limit order book trading model
train_J-74 | On Cheating in Sealed-Bid Auctions | Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealedbid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction. | 1. INTRODUCTION
Among the types of auctions commonly used in practice,
sealed-bid auctions are a good practical choice because they
require little communication and can be completed almost
instantly. Each bidder simply submits a bid, and the winner
is immediately determined. However, sealed-bid auctions
do require that the bids be kept private until the auction
clears. The increasing popularity of online auctions only
makes this disadvantage more troublesome. At an auction
house, with all participants present, it is difficult to examine
a bid that another bidder gave directly to the auctioneer.
However, in an online auction the auctioneer is often little
more than a server with questionable security; and, since all
participants are in different locations, one can anonymously
attempt to break into the server. In this paper, we present a
game theoretic analysis of how bidders should behave when
they are aware of the possibility of cheating that is based on
knowledge of the bids.
We investigate this type of cheating along two dimensions:
whether it is the auctioneer or a bidder who cheats, and
which variant (either first or second-price) of the sealed-bid
auction is used. Note that two of these cases are trivial.
In our setting, there is no incentive for the seller to submit
a shill bid in a first price auction, because doing so would
either cancel the auction or not affect the payment of the
winning bidder. In a second-price auction, knowing the
competing bids does not help a bidder because it is dominant
strategy to bid truthfully. This leaves us with two cases that
we examine in detail.
A seller can profitably cheat in a second-price auction by
looking at the bids before the auction clears and submitting
an extra bid. This possibility was pointed out as early as
the seminal paper [12] that introduced this type of auction.
For example, if the bidders in an eBay auction each use a
proxy bidder (essentially creating a second-price auction),
then the seller may be able to break into eBay's server,
observe the maximum price that a bidder is willing to pay, and
then extract this price by submitting a shill bid just below
it using a false identity. We assume that there is no chance
that the seller will be caught when it cheats. However, not
all sellers are willing to use this power (or, not all sellers can
successfully cheat). We assume that each bidder knows the
probability with which the seller will cheat. Possible
motivation for this knowledge could be a recently published exposé on seller cheating in eBay auctions. In this setting, we derive
an equilibrium bidding strategy for the case in which each
bidder"s value for the good is independently drawn from a
common distribution (with no further assumptions except
for continuity and differentiability). This result shows how
first and second-price auctions can be viewed as the
endpoints of a spectrum of auctions.
But why should the seller have all the fun? In a first-price
auction, a bidder must bid below his value for the good (also
called shaving his bid) in order to have positive utility if he
wins. To decide how much to shave his bid, he must trade off
the probability of winning the auction against how much he
will pay if he does win. Of course, if he could simply examine
the other bids before submitting his own, then his problem
is solved: bid the minimum necessary to win the auction.
In this setting, our goal is to derive an equilibrium bidding
strategy for a non-cheating bidder who is aware of the
possibility that he is competing against cheating bidders. When
bidder values are drawn from the commonly-analyzed
uniform distribution, we show the counterintuitive result that
the possibility of other bidders cheating has no effect on
the equilibrium strategy of an honest bidder. This result
is then extended to show the robustness of the equilibrium
of a first-price auction without the possibility of cheating.
We conclude this section by exploring other distributions,
including some in which the presence of cheating bidders
actually induces an honest bidder to lower its bid.
The rest of the paper is structured as follows. In Section
2 we formalize the setting and present our results for the
case of a seller cheating in a second price auction. Section
3 covers the case of bidders cheating in a first-price auction.
In Section 4, we quantify the effects that the possibility of
cheating has on an honest seller in the two settings. We
discuss related work, including other forms of cheating in
auctions, in Section 5, before concluding with Section 6. All
proofs and derivations are found in the appendix.
2. SECOND-PRICE AUCTION,
CHEATING SELLER
In this section, we consider a second-price auction in which
the seller may cheat by inserting a shill bid after
observing all of the bids. The formulation for this section will be
largely reused in the following section on bidders cheating
in a first-price auction. While no prior knowledge of game
theory or auction theory is assumed, good introductions can
be found in [2] and [6], respectively.
2.1 Formulation
The setting consists of N bidders, or agents, (indexed by i = 1, · · · , n) and a seller. Each agent has a type θi ∈ [0, 1], drawn from a continuous range, which represents the agent's value for the good being auctioned.[2] Each agent's type is independently drawn from a cumulative distribution function (cdf) F over [0, 1], where F(0) = 0 and F(1) = 1. We assume that F(·) is strictly increasing and differentiable over the interval [0, 1]. Call the probability density function (pdf) f(θi) = F′(θi), which is the derivative of the cdf. Each agent knows its own type θi, but only the distribution over the possible types of the other agents. A bidding strategy for an agent bi : [0, 1] → [0, 1] maps its type to its bid.[3]
Let θ = (θ1, · · · , θn) be the vector of types for all agents,
and θ−i = (θ1, · · · , θi−1, θi+1, · · · θn) be the vector of all
types except for that of agent i. We can then combine the
vectors so that θ = (θi, θ−i). We also define the vector of
bids as b(θ) = (b1(θ1), . . . , bn(θn)), and this vector without
[2] We can restrict the types to the range [0, 1] without loss of generality because any distribution over a different range can be normalized to this range.
[3] We thus limit agents to deterministic bidding strategies, but, because of our continuity assumption, there always exists a pure strategy equilibrium.
the bid of agent i as b−i(θ−i). Let b[1](θ) be the value of the
highest bid of the vector b(θ), with a corresponding
definition for b[1](θ−i).
An agent obviously wins the auction if its bid is greater
than all other bids, but ties complicate the formulation.
Fortunately, we can ignore the case of ties in this paper because
our continuity assumption will make them a zero probability
event in equilibrium. We assume that the seller does not set
a reserve price.[4]
If the seller does not cheat, then the winning agent pays the highest bid by another agent. On the other hand, if the seller does cheat, then the winning agent will pay its bid, since we assume that a cheating seller would take full advantage of its power. Let the indicator variable µc be 1 if the seller cheats, and 0 otherwise. The probability that the seller cheats, Pc, is known by all agents.[5] We can then write the payment of the winning agent as follows:

pi(b(θ), µc) = µc·bi(θi) + (1 − µc)·b[1](θ−i)    (1)
Let µ(·) be an indicator function that takes an inequality as an argument and returns 1 if it holds, and 0 otherwise. The utility for agent i is zero if it does not win the auction, and the difference between its valuation and its price if it does:

ui(b(θ), µc, θi) = µ(bi(θi) > b[1](θ−i)) · (θi − pi(b(θ), µc))    (2)
We will be concerned with the expected utility of an agent,
with the expectation taken over the types of the other agents
and over whether or not the seller cheats. By pushing the
expectation inward so that it is only over the price
(conditioned on the agent winning the auction), we can write the
expected utility as:
E_{θ−i,µc}[ui(b(θ), µc, θi)] = Prob(bi(θi) > b[1](θ−i)) · (θi − E_{θ−i,µc}[pi(b(θ), µc) | bi(θi) > b[1](θ−i)])    (3)
We assume that all agents are rational, expected utility
maximizers. Because of the uncertainty over the types of
the other agents, we will be looking for a Bayes-Nash
equilibrium. A vector of bidding strategies b* is a Bayes-Nash equilibrium if for each agent i and each possible type θi, agent i cannot increase its expected utility by using an alternate bidding strategy b′i, holding the bidding strategies for all other agents fixed. Formally, b* is a Bayes-Nash equilibrium if:

∀i, θi, b′i:  E_{θ−i,µc}[ui((b*i(θi), b*−i(θ−i)), µc, θi)] ≥ E_{θ−i,µc}[ui((b′i(θi), b*−i(θ−i)), µc, θi)]    (4)
2.2 Equilibrium
We first present the Bayes-Nash equilibrium for an
arbitrary distribution F(·).
[4] This simplifies the analysis, but all of our results can be applied to the case in which the seller announces a reserve price before the auction begins.
[5] Note that common knowledge is not necessary for the existence of an equilibrium.
Theorem 1. In a second-price auction in which the seller cheats with probability Pc, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

bi(θi) = θi − [ ∫_0^θi F^((N−1)/Pc)(x) dx ] / F^((N−1)/Pc)(θi)    (5)
It is useful to consider the extreme points of Pc. Setting Pc = 1 yields the correct result for a first-price auction (see, e.g., [10]). In the case of Pc = 0, this solution is not defined. However, in the limit, bi(θi) approaches θi as Pc approaches 0, which is what we expect as the auction approaches a standard second-price auction.
The position of Pc is perhaps surprising. For example, the linear combination bi(θi) = θi − Pc·[ ∫_0^θi F^(N−1)(x) dx ] / F^(N−1)(θi) of the equilibrium bidding strategies of first and second-price auctions would have also given us the correct bidding strategies for the cases of Pc = 0 and Pc = 1.
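For concreteness, Equation 5 can be evaluated numerically for any cdf; the following Python sketch (our own helper, using a simple trapezoid rule) also checks the uniform case against Corollary 2 below.

def equilibrium_bid(theta, N, p_c, F, grid=1000):
    """Evaluate b(theta) = theta - (integral_0^theta F(x)**k dx) / F(theta)**k
    with k = (N - 1) / Pc, i.e., Equation 5 (assumes theta > 0 and Pc > 0)."""
    k = (N - 1) / p_c
    xs = [theta * j / grid for j in range(grid + 1)]
    ys = [F(x) ** k for x in xs]
    integral = sum((ys[j] + ys[j + 1]) / 2 for j in range(grid)) * (theta / grid)
    return theta - integral / F(theta) ** k

# Uniform types, N = 5, Pc = 0.5: expect ((N-1)/(N-1+Pc)) * theta = (8/9) * 0.8
print(equilibrium_bid(0.8, N=5, p_c=0.5, F=lambda x: x))   # ~0.7111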
2.3 Continuum of Auctions
An alternative perspective on the setting is as a
continuum between first and second-price auctions. Consider a
probabilistic sealed-bid auction in which the seller is
honest, but the price paid by the winning agent is determined by a weighted coin flip: with probability Pc it is his bid, and with probability 1 − Pc it is the second-highest bid. By adjusting Pc, we can smoothly move between a first and second-price auction. Furthermore, the fact that this
probabilistic auction satisfies the properties required for the
Revenue Equivalence Theorem (see, e.g., [2]) provides a way
to verify that the bidding strategy in Equation 5 is the
symmetric equilibrium of this auction (see the alternative proof
of Theorem 1 in the appendix).
2.4 Special Case: Uniform Distribution
Another way to try to gain insight into Equation 5 is by
instantiating the distribution of types. We now consider the
often-studied uniform distribution: F(θi) = θi.
Corollary 2. In a second-price auction in which the seller cheats with probability Pc, and F(θi) = θi, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

bi(θi) = ((N − 1)/(N − 1 + Pc)) · θi    (6)

This equilibrium bidding strategy, parameterized by Pc, can be viewed as an interpolation between two well-known results. When Pc = 0 the bidding strategy is now well-defined (each agent bids its true type), while when Pc = 1 we get the correct result for a first-price auction: each agent bids according to the strategy bi(θi) = ((N − 1)/N)·θi.
3. FIRST-PRICE AUCTION,
CHEATING AGENTS
We now consider the case in which the seller is honest,
but there is a chance that agents will cheat and examine the
other bids before submitting their own (or, alternatively,
they will revise their bid before the auction clears). Since
this type of cheating is pointless in a second-price auction,
we only analyze the case of a first-price auction. After
revising the formulation from the previous section, we present a
fixed point equation for the equilibrium strategy for an
arbitrary distribution F(·). This equation will be useful for the
analysis of the uniform distribution, in which we show that the
possibility of cheating agents does not change the
equilibrium strategy of honest agents. This result has implications
for the robustness of the symmetric equilibrium to
overbidding in a standard first-price auction. Furthermore, we find
that for other distributions overbidding actually induces a
competing agent to shave more off of its bid.
3.1 Formulation
It is clear that if a single agent is cheating, he will bid
(up to his valuation) the minimum amount necessary to win
the auction. It is less obvious, though, what will happen if
multiple agents cheat. One could imagine a scenario similar
to an English auction, in which all cheating agents keep
revising their bids until all but one cheater wants the good
at the current winning bid. However, we are only concerned
with how an honest agent should bid given that it is aware
of the possibility of cheating. Thus, it suffices for an honest
agent to know that it will win the auction if and only if
its bid exceeds every other honest agent"s bid and every
cheating agent"s type.
This intuition can be formalized as the following discriminatory auction. In the first stage, each agent's payment rule is determined. With probability Pa, the agent will pay the second highest bid if it wins the auction (essentially, he is a cheater), and otherwise it will have to pay its bid. These selections are recorded by a vector of indicator variables µa = (µa_1, . . . , µa_n), where µa_i = 1 denotes that agent i pays the second highest bid. Each agent knows the probability Pa, but does not know the payment rule for all other agents. Otherwise, this auction is a standard, sealed-bid auction. It is thus a dominant strategy for a cheater to bid its true type, making this formulation strategically equivalent to the setting outlined in the previous paragraph. The expression for the utility of an honest agent in this discriminatory auction is as follows:
ui(b(θ), µa, θi) = (θi − bi(θi)) · Π_{j≠i} [ µa_j·µ(bi(θi) > θj) + (1 − µa_j)·µ(bi(θi) > bj(θj)) ]    (7)
3.2 Equilibrium
Our goal is to find the equilibrium in which all cheating
agents use their dominant strategy of bidding truthfully and
honest agents bid according to a symmetric bidding strategy.
Since we have left F(·) unspecified, we cannot present a
closed form solution for the honest agent's bidding strategy,
and instead give a fixed point equation for it.
Theorem 3. In a first-price auction in which each agent cheats with probability Pa, it is a Bayes-Nash equilibrium for each non-cheating agent i to bid according to the strategy that is a fixed point of the following equation:

bi(θi) = θi − [ ∫_0^θi ( Pa·F(bi(x)) + (1 − Pa)·F(x) )^(N−1) dx ] / ( Pa·F(bi(θi)) + (1 − Pa)·F(θi) )^(N−1)    (8)
3.3 Special Case: Uniform Distribution
Since we could not solve Equation 8 in the general case,
we can only see how the possibility of cheating affects the
equilibrium bidding strategy for particular instances of F(·).
A natural place to start is the uniform distribution: F(θi) = θi. Recall the logic behind the symmetric equilibrium strategy in a first-price auction without cheating: bi(θi) = ((N−1)/N)·θi is the optimal tradeoff between increasing the probability of winning and decreasing the price paid upon winning, given that the other agents are bidding according to the same strategy. Since in the current setting the cheating agents do not shave their bids at all and thus decrease an honest agent's probability of winning (while obviously not affecting the price that an honest agent pays if he wins), it is natural to expect that an honest agent should compensate by increasing his bid. The idea is that sacrificing some potential profit in order to regain some of the lost probability of winning would bring the two sides of the tradeoff back into balance. However, it turns out that the equilibrium bidding strategy is unchanged.
Corollary 4. In a first-price auction in which each agent cheats with probability Pa, and F(θi) = θi, it is a Bayes-Nash equilibrium for each non-cheating agent to bid according to the strategy bi(θi) = ((N−1)/N)·θi.
This result suggests that the equilibrium of a first-price
auction is particularly robust when types are drawn from the
uniform distribution, since the best response is unaffected
by deviations of the other agents to the strategy of always
bidding their type. In fact, as long as all other agents shave
their bid by a fraction (which can differ across the agents) no
greater than 1
N
, it is still a best response for the remaining
agent to bid according to the equilibrium strategy. Note
that this result holds even if other agents are shaving their
bid by a negative fraction, and are thus irrationally bidding
above their type.
Theorem 5. In a first-price auction where F(θi) = θi, if each agent j ≠ i bids according to a strategy bj(θj) = ((N−1+αj)/N)·θj, where αj ≥ 0, then it is a best response for the remaining agent i to bid according to the strategy bi(θi) = ((N−1)/N)·θi.
Obviously, these strategy profiles are not equilibria (unless
each αj = 0), because each agent j has an incentive to set
αj = 0. The point of this theorem is that a wide range of
possible beliefs that an agent can hold about the strategies
of the other agents will all lead him to play the equilibrium
strategy. This is important because a common (and valid)
criticism of equilibrium concepts such as Nash and Bayes-Nash is that they are silent on how the agents converge
on a strategy profile from which no one wants to deviate.
However, if the equilibrium strategy is a best response to a
large set of strategy profiles that are out of equilibrium, then
it seems much more plausible that the agents will indeed
converge on this equilibrium.
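The N = 2 uniform case of Theorem 5 can be verified directly: if agent 2 bids ((1 + α)/2)·θ2, agent 1 wins with bid b with probability min(1, 2b/(1 + α)), and a brute-force search over b (our own sketch below) recovers b = θ1/2 for every α ≥ 0.

def best_response_bid(theta, alpha, grid=10000):
    """Search for agent 1's utility-maximizing bid when agent 2 bids
    ((1 + alpha)/2) * theta_2 and types are uniform on [0, 1]."""
    best_b, best_u = 0.0, -1.0
    for j in range(grid + 1):
        b = theta * j / grid
        u = (theta - b) * min(1.0, 2 * b / (1 + alpha))  # (value - price) * P(win)
        if u > best_u:
            best_b, best_u = b, u
    return best_b

print(best_response_bid(0.9, alpha=0.0))   # ~0.45 = theta / 2
print(best_response_bid(0.9, alpha=0.5))   # still ~0.45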
It is important to note, though, that while this equilibrium
is robust against arbitrary deviations to strategies that shave
less, it is not robust to even a single agent shaving more off
of its bid. In fact, if we take any strategy profile consistent with the conditions of Theorem 5 and change a single agent j's strategy so that its corresponding αj is negative, then agent i's best response is to shave more than 1/N off of its bid.
3.4 Effects of Overbidding for
Other Distributions
A natural question is whether the best response bidding
strategy is similarly robust to overbidding by competing
agents for other distributions. It turns out that Theorem
5 holds for all distributions of the form F(θi) = (θi)^k, where k is some positive integer. However, taking a simple linear combination of two such distributions to produce F(θi) = (θi^2 + θi)/2 yields a distribution in which an agent should actually shave its bid more when other agents shave their bids less. In the example we present for this distribution (with the details in the appendix), there are only two players and the deviation by one agent is to bid his type. However, it can be generalized to a higher number of agents and to other deviations.
Example 1. In a first-price auction where F(θi) = (θi^2 + θi)/2 and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.
We also note that the same result holds for the normalized exponential distribution (F(θi) = (e^θi − 1)/(e − 1)).
It is certainly the case that distributions can be found that support the intuition given above that agents should shave their bid less when other agents are doing likewise. Examples include F(θi) = −(1/2)·θi^2 + (3/2)·θi (the solution to the system of equations: F″(θi) = −1, F(0) = 0, and F(1) = 1), and F(θi) = (e − e^(1−θi))/(e − 1).
It would be useful to relate the direction of the change
in the best response bidding strategy to a general
condition on F(·). Unfortunately, we were not able to find such
a condition, in part because the integral in the symmetric
bidding strategy of a first-price auction cannot be solved
without knowing F(·) (or at least some restrictions on it).
We do note, however, that the sign of the second
derivative of F(θi)/f(θi) is an accurate predictor for all of the
distributions that we considered.
4. REVENUE LOSS FOR AN HONEST SELLER
In both of the settings we covered, an honest seller suffers
a loss in expected revenue due to the possibility of cheating.
The equilibrium bidding strategies that we derived allow us
to quantify this loss. Although this is as far as we will take
the analysis, it could be applied to more general settings,
in which the seller could, for example, choose the market
in which he sells his good or pay a trusted third party to
oversee the auction.
In a second-price auction in which the seller may cheat, an honest seller suffers due to the fact that the agents will shave their bids. For the case in which agent types are drawn from the uniform distribution, every agent will shave its bid by the fraction $\frac{P_c}{N-1+P_c}$, which is thus also the fraction by which an honest seller's revenue decreases due to the possibility of cheating.
Analysis of the case of a first-price auction in which agents may cheat is not so straightforward. If Pa = 1 (each agent cheats with certainty), then we simply have a second-price auction, and the seller's expected revenue will be unchanged. Again considering the uniform distribution for agent types, it is not surprising that Pa = 1/2 causes the seller to lose the most revenue. However, even in this worst case, the percentage of expected revenue lost is significantly less than it is for the second-price auction in which Pc = 1/2, as shown in Table 1.⁶ It turns out that setting Pc = 0.2 would make the expected loss of these two settings comparable. While this comparison between the settings is unlikely to be useful for a seller, it is interesting to note that agent suspicions of possible cheating by the seller are in some sense worse than agents actually cheating themselves.
Percentage of Revenue Lost by an Honest Seller

Agents   Second-Price Auction (Pc = 0.5)   First-Price Auction (Pa = 0.5)
   2                  33                                12
   5                  11                               4.0
  10                 5.3                               1.8
  15                 4.0                               1.5
  25                 2.2                              0.83
  50                 1.1                              0.38
 100                0.50                              0.17

Table 1: The percentage of expected revenue lost by an honest seller due to the possibility of cheating in the two settings considered in this paper. Agent valuations are drawn from the uniform distribution.

⁶ Note that we have not considered the costs of the seller. Thus, the expected loss in profit could be much greater than the numbers that appear here.
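The second-price column follows directly from the closed-form shaving fraction Pc/(N − 1 + Pc) stated above; the short sketch below (an addition for illustration) recomputes that fraction for the table's values of N. The first-price column depends on the separate worst-case Pa analysis and is not recomputed here.

```python
# Recomputing the second-price column of Table 1 from the closed form in the
# text: an honest seller's expected revenue falls by the same fraction
# Pc / (N - 1 + Pc) by which every bid is shaved (uniform valuations).
Pc = 0.5
for N in (2, 5, 10, 15, 25, 50, 100):
    loss_pct = 100 * Pc / (N - 1 + Pc)
    print(f"N = {N:3d}: {loss_pct:5.2f}% of expected revenue lost")
```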
5. RELATED WORK
Existing work covers another dimension along which we
could analyze cheating: altering the perceived value of N.
In this paper, we have assumed that N is known by all
of the bidders. However, in an online setting this
assumption is rather tenuous. For example, a bidder's only source
of information about N could be a counter that the seller
places on the auction webpage, or a statement by the seller
about the number of potential bidders who have indicated
that they will participate. In these cases, the seller could
arbitrarily manipulate the perceived N. In a first-price
auction, the seller obviously has an incentive to increase the
perceived value of N in order to induce agents to bid closer
to their true valuation. However, if agents are aware that
the seller has this power, then any communication about
N to the agents is cheap talk, and furthermore is not
credible. Thus, in equilibrium the agents would ignore the
declared value of N, and bid according to their own prior
beliefs about the number of agents. If we make the natural
assumption of a common prior, then the setting reduces to
the one tackled by [5], which derived the equilibrium
bidding strategies of a first-price auction when the number of
bidders is drawn from a known distribution but not revealed
to any of the bidders. Of course, instead of assuming that
the seller can always exploit this power, we could assume
that it can only do so with some probability that is known
by the agents. The analysis would then proceed in a similar
manner as that of our cheating seller model.
The other interesting case of this form of cheating is by
bidders in a first-price auction. Bidders would obviously
want to decrease the perceived number of agents in order
to induce their competition to lower their bids. While it is
unreasonable for bidders to be able to alter the perceived
N arbitrarily, collusion provides an opportunity to decrease
the perceived N by having only one of a group of colluding
agents participate in the auction. While the non-colluding
agents would account for this possibility, as long as they are
not certain of the collusion they will still be induced to shave
more off of their bids than they would if the collusion did
not take place. This issue is tackled in [7].
Other types of collusion are of course related to the
general topic of cheating in auctions. Results on collusion in
first and second-price auctions can be found in [8] and [3],
respectively.
The work most closely related to our first setting is [11],
which also presents a model in which the seller may cheat in
a second-price auction. In their setting, the seller is a
participant in the Bayesian game who decides between running
a first-price auction (where profitable cheating is never
possible) or second-price auction. The seller makes this choice
after observing his type, which is his probability of having
the opportunity and willingness to cheat in a second-price
auction. The bidders, who know the distribution from which the seller's type is drawn, then place their bids. It is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction. Our work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller. An explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions.
An area of related work that can be seen as
complementary to ours is that of secure auctions, which takes the point
of view of an auction designer. The goals often extend
well beyond simply preventing cheating, including
properties such as anonymity of the bidders and nonrepudiation of
bids. Cryptographic methods are the standard weapon of
choice here (see [1, 4, 9]).
6. CONCLUSION
In this paper we presented the equilibria of sealed-bid
auctions in which cheating is possible. In addition to providing
strategy profiles that are stable against deviations, these results give us insights into both first and second-price
auctions. The results for the case of a cheating seller in a
second-price auction allow us to relate the two auctions as
endpoints along a continuum. The case of agents cheating in
a first-price auction showed the robustness of the first-price
auction equilibrium when agent types are drawn from the
uniform distribution. We also explored the effect of
overbidding on the best response bidding strategy for other
distributions, and showed that even for relatively simple
distributions it can be positive, negative, or neutral. Finally,
results from both of our settings allowed us to quantify the
expected loss in revenue for a seller due to the possibility of
cheating.
7. REFERENCES
[1] M. Franklin and M. Reiter. The Design and
Implementation of a Secure Auction Service. In Proc.
IEEE Symp. on Security and Privacy, 1995.
[2] D. Fudenberg and J. Tirole. Game Theory. MIT
Press, 1991.
[3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579-599, 1987.
[4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic
auctions with private bids. In Proceedings of the 3rd
USENIX Workshop on Electronic Commerce, 1998.
[5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid
functions for auctions with an uncertain number of
bidders. Economic Letters, 33:35-40, 1990.
[6] P. Klemperer. Auction theory: A guide to the
literature. Journal of Economic Surveys,
13(3):227-286, 1999.
[7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz.
Bidding clubs in first-price auctions. In AAAI-02.
[8] R. McAfee and J. McMillan. Bidding rings. The
American Economic Review, 71:579-599, 1992.
[9] M. Naor, B. Pinkas, and R. Sumner. Privacy
preserving auctions and mechanism design. In EC-99.
[10] J. Riley and W. Samuelson. Optimal auctions.
American Economic Review, 71(3):381-392, 1981.
[11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68(2):257-267, 1995.
[12] W. Vickrey. Counterspeculations, auctions, and
competitive sealed tenders. Journal of Finance,
16:15-27, 1961.
APPENDIX
Theorem 1. In a second-price auction in which the seller cheats with probability Pc, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)} \qquad (5)$$
Proof. To find an equilibrium, we start by guessing that
there exists an equilibrium in which all agents bid
according to the same function bi(θi), because the game is
symmetric. Further, we guess that bi(θi) is strictly increasing
and differentiable over the range [0, 1]. We can also assume
that bi(0) = 0, because negative bids are not allowed and
a positive bid is not rational when the agent's valuation is 0. Note that these are not assumptions on the setting; they are merely limitations that we impose on our search.
Let Φi : [0, bi(1)] → [0, 1] be the inverse function of bi(θi).
That is, it takes a bid for agent i as input and returns the
type θi that induced this bid. Recall Equation 3:
$$E_{\theta_{-i},\mu^c}\left[u_i(b(\theta), \mu^c, \theta_i)\right] = \text{Prob}\left(b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) \cdot \left(\theta_i - E_{\theta_{-i},\mu^c}\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right]\right)$$
The probability that a single other bid is below that of agent i is equal to the cdf at the type that would induce a bid equal to that of agent i, which is formally written as F(Φi(bi(θi))). Since all agents are independent, the probability that all other bids are below agent i's is simply this term raised to the (N − 1)-th power.
Thus, we can re-write the expected utility as:

$$E_{\theta_{-i},\mu^c}\left[u_i(b(\theta), \mu^c, \theta_i)\right] = F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot \left(\theta_i - E_{\theta_{-i},\mu^c}\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right]\right) \qquad (9)$$
We now solve for the expected payment. Plugging
Equation 1 (which gives the price for the winning agent) into the
term for the expected price in Equation 9, and then
simplifying the expectation yields:
$$\begin{aligned} E_{\theta_{-i},\mu^c}&\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] \\ &= E_{\theta_{-i},\mu^c}\left[\mu^c \cdot b_i(\theta_i) + (1-\mu^c) \cdot b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] \\ &= P_c \cdot b_i(\theta_i) + (1-P_c) \cdot E_{\theta_{-i}}\left[b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] \\ &= P_c \cdot b_i(\theta_i) + (1-P_c) \cdot \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot \text{pdf}\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) db_{[1]}(\theta_{-i}) \end{aligned} \qquad (10)$$
Note that the integral on the last line is taken up to
bi(θi) because we are conditioning on the fact that bi(θi) >
b[1](θ−i). To derive the pdf of b[1](θ−i) given this condition,
we start with the cdf. For a given value b[1](θ−i), the
probability that any one agent's bid is less than this value is equal to F(Φi(b[1](θ−i))). We then condition on the agent's bid
being below bi(θi) by dividing by F(Φi(bi(θi))). The cdf for
the N − 1 agents is then this value raised to the (N − 1)-th
power.
$$\text{cdf}\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) = \frac{F^{N-1}(\Phi_i(b_{[1]}(\theta_{-i})))}{F^{N-1}(\Phi_i(b_i(\theta_i)))}$$

The pdf is then the derivative of the cdf with respect to b[1](θ−i):

$$\text{pdf}\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) = \frac{N-1}{F^{N-1}(\Phi_i(b_i(\theta_i)))} \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i}))$$
Substituting the pdf into Equation 10 and pulling terms
out of the integral that do not depend on b[1](θ−i) yields:
$$E_{\theta_{-i},\mu^c}\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] = P_c \cdot b_i(\theta_i) + \frac{(1-P_c) \cdot (N-1)}{F^{N-1}(\Phi_i(b_i(\theta_i)))} \cdot \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i})) \, db_{[1]}(\theta_{-i})$$
Plugging the expected price back into the expected utility equation (9), and distributing F^{N−1}(Φi(bi(θi))), yields:

$$\begin{aligned} E_{\theta_{-i},\mu^c}\left[u_i(b(\theta), \mu^c, \theta_i)\right] = \; & F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot \theta_i - F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot P_c \cdot b_i(\theta_i) \\ & - (1-P_c) \cdot (N-1) \cdot \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i})) \, db_{[1]}(\theta_{-i}) \end{aligned}$$
We are now ready to optimize the expected utility by
taking the derivative with respect to bi(θi) and setting it to 0.
Note that we do not need to solve the integral, because it
will disappear when the derivative is taken (by application
of the Fundamental Theorem of Calculus).
$$\begin{aligned} 0 = \; & (N-1) \cdot F^{N-2}(\Phi_i(b_i(\theta_i))) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i)) \cdot \theta_i - F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot P_c \\ & - P_c \cdot (N-1) \cdot F^{N-2}(\Phi_i(b_i(\theta_i))) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i)) \cdot b_i(\theta_i) \\ & - (1-P_c) \cdot (N-1) \cdot b_i(\theta_i) \cdot F^{N-2}(\Phi_i(b_i(\theta_i))) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i)) \end{aligned}$$

Dividing through by F^{N−2}(Φi(bi(θi))) and combining like terms yields:

$$0 = \left(\theta_i - P_c \cdot b_i(\theta_i) - (1-P_c) \cdot b_i(\theta_i)\right) \cdot (N-1) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i)) - P_c \cdot F(\Phi_i(b_i(\theta_i)))$$

Simplifying the expression and rearranging terms produces:

$$b_i(\theta_i) = \theta_i - \frac{P_c \cdot F(\Phi_i(b_i(\theta_i)))}{(N-1) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i))}$$
To further simplify, we use the formula $f'(x) = \frac{1}{g'(f(x))}$, where g(x) is the inverse function of f(x). Plugging in the functions from our setting gives us $\Phi_i'(b_i(\theta_i)) = \frac{1}{b_i'(\theta_i)}$. Applying both this equation and Φi(bi(θi)) = θi yields:

$$b_i(\theta_i) = \theta_i - \frac{P_c \cdot F(\theta_i) \cdot b_i'(\theta_i)}{(N-1) \cdot f(\theta_i)} \qquad (11)$$
Attempts at a derivation of the solution from this point
proved fruitless, but we are at a point now where a guessed
solution can be quickly verified. We used the solution for
the first-price auction (see, e.g., [10]) as our starting point
to find the answer:
$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)} \qquad (12)$$
To verify the solution, we first take its derivative:

$$b_i'(\theta_i) = 1 - \frac{F^{\frac{2(N-1)}{P_c}}(\theta_i) - \frac{N-1}{P_c} \cdot F^{\frac{N-1}{P_c}-1}(\theta_i) \cdot f(\theta_i) \cdot \int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{2(N-1)}{P_c}}(\theta_i)}$$

This simplifies to:

$$b_i'(\theta_i) = \frac{\frac{N-1}{P_c} \cdot f(\theta_i) \cdot \int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}+1}(\theta_i)}$$

We then plug this derivative into the equation we derived (11):

$$b_i(\theta_i) = \theta_i - \frac{P_c \cdot F(\theta_i) \cdot \frac{N-1}{P_c} \cdot f(\theta_i) \cdot \int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{(N-1) \cdot f(\theta_i) \cdot F^{\frac{N-1}{P_c}+1}(\theta_i)}$$

Cancelling terms yields Equation 12, verifying that our guessed solution is correct.
Alternative Proof of Theorem 1:
The following proof uses the Revenue Equivalence
Theorem (RET) and the probabilistic auction given as an
interpretation of our cheating seller setting.
In a first-price auction without the possibility of cheating,
the expected payment for an agent with type θi is simply
the product of its bid and the probability that this bid is
the highest. For the symmetric equilibrium, this is equal to:
$$F^{N-1}(\theta_i) \cdot \left(\theta_i - \frac{\int_0^{\theta_i} F^{N-1}(x)\,dx}{F^{N-1}(\theta_i)}\right)$$

For our probabilistic auction, the expected payment of the winning agent is a weighted average of its bid and the second highest bid. For the bi(·) we found in the original interpretation of the setting, it can be written as follows.

$$F^{N-1}(\theta_i) \cdot \left[ P_c \cdot \left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\right) + (1-P_c) \cdot \frac{1}{F^{N-1}(\theta_i)} \cdot \int_0^{\theta_i} \left(x - \frac{\int_0^x F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(x)}\right) \cdot (N-1) \cdot F^{N-2}(x) \cdot f(x)\,dx \right]$$
By the RET, the expected payments will be the same
in the two auctions. Thus, we can verify our equilibrium
bidding strategy by showing that the expected payment in
the two auctions is equal. Since the expected payment is
zero at θi = 0 for both functions, it suffices to verify that the
derivatives of the expected payment functions with respect
to θi are equal, for an arbitrary value θi. Thus, we need to
verify the following equation:
$$F^{N-1}(\theta_i) + (N-1) \cdot F^{N-2}(\theta_i) \cdot f(\theta_i) \cdot \theta_i - F^{N-1}(\theta_i) =$$
$$P_c \cdot \left[ F^{N-1}(\theta_i) \cdot \left(1 - \left(1 - \frac{\frac{N-1}{P_c} \cdot F^{\frac{N-1}{P_c}-1}(\theta_i) \cdot f(\theta_i) \cdot \int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{2(N-1)}{P_c}}(\theta_i)}\right)\right) + (N-1) \cdot F^{N-2}(\theta_i) \cdot f(\theta_i) \cdot \left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\right) \right]$$
$$+ \; (1-P_c) \cdot \left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(\theta_i)}\right) \cdot (N-1) \cdot F^{N-2}(\theta_i) \cdot f(\theta_i)$$
This simplifies to:

$$\begin{aligned} 0 = \; & P_c \cdot \left[ \frac{\frac{N-1}{P_c} \cdot F^{N-2}(\theta_i) \cdot f(\theta_i) \cdot \int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)} + (N-1) \cdot F^{N-2}(\theta_i) \cdot f(\theta_i) \cdot \left(-\frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\right) \right] \\ & + (1-P_c) \cdot \left(-\frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(\theta_i)}\right) \cdot (N-1) \cdot F^{N-2}(\theta_i) \cdot f(\theta_i) \end{aligned}$$

After distributing Pc, the right-hand side of this equation cancels out, and we have verified our equilibrium bidding strategy.
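The revenue-equivalence argument can also be spot-checked numerically. The sketch below (an addition for illustration, assuming SciPy; the cdf F(x) = x² and the parameter values are arbitrary choices) compares, for a few types, the expected payment under the standard first-price equilibrium with the expected payment under the probabilistic auction when agents bid the Theorem 1 strategy; the two columns should agree up to quadrature error.

```python
# Numeric spot-check of the revenue-equivalence argument above.
from scipy.integrate import quad

N, Pc = 4, 0.3
F = lambda x: x ** 2   # an arbitrary valid cdf on [0, 1]
f = lambda x: 2 * x    # its density

def bid_cheating_seller(theta):
    """Theorem 1 equilibrium bid (Equation 5)."""
    k = (N - 1) / Pc
    return theta - quad(lambda x: F(x) ** k, 0, theta)[0] / F(theta) ** k

def bid_first_price(theta):
    """Standard first-price symmetric equilibrium bid."""
    return theta - quad(lambda x: F(x) ** (N - 1), 0, theta)[0] / F(theta) ** (N - 1)

def expected_payment_first_price(theta):
    return F(theta) ** (N - 1) * bid_first_price(theta)

def expected_payment_probabilistic(theta):
    # Winner pays its own bid w.p. Pc, the second-highest bid otherwise.
    second_highest = quad(
        lambda x: bid_cheating_seller(x) * (N - 1) * F(x) ** (N - 2) * f(x),
        0, theta)[0]
    return Pc * F(theta) ** (N - 1) * bid_cheating_seller(theta) \
        + (1 - Pc) * second_highest

for theta in (0.3, 0.7, 1.0):
    print(f"theta = {theta}: first-price {expected_payment_first_price(theta):.6f}, "
          f"probabilistic {expected_payment_probabilistic(theta):.6f}")
```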
Corollary 2. In a second-price auction in which the seller cheats with probability Pc, and F(θi) = θi, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

$$b_i(\theta_i) = \frac{N-1}{N-1+P_c}\,\theta_i \qquad (6)$$
Proof. Plugging F(θi) = θi into Equation 5 (repeated as 12), we get:

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} x^{\frac{N-1}{P_c}}\,dx}{\theta_i^{\frac{N-1}{P_c}}} = \theta_i - \frac{\frac{P_c}{N-1+P_c}\,\theta_i^{\frac{N-1+P_c}{P_c}}}{\theta_i^{\frac{N-1}{P_c}}} = \theta_i - \frac{P_c}{N-1+P_c} \cdot \theta_i = \frac{N-1}{N-1+P_c} \cdot \theta_i$$
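A quick numerical cross-check of Theorem 1 against this corollary (an addition for illustration, assuming SciPy): evaluating the general formula of Equation 5 by quadrature with the uniform cdf should match the closed form above.

```python
# Cross-checking the general Equation 5 against the Corollary 2 closed form.
from scipy.integrate import quad

def bid_general(theta, N, Pc, F):
    k = (N - 1) / Pc
    integral = quad(lambda x: F(x) ** k, 0.0, theta)[0]
    return theta - integral / F(theta) ** k

N, Pc = 5, 0.5
for theta in (0.2, 0.6, 1.0):
    numeric = bid_general(theta, N, Pc, F=lambda x: x)
    closed = (N - 1) / (N - 1 + Pc) * theta  # Corollary 2
    print(f"theta = {theta}: numeric {numeric:.6f}, closed form {closed:.6f}")
```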
Theorem 3. In a first-price auction in which each agent cheats with probability Pa, it is a Bayes-Nash equilibrium for each non-cheating agent i to bid according to the strategy that is a fixed point of the following equation:

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left[P_a \cdot F(b_i(x)) + (1-P_a) \cdot F(x)\right]^{N-1} dx}{\left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\theta_i)\right]^{N-1}} \qquad (8)$$
Proof. We make the same guesses about the equilibrium strategy to aid our search as we did in the proof of Theorem 1. When simplifying the expectation of this setting's utility equation (7), we use the fact that the probability that agent i will have a higher bid than another honest agent is still F(Φi(bi(θi))), while the probability is F(bi(θi)) if the other agent cheats. The probability that agent i beats a single other agent is then a weighted average of these two probabilities. Thus, we can write agent i's expected utility as:

$$E_{\theta_{-i},\mu^a}\left[u_i(b(\theta), \mu^a, \theta_i)\right] = \left(\theta_i - b_i(\theta_i)\right) \cdot \left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\Phi_i(b_i(\theta_i)))\right]^{N-1}$$
As before, to find the equilibrium bi(θi), we take the derivative and set it to zero:

$$\begin{aligned} 0 = \; & \left(\theta_i - b_i(\theta_i)\right) \cdot (N-1) \cdot \left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\Phi_i(b_i(\theta_i)))\right]^{N-2} \\ & \cdot \left[P_a \cdot f(b_i(\theta_i)) + (1-P_a) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i))\right] \\ & - \left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\Phi_i(b_i(\theta_i)))\right]^{N-1} \end{aligned}$$
Applying the equations $\Phi_i'(b_i(\theta_i)) = \frac{1}{b_i'(\theta_i)}$ and Φi(bi(θi)) = θi, and dividing through, produces:

$$0 = \left(\theta_i - b_i(\theta_i)\right) \cdot (N-1) \cdot \left[P_a \cdot f(b_i(\theta_i)) + (1-P_a) \cdot f(\theta_i) \cdot \frac{1}{b_i'(\theta_i)}\right] - \left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\theta_i)\right]$$

Rearranging terms yields:

$$b_i(\theta_i) = \theta_i - \frac{\left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\theta_i)\right] \cdot b_i'(\theta_i)}{(N-1) \cdot \left[P_a \cdot f(b_i(\theta_i)) \cdot b_i'(\theta_i) + (1-P_a) \cdot f(\theta_i)\right]} \qquad (13)$$
In this setting, because we leave F(·) unspecified, we cannot present a closed form solution. However, we can simplify the expression by removing its dependence on bi′(θi):

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left[P_a \cdot F(b_i(x)) + (1-P_a) \cdot F(x)\right]^{N-1} dx}{\left[P_a \cdot F(b_i(\theta_i)) + (1-P_a) \cdot F(\theta_i)\right]^{N-1}} \qquad (14)$$
To verify Equation 14, first take its derivative:

$$b_i'(\theta_i) = 1 - \frac{\left[P_a F(b_i(\theta_i)) + (1-P_a) F(\theta_i)\right]^{2(N-1)} - (N-1) \cdot \left[P_a F(b_i(\theta_i)) + (1-P_a) F(\theta_i)\right]^{N-2} \cdot \left[P_a f(b_i(\theta_i))\, b_i'(\theta_i) + (1-P_a) f(\theta_i)\right] \cdot \int_0^{\theta_i} \left[P_a F(b_i(x)) + (1-P_a) F(x)\right]^{N-1} dx}{\left[P_a F(b_i(\theta_i)) + (1-P_a) F(\theta_i)\right]^{2(N-1)}}$$

This equation simplifies to:

$$b_i'(\theta_i) = (N-1) \cdot \left[P_a f(b_i(\theta_i))\, b_i'(\theta_i) + (1-P_a) f(\theta_i)\right] \cdot \frac{\int_0^{\theta_i} \left[P_a F(b_i(x)) + (1-P_a) F(x)\right]^{N-1} dx}{\left[P_a F(b_i(\theta_i)) + (1-P_a) F(\theta_i)\right]^{N}}$$

Plugging this expression into the bi′(θi) in the numerator of Equation 13 yields Equation 14, verifying the solution.
Corollary 4. In a first-price auction in which each agent cheats with probability Pa, and F(θi) = θi, it is a Bayes-Nash equilibrium for each non-cheating agent to bid according to the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.
Proof. Instantiating the fixed point equation (8, and repeated as 14) with F(θi) = θi yields:

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left[P_a \cdot b_i(x) + (1-P_a) \cdot x\right]^{N-1} dx}{\left[P_a \cdot b_i(\theta_i) + (1-P_a) \cdot \theta_i\right]^{N-1}}$$

We can plug the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$ into this equation in order to verify that it is a fixed point:

$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left[P_a \cdot \frac{N-1}{N}\,x + (1-P_a) \cdot x\right]^{N-1} dx}{\left[P_a \cdot \frac{N-1}{N}\,\theta_i + (1-P_a) \cdot \theta_i\right]^{N-1}} = \theta_i - \frac{\int_0^{\theta_i} x^{N-1}\,dx}{\theta_i^{N-1}} = \theta_i - \frac{\frac{1}{N}\,\theta_i^{N}}{\theta_i^{N-1}} = \frac{N-1}{N}\,\theta_i$$
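When F is not uniform, Equation 8 has no closed form, but it can be solved numerically by iterating the fixed-point map on a grid. The sketch below (an addition for illustration, assuming NumPy; F(θ) = θ², N = 3, and Pa = 0.4 are arbitrary choices, and convergence of this plain iteration is an empirical observation here, not something established in the paper) starts from the uniform-case equilibrium as an initial guess.

```python
# Solving the fixed-point characterization of Theorem 3 on a grid.
import numpy as np

N, Pa = 3, 0.4
F = lambda x: x ** 2
grid = np.linspace(1e-6, 1.0, 400)

b = (N - 1) / N * grid  # initial guess: the uniform-case equilibrium
for _ in range(500):
    G = (Pa * F(b) + (1 - Pa) * F(grid)) ** (N - 1)
    # Cumulative trapezoid integral of G from 0 to each grid point.
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (G[1:] + G[:-1]) * np.diff(grid))))
    b_new = grid - integral / G  # right-hand side of Equation 8
    if np.max(np.abs(b_new - b)) < 1e-12:
        break
    b = b_new

for idx in (99, 199, 399):
    print(f"theta = {grid[idx]:.3f}: b(theta) = {b[idx]:.5f}")
```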
Theorem 5. In a first-price auction where F(θi) = θi, if each agent j ≠ i bids according to the strategy $b_j(\theta_j) = \frac{N-1+\alpha_j}{N}\theta_j$, where αj ≥ 0, then it is a best response for the remaining agent i to bid according to the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.
Proof. We again use Φj : [0, bj(1)] → [0, 1] as the inverse of bj(θj). For all j ≠ i in this setting, $\Phi_j(x) = \frac{N}{N-1+\alpha_j}\,x$. The probability that agent i has a higher bid than a single agent j is $F(\Phi_j(b_i(\theta_i))) = \frac{N}{N-1+\alpha_j}\,b_i(\theta_i)$. Note, however, that since Φj(·) is only defined over the range [0, bj(1)], it must be the case that bi(1) ≤ bj(1), which is why αj ≥ 0 is necessary, in addition to being sufficient. Assuming that $b_i(\theta_i) = \frac{N-1}{N}\theta_i$, then indeed Φj(bi(θi)) is always well-defined. We will now show that this assumption is correct. The expected utility for agent i can then be written as:

$$E_{\theta_{-i}}\left[u_i(b(\theta), \theta_i)\right] = \prod_{j \neq i}\left[\frac{N}{N-1+\alpha_j}\,b_i(\theta_i)\right] \cdot \left(\theta_i - b_i(\theta_i)\right) = \prod_{j \neq i}\left[\frac{N}{N-1+\alpha_j}\right] \cdot \left(\theta_i \cdot b_i(\theta_i)^{N-1} - b_i(\theta_i)^{N}\right)$$

Taking the derivative with respect to bi(θi), setting it to zero, and dividing out $\prod_{j \neq i}\frac{N}{N-1+\alpha_j}$ yields:

$$0 = \theta_i \cdot (N-1) \cdot b_i(\theta_i)^{N-2} - N \cdot b_i(\theta_i)^{N-1}$$

This simplifies to the solution: $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.
Full Version of Example 1:
In a first-price auction where $F(\theta_i) = \frac{\theta_i^2+\theta_i}{2}$ and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.
After calculating the symmetric equilibrium in which both agents shave their bid by the same amount, we find the best response to an agent who instead does not shave its bid. We then show that this best response is strictly less than the equilibrium strategy. To find the symmetric equilibrium bidding strategy, we instantiate N = 2 in the general formula found in [10], plug in $F(\theta_i) = \frac{\theta_i^2+\theta_i}{2}$, and simplify:
$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F(x)\,dx}{F(\theta_i)} = \theta_i - \frac{\frac{1}{2} \cdot \int_0^{\theta_i} (x^2 + x)\,dx}{\frac{1}{2} \cdot (\theta_i^2 + \theta_i)} = \theta_i - \frac{\frac{1}{3}\theta_i^3 + \frac{1}{2}\theta_i^2}{\theta_i^2 + \theta_i} = \frac{\frac{2}{3}\theta_i^2 + \frac{1}{2}\theta_i}{\theta_i + 1}$$
We now derive the best response for agent 1 to the strategy b2(θ2) = θ2, denoting the best response strategy b∗1(θ1) to distinguish it from the symmetric case. The probability of agent 1 winning is F(b∗1(θ1)), which is the probability that agent 2's type is less than agent 1's bid. Thus, agent 1's expected utility is:

$$E_{\theta_2}\left[u_1((b_1^*(\theta_1), b_2(\theta_2)), \theta_1)\right] = F(b_1^*(\theta_1)) \cdot \left(\theta_1 - b_1^*(\theta_1)\right) = \frac{(b_1^*(\theta_1))^2 + b_1^*(\theta_1)}{2} \cdot \left(\theta_1 - b_1^*(\theta_1)\right)$$
Taking the derivative with respect to b∗1(θ1), setting it to zero, and then rearranging terms gives us:

$$0 = \frac{1}{2} \cdot \left(2 \cdot b_1^*(\theta_1) \cdot \theta_1 - 3 \cdot (b_1^*(\theta_1))^2 + \theta_1 - 2 \cdot b_1^*(\theta_1)\right)$$
$$0 = 3 \cdot (b_1^*(\theta_1))^2 + (2 - 2\theta_1) \cdot b_1^*(\theta_1) - \theta_1$$

Of the two solutions of this equation, one always produces a negative bid. The other is:

$$b_1^*(\theta_1) = \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}$$
We now need to show that b1(θ1) > b∗1(θ1) holds for all θ1 > 0. Substituting in for both terms, and then simplifying the inequality gives us:

$$\frac{\frac{2}{3}\theta_1^2 + \frac{1}{2}\theta_1}{\theta_1 + 1} > \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}$$
$$\theta_1^2 + \frac{3}{2}\theta_1 + 1 > (\theta_1 + 1)\sqrt{\theta_1^2 + \theta_1 + 1}$$

Since θ1 ≥ 0, we can square both sides of the inequality, which then allows us to verify the inequality for all θ1 > 0:

$$\theta_1^4 + 3\theta_1^3 + \frac{17}{4}\theta_1^2 + 3\theta_1 + 1 > \theta_1^4 + 3\theta_1^3 + 4\theta_1^2 + 3\theta_1 + 1$$
$$\frac{1}{4}\theta_1^2 > 0$$
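For concreteness, the two bidding functions of Example 1 can also be compared numerically (an addition for illustration, in plain Python): the best response to a truthful opponent should sit strictly below the symmetric equilibrium bid at every positive type.

```python
# Numeric companion to Example 1: the best response to a truthful opponent
# stays strictly below the symmetric equilibrium bid.
import math

def b_symmetric(t):
    # Symmetric equilibrium derived above: (2/3 t^2 + 1/2 t) / (t + 1).
    return (2 / 3 * t ** 2 + 0.5 * t) / (t + 1)

def b_best_response(t):
    # Best response to b2(theta2) = theta2, derived above.
    return (t - 1 + math.sqrt(t ** 2 + t + 1)) / 3

for t in (0.1, 0.5, 0.9, 1.0):
    print(f"theta1 = {t}: equilibrium {b_symmetric(t):.4f}, "
          f"best response {b_best_response(t):.4f}")
```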