%%
%% Copyright 2007, 2008, 2009 Elsevier Ltd
%%
%% This file is part of the 'Elsarticle Bundle'.
%% ---------------------------------------------
%%
%% It may be distributed under the conditions of the LaTeX Project Public
%% License, either version 1.2 of this license or (at your option) any
%% later version.  The latest version of this license is in
%%    http://www.latex-project.org/lppl.txt
%% and version 1.2 or later is part of all distributions of LaTeX
%% version 1999/12/01 or later.
%%
%% The list of all files belonging to the 'Elsarticle Bundle' is
%% given in the file `manifest.txt'.
%%

%% Template article for Elsevier's document class `elsarticle'
%% with numbered style bibliographic references
%% SP 2008/03/01
%%
%%
%%
%% $Id: elsarticle-template-num.tex 4 2009-10-24 08:22:58Z rishi $
%%
%%
\documentclass[preprint,12pt]{elsarticle}
\usepackage{fullpage}

\usepackage{url}

\usepackage{amsfonts,mathbbol} 
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{verbatim}
\usepackage{psfrag}
\usepackage{epsfig}
\usepackage{algorithmic}

\newtheorem{definition}{{\bf Definition}}
\newtheorem{theorem}{{\bf Theorem}}
\newtheorem{lemma}{{\bf Lemma}}
\newtheorem{corollary}{{\bf Corollary}}
\newtheorem{observation}{{\bf Observation}}
\newtheorem{example}{{\bf Example}}
\newtheorem{proposition}{{\bf Proposition}}


\newcommand{\response}[1]{{\bf Our response: #1}}


\newcommand{\commentv}[1]{{\bf **Victor: #1**}}
\newcommand{\commente}[1]{{\bf **Enrico: #1**}}
\newcommand{\commentz}[1]{{\bf **Zinovi: #1**}}
%\newcommand{\commentv}[1]{}
%\newcommand{\commente}[1]{}
%\newcommand{\commentz}[1]{}

%%%%NOTATION%%%%%%%%
\newcommand{\type}{\ensuremath{\theta}}%type
\newcommand{\types}{\ensuremath{\Theta}}%space of all types
\newcommand{\s}{\ensuremath{s}}%strategy
\renewcommand{\S}{\ensuremath{S}}
\newcommand{\ut}{\ensuremath{u}}%terminal utility
\newcommand{\uet}{\ensuremath{\widehat{u}}}%expected utility by type
\newcommand{\ue}{\ensuremath{\widetilde{u}}}%expected utility 
%\newcommand{\ud}{\ensuremath{\hat{u}}}%distributional utility
\renewcommand{\d}{\ensuremath{h}}%distribution of actions
\renewcommand{\a}{\ensuremath{a}}%action
\newcommand{\A}{\ensuremath{A}}%action set
\newcommand{\as}{\ensuremath{{\bf a}}}%action profile
\renewcommand{\b}{\a}%bid
\renewcommand{\i}{\ensuremath{c}}%bid
\newcommand{\probwin}{\ensuremath{q}}%probability of winning given bids

\newcommand{\btos}{\texttt{{BeliefsToStrategy}}}
\newcommand{\br}{\texttt{{BestResponse}}}
\newcommand{\ul}{\texttt{{UtilityLines}}}
\newcommand{\fp}{\texttt{{FictitiousPlay}}}
\newcommand{\afp}{\texttt{{AsymmetricFictitiousPlay}}}
\newcommand{\TRUE}{\texttt{{true}}}
\newcommand{\es}{\texttt{{equilibrium strategy}}}
\newcommand{\converged}{\texttt{{converged}}}
\newcommand{\convergedstrategy}{\texttt{{ConvergedStrategy}}}

\newcommand{\cost}{\text{\emph{cost}}}
\newcommand{\supp}{\operatorname{supp}} 


\newcommand{\etal}{{\em et al.}}


\journal{Artificial Intelligence}

\begin{document}
\begin{frontmatter}

%% Title, authors and addresses

%% use the tnoteref command within \title for footnotes;
%% use the tnotetext command for the associated footnote;
%% use the fnref command within \author or \address for footnotes;
%% use the fntext command for the associated footnote;
%% use the corref command within \author for corresponding author footnotes;
%% use the cortext command for the associated footnote;
%% use the ead command for the email address,
%% and the form \ead[url] for the home page:
%%
%% \title{Title\tnoteref{label1}}
%% \tnotetext[label1]{}
%% \author{Name\corref{cor1}\fnref{label2}}
%% \ead{email address}
%% \ead[url]{home page}
%% \fntext[label2]{}
%% \cortext[cor1]{}
%% \address{Address\fnref{label3}}
%% \fntext[label3]{}

\title{Computing Pure Bayesian-Nash Equilibria in Games with Finite Actions and Continuous Types}

%% use optional labels to link authors explicitly to addresses:
%% \author[label1,label2]{<author name>}
%% \address[label1]{<address>}
%% \address[label2]{<address>}

\author[barilan,soton]{Zinovi Rabinovich}
\ead{zr@zinovi.net}
\author[soton]{Victor Naroditskiy}
\ead{vn@ecs.soton.ac.uk}
\author[soton]{Enrico H. Gerding\corref{cor1}}
\ead{eg@ecs.soton.ac.uk}
\author[soton,king]{\mbox{Nicholas R. Jennings}}
\ead{nrj@ecs.soton.ac.uk}
\cortext[cor1]{Corresponding author.}
\address[barilan]{Department of Computer Science, Bar-Ilan University, Ramat Gan, 52900, Israel}
\address[soton]{Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, United Kingdom}
\address[king]{Department of Computing and Information Technology, King Abdulaziz University, Saudi Arabia}

\begin{abstract}
%\noindent In this paper we extend the well-known fictitious play (FP) algorithm to compute pure strategy Bayesian Nash equilibria in games of incomplete information with finite actions and continuous types (G-FACTs). For our algorithm we prove that, if FP beliefs converge, there exists a pure strategy Bayesian Nash equilibrium consistent with the converged beliefs. We furthermore develop an algorithm to convert converged beliefs into an equilibrium strategy for games where utility functions are linear in type, a large class of games that includes many types of auctions. \commentv{don't find this clear, remove?: We prove that our procedure is correct and furthermore can be used to compute pure $\epsilon$-Nash equilibria within finite iterations.} We then apply our algorithm to a setting with multiple, simultaneous sealed-bid auctions. This problem is complex and results only exist for restricted settings. Our algorithm is able to compute equilibria for which no analytical results exist. Finally, we show how the equilibrium for games with linear utility functions can be formulated in terms of a system of polynomial equations. To benchmark our results, we then solve these equations analytically for a simple case and show that the analytical solutions correspond to those found using our algorithm. 
\noindent %In this paper we 
We extend the well-known fictitious play (FP) algorithm to compute pure-strategy Bayesian-Nash equilibria in private-value games of incomplete information with finite actions and continuous types (G-FACTs). 
We prove that, if the frequency distribution of actions (the fictitious play beliefs) converges, then there exists a pure-strategy Bayesian-Nash equilibrium consistent with it. 
We furthermore develop an algorithm to convert the converged distribution of actions into an equilibrium strategy for a wide class of games in which utility functions are linear in type. This algorithm can also be used to compute pure $\epsilon$-Nash equilibria when distributions have not fully converged. We then apply our algorithm to find equilibria in an important and previously unsolved game: simultaneous sealed-bid, second-price auctions in which various types of items (e.g., substitutes or complements) are sold. Finally, we provide an analytical characterisation of equilibria in games with linear utilities. Specifically, we show how equilibria can be found by solving a system of polynomial equations. For a special case of simultaneous auctions, we also solve these equations analytically, confirming the results obtained numerically.
\end{abstract}

\begin{keyword}
Algorithmic Game Theory \sep Bayes-Nash Equilibrium \sep epsilon-Nash Equilibrium \sep Fictitious Play \sep Simultaneous Auctions 
%% keywords here, in the form: keyword \sep keyword

%% MSC codes here, in the form: \MSC code \sep code
%% or \MSC[2008] code \sep code (2000 is the default)

\end{keyword}

\end{frontmatter}

\section{Introduction}
\label{sec:intro}
\noindent %In this paper, we 
We study the problem of finding a symmetric pure Bayesian-Nash equilibrium in static games of incomplete information (i.e., where decisions are made simultaneously by all players) with independent private values (where a player's utility depends on its own type and on the actions of all players, but not on the other players' types), continuous type spaces and finite action spaces. Existing analytical results for such games mostly focus on auctions, a special case of incomplete information games. However, despite extensive research in this area, the developed theory has little to offer in terms of equilibrium derivation beyond the simplest models, such as a single auction selling one or multiple homogeneous items (for an overview of the results, see~\cite{krishnabook}). 
On the computational side, solvers have been designed primarily for games of complete information (e.g.,~\cite{gambit,mckelvey1996computation,agtEqComp}), and can be applied to games of incomplete information only when the number of actions and types is small. The main contribution of this paper is an algorithmic technique for computing Bayesian-Nash equilibria in games of incomplete information. We show its efficacy in simultaneous auctions, an important game for which only special cases had previously been solved. On the analytical side, we provide a novel characterisation of equilibria in a large class of games. This characterisation allows us to derive all equilibria for small simultaneous auction games, confirming our computational findings.


In more detail, our computational technique is an extension of the fictitious play (FP) algorithm~\cite{brown_51,robinson_51} to games of incomplete information with continuous types. 
%We choose fictitious play since it is a well-known technique for finding equilibria in games of complete information and, for certain classes of games, it is guaranteed to converge (see Section~\ref{sec:related}). 
Fictitious play was initially proposed as an iterative method for computing equilibria in zero-sum games of complete information. In each iteration, the algorithm chooses a best response to the frequency distribution of 
%\commentz{(best-response) -- best response of best response is confusing} 
actions from previous iterations. 
%This frequency distribution is known as fp {\em beliefs}, which is used this to form beliefs about the actions played by the opponent. 
If this frequency distribution, known as FP {\em beliefs}, converges, the converged distribution yields a {\it mixed} strategy Nash equilibrium of the game (see, e.g.,~\cite{fudenberg_levine_98_book}). Building on this, we develop an algorithm that generalises fictitious play to a wide class of games of incomplete information. Unlike regular fictitious play, if our algorithm converges, a {\em pure}-strategy equilibrium is produced. 
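To make the iteration concrete, the classical FP update can be written schematically in our notation (this is an illustrative sketch; the formal treatment, including the generalisation to incomplete information, is given in Section~\ref{sec:fp}). At iteration $t$, a best response $\a_t \in \arg\max_{\a \in \A} \ue(\a, \d_{t-1})$ is chosen against the current beliefs $\d_{t-1}$, and the beliefs are then updated as the running empirical frequency
\begin{equation*}
\d_t(\a) \;=\; \frac{t-1}{t}\,\d_{t-1}(\a) \;+\; \frac{1}{t}\,\mathbb{1}\{\a = \a_t\},
\end{equation*}
where $\mathbb{1}\{\cdot\}$ denotes the indicator function.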
%Furthermore, majority of currently available computational techniques and algorithms for finding an NE (in Section~\ref{sec:related} we briefly list some of the more prominent methods) are inapplicable to games of incomplete information with continuous types. As a result, a general technique such as ours has a high practical significance.

%In more detail, our technique extends a well-known fictitious play~\cite{brown_51,robinson_51} algorithm to games of incomplete information. Fictitious play was initially proposed as an iterative method for computing equilibria in zero-sum games of complete information. In each iteration, the algorithm chooses a best response to the frequency distribution of best-response actions from previous iterations. 
%If these frequency distributions, known as FP {\em beliefs}, converge, the converged distribution yields a {\it mixed} strategy Nash equilibrium of the game (see, e.g.,~\cite{fudenberg_levine_98_book}). 
%We generalize fictitious play to games of incomplete information. This is an important contribution for two reasons. First, unlike standard fictitious play, our generalized algorithm in convergence produces a {\em pure} strategy equilibrium. Second, to the best of our knowledge there are no other solvers applicable to games of incomplete information, thus, a widely applicable technique as ours has a higher practical significance, than the original fictitious play algorithm.



%We generalize the fictitious play algorithm to games of incomplete information with {\em finite} action spaces and {\em continuous} type spaces.

%However, when approaching multi-player games with larger sets of actions, even the most advanced algorithms (see, e.g.,~\cite{govindan_wilson_2004}) may fail to converge, and necessitate additional game-specific tuning. Thanks to its simplicity, fictitious play is often used as the first step in a study of a larger complete information game.

%Our technique is applicable to static games of incomplete information with {\em finite} action spaces and {\em continuous} type spaces. 
%(thus the strategy space, which maps types to actions, is uncountably infinite\commentv{why is this important?}). 
Following much of the game-theoretic literature (see, e.g.,~\cite{krishnabook,mwg}), we focus on {\it symmetric} games (where all players have the same type-dependent utility function, action space, type space, and type distribution) with single-dimensional types.\footnote{In \ref{sec:asym} we relax the symmetry assumption. The assumption of single-dimensional types is further discussed in \ref{sec:multtype}.} Our goal is to find a pure symmetric equilibrium, which is known to exist in this class of games under very mild assumptions (see Section~\ref{sec:model} for details). The class of games we consider includes a wide range of commonly studied static games of incomplete information. Examples include single-sided auctions, double auctions, Cournot/Bertrand duopoly with asymmetric information and negotiation with incomplete information.
\label{othergames}
Whereas we assume a continuous type space, our algorithm requires the space of {\it actions} to be finite. In fact, in many cases, such as auctions with discrete bids (consider the auctioneer {\em stepping} up the price in an English auction), finite action spaces are inherent to the problem, yet more difficult to analyse theoretically.
%, finite action spaces are actually more natural but more difficult to analyse theoretically. 
%We provide a general approach for these games, and a specific implementation for a setting where a player's utility function is linear in its type.  
%Most of the analytical results on equilibria deal with continuous action spaces and continuous types. There, continuity is helpful in analytical characterisation. In our work, finiteness of the action space is important for algorithmic reasons. The finiteness of the action space does not mean that the type space should be finite as well: for instance, the fact that the smallest bid increment in an auction is \$10 instead of \$1 does not affect how the bidders value the item sold.
Furthermore, the combination of finite actions and continuous type distributions guarantees (see Section~\ref{sec:model}) the existence of a pure equilibrium, and also simplifies the representation of distributions over actions, i.e., FP beliefs.
%\commentz{In more detail, finiteness of the action space is required since our generalised fictitious play algorithm relies on frequency distributions of best response actions (i.e., FP beliefs).} 

While the steps of the fictitious play algorithm are the same in games of complete and incomplete information, novel challenges arise in the latter class of games, such as recovering a pure equilibrium strategy from the converged beliefs and computing a best-response action distribution. Unlike in games of complete information, the converged frequency distribution in incomplete information games does not correspond to a single mixed strategy.\footnote{This is because, in the case of incomplete information, a strategy is a mapping from types to actions, and there is a continuum of mappings that result in the same distribution of actions.} Moreover, it is not very useful to study mixed equilibria (as opposed to pure ones) for the types of games we consider. This is because, in games of incomplete information with continuous types, under mild conditions on the type distributions, whenever a mixed equilibrium exists there also exists a corresponding pure-strategy equilibrium (resulting in the same action distribution), the latter being a more practical and desirable solution concept.\footnote{A pure equilibrium is the preferred solution concept as it is conceptually simpler and does not rely on the ability of a player to randomise (see, e.g., the discussion of mixed-equilibrium implementation in~\cite{camerer_BGT_2003}).\label{fn:pure}} 

To this end, we start by proving that, for converged FP beliefs, there exists a pure-strategy equilibrium generating that distribution. This theoretical result applies to converged beliefs, which may only be observed asymptotically. In practice, we can only run a finite number of iterations of an FP procedure, never reaching exact convergence. Therefore, we need a way to compute equilibria from FP beliefs that have not completely converged. In this case, we turn to approximate equilibria: after each iteration of our algorithm, we check whether we can produce an $\epsilon$-equilibrium strategy from the current beliefs. For this, we need an algorithm that converts FP beliefs into a strategy such that the action distribution resulting from this strategy is the same as the beliefs. We design such an algorithm, which we call \btos, for games where the agent's utility is linear in a single-dimensional type (see Section~\ref{sec:model} for formal definitions). Linearity in type is a standard assumption in most commonly studied single-parameter games, including all forms of single-item auctions where an agent's type denotes the value of receiving the item (\ref{sec:extensions} discusses how our technique can be applied to domains with multi-dimensional types, such as multi-unit or combinatorial auctions). When applied to converged beliefs, \btos\ produces a pure-strategy equilibrium. Furthermore, if a sequence of beliefs is converging, \btos\ yields an $\epsilon$-equilibrium for any $\epsilon$ after a finite number of iterations.
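The intuition behind \btos\ can be sketched as follows (a schematic illustration only; the coefficients below are introduced purely for exposition, and the precise construction is given in Section~\ref{sec:linu}). When utility is linear in type, the expected utility of each action $\a$ against beliefs $\d$ is itself a line in the type,
\begin{equation*}
\uet(\a,\type) \;=\; \alpha_{\d}(\a)\,\type + \beta_{\d}(\a),
\end{equation*}
so the upper envelope of these lines partitions the type space into intervals, on each of which a single action is a best response. A strategy consistent with $\d$ can then be obtained by assigning to each action $\a$ an interval of types carrying probability mass $\d(\a)$, with intervals ordered by the slopes of the corresponding lines.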

We illustrate the power of our approach by finding equilibria in an important and previously unsolved problem: simultaneous sealed-bid auctions. In particular, we study a complete spectrum of combinatorial preferences, from perfect substitutes to perfect complements. We choose simultaneous auctions because it is a fundamental, well-known model that has received attention in the literature. However, both analytical and numerical results were previously obtained only for special cases~\cite{krishna1996simultaneous,GerdingAAMAS08}.
%In addition to numerical results, we provide an analytical approach for finding equilibria. Specifically, for uniform distribution of types, we show that an equilibrium can be found by solving systems of polynomial equations. We use the approach to derive results for a special case of simultaneous auctions validating the corresponding numerical results.




Finally, in order to benchmark our numerical results, we provide an analytical characterisation of the equilibrium for games with linear utility functions in terms of a system of polynomial equations.  Using this characterisation, we show how, for the simultaneous auctions setting with two players, two bids, and two auctions, the system can be solved analytically, providing exact equilibrium and uniqueness results. We then show that this derived equilibrium matches the results obtained numerically.
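As a schematic illustration of this characterisation (the cutoff notation here is introduced only for exposition; the full development appears in Section~\ref{sec:analytical}), a candidate pure strategy can be described by cutoff types $\type_0 \le \type_1 \le \dots \le \type_K$ that partition the type space, with action $\a_k$ played on the $k$-th interval. Equilibrium then requires indifference between the actions played on adjacent intervals at each interior cutoff:
\begin{equation*}
\uet(\a_k, \type_k) \;=\; \uet(\a_{k+1}, \type_k), \qquad k = 1,\dots,K-1.
\end{equation*}
For uniformly distributed types, the expected utilities are polynomial in the cutoffs, so these indifference conditions form a system of polynomial equations.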

Against this background, our contributions to the state-of-the-art are as follows.
\begin{itemize}
\item We extend fictitious play to games of incomplete information with finite actions and single-dimensional continuous types. We prove that, whenever fictitious play beliefs converge, there exists a pure-strategy Bayesian-Nash equilibrium consistent with the converged beliefs. 
\item We provide an algorithm that converts converged beliefs into a pure equilibrium strategy for games with linear utility functions. We also show that, using this algorithm, if beliefs converge asymptotically, we can obtain an $\epsilon$-equilibrium for any $\epsilon>0$ within finite time.
%\item We find equilibria in a fundamental previously unsolved auction
\item We find equilibria in a prominent, yet previously unsolved, auction game --- simultaneous auctions for items ranging from perfect substitutes to perfect complements. 
\item We show that an equilibrium in a wide class of games with finite actions and continuous types can be found by solving a system of polynomial equations. Using this characterisation, we derive an equilibrium for a special case of simultaneous auctions. 
\end{itemize}

The remainder of the paper is structured as follows. We begin with a review of related work in Section~\ref{sec:related}. Our model of games of incomplete information is formally stated in Section~\ref{sec:model}. A generalised fictitious play algorithm for these games is presented in Section~\ref{sec:fp}. In Section~\ref{sec:linu} we provide a best-response algorithm and a procedure for converting FP beliefs into a strategy for games with utility functions linear in type. Section~\ref{sec:simauctions} applies our approach to a simultaneous auctions model. Finally, an analytical characterisation, along with an exact derivation for the special case of two auctions, two players, and two bid levels, appears in Section~\ref{sec:analytical}. Section~\ref{sec:conclusions} concludes.

%\begin{verbatim}
%Unlike most of existing literature, the action space is multi-dimensional: in the case of 2 simultaneous auctions, a bidder needs to decide what to bid in each auction based on his single-dimensional type.
%most existing numerical techniques are for single-auction
%  note: that the problem is not trivial even in 2nd price with discrete bids
%we can do more
%multi-dimensionality exposure problem (even tho type is single dim, the exp problem is present)
%non-monotonicity of each bid in type makes it hard. but we provide a characterisation: strat must be monotone in slope.
%gambit
%We do not have any theoretical results on convergence of the procedures, however, numerically we observe convergence in all of the runs.\commentv{say more here}
%\end{verbatim}

\section{Related Work}
\label{sec:related}
%Our work relates mainly to three areas of literature: fictitious play, general
%computational algorithms for finding equilibria in games of
%incomplete information and equilibria in simultaneous auctions. We
%discuss each of these areas in turn.
\noindent 
This section provides an overview of the fictitious play literature, as well as of other methods for finding Nash equilibria. The key distinction between our work and existing approaches is that our technique applies to games of {\em incomplete} information with {\em continuous} types. Note that games of incomplete information with discrete types can be viewed as games of complete information with a separate player for each possible type (see, e.g., Definition 26.1 in~\cite{osborne-rubinstein}). However, this representation is exponential in the number of types, making techniques for complete information games applicable only to very small game instances. A few techniques, which we review below, have been developed specifically for incomplete information games with discrete types. However, they too become intractable as the number of types increases.

In more detail, fictitious play was initially proposed as an iterative method for computing equilibria in static zero-sum games of complete information. It was subsequently shown to converge in several restricted classes of games, such as potential games~\cite{monderer_shapley_PGs_96} and bi-matrix $2\times N$ games~\cite{berger_2005}. For instance, the work of Monderer and Shapley~\cite{monderer_shapley_96_FP} shows FP convergence in a restricted class of complete information games (specifically, games that are {\em response equivalent} to {\em identical payoff games}).
A related method is no-regret learning (or regret matching), in which a player compares actions based on their average past performance (see, e.g.,~\cite{hart_mas-colell_2000, hart_2005}). This method has been shown to converge to a Nash equilibrium in the same restricted classes of games in which FP converges (see, e.g.,~\cite{jafari_etal_2001}). %When these assumptions do not hold, 
However, in general, the frequency distribution of actions produced by this method converges to the set of correlated equilibria, which is a weaker solution concept than Nash. 

\label{rel_major_changes_begin}The literature mentioned above applies to settings with complete information. This setting is well studied, and a number of other general-purpose solvers exist for computing Nash and correlated equilibria~\cite{gambit,mckelvey1996computation,agtEqComp,govindan_wilson_2004,papadimitriou_roughgarden_2008,chapman_et_al_2010}. In contrast, there are far fewer solution algorithms for games of incomplete information, though some (e.g.,~\cite{hoda_etal_2010,gilpin_etal_2007,koller_etal_96}) can handle (or be adapted to) incomplete information games with discrete, finite type sets, at the expense of computational tractability. Note, however, that they remain inapplicable to the setting of this paper, since we focus on games with continuous types.\label{hoda_gilpin_koller_incomplete_but_discrete}
%% . However, our focus is on games of incomplete information with continuous types, where the above algorithms cannot be applied directly and an additional adaptation comes at the expense of
%Next we review existing techniques designed for games of incomplete information.

%Solution algorithms for incomplete information games commonly specialise on settings with a particularly structure, such as a single auction (see, e.g.,~\cite{reeves2004cbr,armantier08, krishnabook}) or ``tree games" (see, e.g.,~\cite{singh_soni_wellman_2004}). 


%none of the existing solvers are capable of both efficiently representing and solving non-trivial games of incomplete information. 
To address the issue of scalability, compact representations such as tree games~\cite{singh_soni_wellman_2004} (where the utility structure induces a set of dependencies between players that form a tree), Bayesian Action-Graph Games (BAGGs)~\cite{jiang_leyton-brown_2010}, and Multi-Agent Influence Diagrams (MAIDs) \cite{koller_milch_03} have been developed to exploit the game structure: e.g., independence of type distributions and symmetry. An additional feature of the latter two approaches is their ability to make the game structure available to general-purpose solution algorithms. In particular, Jiang \etal~\cite{jiang_leyton-brown_2010} show how two different algorithms, the global Newton method~\cite{govindan_wilson_2003} and the simplicial subdivision method~\cite{van-der-Laan_etal_1987}, can be used with BAGGs, and demonstrate experimentally that these algorithms can result in exponential speedup. However, both BAGGs and MAIDs rely on the fact that the type spaces of the game they encode are discrete and finite. 
%Furthermore, although a generalisation of BAGGs and MAIDs to continuous types is possible, the computational benefits disappear. Specifically, in the case of BAGGs, the number of nodes in its graph representation is directly linked to the number of possible types. Hence, directly using BAGGs with continuous types would require continuous graphs. While possible in theory, it is unclear how to handle these structures computationally. A similar problem occurs with MAIDs, where type conditioned distributions are used. Once again, while continuous conditionals are mathematically available, the inference in such a MAID would become either intractable, or further limit the set of representable games.

 
%   The latter example comes from perhaps the most promising approach to encoding incomplete information games, namely using a graph structure to capture relationships between player utilities and types. \commente{Latter sentence floury, plus is the term tree games really correct? Are trees special cases of graphical structures analysed, or do all games analysed with graphical structure have a tree structure?} Now, .\commente{This does not seem to be an advantage, why study graphical structure in the first place? Surely, the advantage is the speed-up, not that you can use general-solvers.} That is, the representation does not necessarily impose an algorithmic structure to find an equilibrium.  

% As a result, in this paper we avoid using graph-based game representations, and use a generic functional representation (similar in principle to that of Mas-Colell~\cite{mas-colell_84}) convenient to the use with iterative algorithms such as fictitious play or regret-based algorithms.

%In general, however, game solvers are not independent from the game representation. Rather, they tend to focus on some representation particulars, exploiting them to achieve efficiency. Unfortunately, this tends to make many algorithms, that rely on type space being discrete, directly inapplicable to games with continuous types. 

Furthermore, unlike the case of BAGGs and MAIDs, most representations and solution algorithms impose strong restrictions on each other, which consequently limits the class of games to which they can be efficiently (if at all) applied. For example, Koller~\etal~\cite{koller_etal_96} had to convert an extensive-form game into a sequence form\footnote{The sequence form is a game description similar to the normal form, in which action sequences replace pure strategies. For typed games it assumes a discrete and finite type space, and hence is inapplicable to our domain.} in order to supply a payoff matrix to the underlying Lemke algorithm. While the sequence form is linear in the size of the extensive-form tree, the number of action sequences in Koller's conversion can be exponential in the number of information sets of the game, which significantly impacts the scalability of the overall algorithm. 
In normal form games with infinite strategy spaces, Stein~\etal~\cite{stein_ozdaglar_parillo_2008} had to either limit the scope to just two players or approximate the solution by discretising the strategy space. %Note that increasing the number of types comes at the cost of losing computational tractability, and a smaller number of types does not allow the equilibrium behaviour of the original continuous case to be reproduced.
%In fact, the converse, that is solving finite games by continuous approximations, may be more effective and general (see e.g.~\cite{ganzfried_sandholm_2010, stein_ozdaglar_parillo_2008}).\commente{Again you lost me. How is continuous approx the converse of what you said before?} 
Reeves and Wellman~\cite{reeves2004cbr} restrict attention to games with two players and piecewise-uniform type distributions, and apply iterated best response to search for a Bayesian-Nash equilibrium.

\label{related_poker}
Another related line of research has grown out of the international poker competition~\cite{zinkevich_littman_06}, which has inspired a number of generally applicable algorithms for solving games of incomplete information. For instance, the counterfactual regret algorithm was developed by Zinkevich~\etal~\cite{zinkevich_etal_2007}, and a method combining fictitious play with value iteration was proposed by Ganzfried and Sandholm~\cite{ganzfried2009computing}. Furthermore, Hawkin~\etal~\cite{hawkin2011automated} focus on transforming a game with a continuum of actions into a smaller game, and develop a new regret-minimisation algorithm to solve it, building on the counterfactual regret algorithm of~\cite{zinkevich_etal_2007}. These papers differ from our approach in that they view poker as a game with a finite {\em discrete} type space, and their algorithms rely on this property. %Consequently, the algorithms of~\cite{zinkevich_etal_2007,ganzfried2009computing} cannot be directly applied to games with continuous types.
Another approach is taken by Ganzfried and Sandholm~\cite{ganzfried_sandholm_2010}, who formulate the problem as a mixed integer linear feasibility program. \label{related_gs}
Their algorithm requires the set of types to be finite (and the number of constraints increases linearly with the number of types), but the authors discuss how the approach can be extended to deal with continuous types. However, this extension requires the type distributions to be piecewise linear, and additional constraints are needed for each segment. By contrast, our algorithm is specifically designed for settings with continuous type spaces and does not rely on assumptions about the shape of the distribution. Furthermore, unlike our approach, the equilibrium they obtain is mixed, whereas our algorithm always produces a pure-strategy equilibrium. In addition, their approach relies on having a qualitative model of the domain, which means that the number of intervals that divide the type space, as well as the actions associated with each of these intervals, are known in advance. In contrast, our algorithm assumes no such knowledge. \label{rel_major_changes_end}

Closer to the settings considered in this work, Gerding \etal\ \cite{GerdingAAMAS08} applied a variant of fictitious play called {\it smoothed fictitious play} to find mixed-strategy equilibria in simultaneous auctions selling perfect substitutes when the number of bidder types is small. By contrast, here we consider continuous types and show that FP can be used to find {\em pure} equilibria. The FP algorithm for finding pure equilibria in games with incomplete information was first introduced in our previous work, where it was applied to simultaneous auctions with perfect substitutes~\cite{ecs17271}. The current paper significantly extends~\cite{ecs17271}. In particular, we introduce, for the first time, the $\btos$ algorithm to recover the pure strategy from the beliefs; we formally prove several properties of this algorithm; we extend the analysis of single-sided simultaneous auctions to a range of combinatorial preferences, from perfect substitutes to perfect complements; and, finally, we provide an analytical characterisation of the equilibrium strategy for small settings.

Our work also contributes to the literature on analytical derivations of equilibrium bidding strategies in the domain of simultaneous single-sided auctions, and auctions with discrete bids. Simultaneous Vickrey auctions selling complementary goods are studied in~\cite{krishna1996simultaneous}. There, a distinction is made between local bidders, who only participate in one given auction, and global bidders, who can participate in all auctions. The equilibrium and resultant market efficiency are derived for a model where each auction contains both global and local bidders. The model studied in \cite{krishna1996simultaneous} is further extended to the case of common values in \cite{rosenthal96simultaneous}. The model we consider is more general in that it also applies to games other than auctions, and we obtain solutions for a variety of preferences, from perfect complements to perfect substitutes. The case with substitutable goods is studied in~\cite{szentes03three} in a setting restricted to three sellers and two global bidders, with each bidder having the same value (and thereby knowing the values of the other bidders). The space of symmetric mixed equilibrium strategies is derived for this special case. Other settings where bidders face multiple simultaneous sealed-bid auctions are studied in, e.g.,~\cite{mcafee93,peters97,gerding07}. These papers assume that bidders bid in only a single auction and choose this auction with some probability (where this probability depends on the reserve prices of the auctions). In~\cite{GerdingJAIR2008}, however, it was shown that choosing a single auction is not optimal. Specifically, if all other bidders choose only one auction, and when their types are sampled from distributions with non-decreasing hazard rates (which includes a wide range of common distribution functions, including uniform, normal and exponential), the best response is always to place non-zero bids in all auctions.
Our paper differs from~\cite{GerdingJAIR2008} since we consider equilibrium behaviour, whereas the analysis in \cite{GerdingJAIR2008} is decision-theoretic. Moreover, their analysis is limited to perfect substitutes and empirical evaluation, and relies on discretising the type space. In~\cite{ecs13267}, the authors attempt to find the equilibrium strategies for this setting using iterated best response, but they show that, in fact, the strategies never converge.
  
Finally, a number of researchers have investigated auctions with discrete bid levels. A first-price auction for a single item is considered in~\cite{chwe_89}. There, the equilibrium is characterised, and revenues are compared for different increments defining sets of evenly spaced discrete bids. Discrete bids that are not necessarily uniformly spaced are studied in~\cite{mathews08} in the context of a second-price auction for a single item. A special case of our characterisation of the best response for linear utilities appears there for the single-action/single-item case. Our analytical characterisation goes beyond single-item auctions, allowing derivation of equilibria in previously unsolved problems such as simultaneous auctions (see Section~\ref{sec:analytical}).

\section{Games with Finite Actions and Continuous Types}
\label{sec:model}
\noindent We consider symmetric games of incomplete information with a finite number of actions and players with single-dimensional types, where types are sampled from a continuous type space. A game consists of $n$ players, and the set of players is denoted by $N$. Each player draws his type $\type \in \types\subset\R$ independently from a commonly known continuous distribution over $\types$ with density $f$, and a corresponding cumulative distribution $F$. Without loss of generality, we take the type space to be $\types=[0,1]$. 
The same finite set of actions $A = \{\a_1,\ldots, \a_m\}$ is available to each player. We adopt a standard independent private value model, where the utility of a player is independent of the types of the other players and of the identity of the player performing an action (only the action matters, not who executed it). Therefore, the utility of a player is a function of his type, his action, and the actions of the other players, $\ut: \types \times A^n \to \R$. For our theoretical results, we furthermore require that the utility is continuous in $\theta$. The tuple $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$ then defines a Bayesian game.\footnote{Our notation for agents' utility and type exploits the fact that the game is {\em symmetric}: i.e., each agent has the same type and action spaces, and the utility of each agent is independent of his identity. Specifically, each agent's utility is given by the same function $u$, which is not indexed by $i$.\label{ft:sym}} In the following, we refer to this setting as Games with Finite Actions and Continuous Types (G-FACTs). Our algorithm works in the context of G-FACTs as described here, but note that some of these assumptions can be relaxed. In particular, we discuss extensions to asymmetric games and multi-dimensional types in~\ref{sec:extensions}. The assumptions of independent private values and finite actions are inherent to the algorithm.

As is common in the literature on Bayesian games, we study symmetric Bayesian-Nash equilibria, i.e., equilibria where all players follow the same strategy (see, e.g., Chapters 2, 3, 6 and 7 in~\cite{krishnabook}).
A pure strategy $\s: \types \rightarrow A$ is a function that specifies an action for each player type. We denote by $\S$ the set of all strategies $\s: \types \rightarrow A$. Letting $X=(X_1,\ldots,X_{n-1})\in \types^{n-1}$ denote the random variables representing the types of the other $n-1$ players, the expected utility of a player of type $\type$ playing action $\a_i$ when all other players follow the strategy $\s$ is $\E_{\{X_j\sim F\}_{j=1}^{n-1}}[\ut(\type,\a_i,(\s(X_1),\ldots,\s(X_{n-1})))]$.

Instead of expressing the expected utility in terms of the strategies of other players, it is more convenient to use an equivalent representation in terms of the distribution of actions of the other players. The latter representation allows us to take advantage of the finiteness of the action space, enabling an efficient best-response calculation.
The action distribution resulting from a strategy is derived as follows. Let $\s^{-1}(\a_i) \subseteq \types$ denote the set of all types playing action $\a_i$. The probability that an agent's type is from this set is $\d_s(a_i)=\int_{\s^{-1}(\a_i)}f(x)dx$, and $\d_{\s} \in \Delta(A)$ is the distribution of actions resulting from an agent playing $\s$. 
When all other agents follow the same strategy $\s$, the expected utility of an agent of type $\type$ playing action $\a_i$ can be written as:
\begin{equation}
 \uet(\type,\a_i,\d_{\s}) = \E_{\{Y_j\sim \d_{\s}\}_{j=1}^{n-1}}[\ut(\type,\a_i,(Y_1,\ldots,Y_{n-1}))] \label{eq:expected}
\end{equation}
The expected utility from playing a strategy $\s'(\cdot)$ when everyone else plays a strategy $\s(\cdot)$ is $\ue(\s', \d_{\s}) = \E_{\type}[\uet(\type,\s'(\type),\d_{\s})]$.
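To make the construction of $\d_{\s}$ concrete, the following minimal Python sketch (the helper names are our own, not part of the model) computes the action distribution induced by a piecewise-constant strategy. It assumes a uniform density on $[0,1]$, so each integral over a preimage $\s^{-1}(\a_i)$ reduces to an interval length.

```python
# Action distribution d_s induced by a piecewise-constant strategy s.
# Under a uniform type density f on [0, 1], d_s(a_i) equals the total
# length of the preimage s^{-1}(a_i). `thresholds` lists the interval
# endpoints: actions[i] is played on [thresholds[i], thresholds[i+1]).

def action_distribution(thresholds, actions):
    return {a: hi - lo
            for a, lo, hi in zip(actions, thresholds, thresholds[1:])}

# A strategy mapping [0, 0.2) -> bid 0, [0.2, 0.3) -> bid 1, etc.
d_s = action_distribution([0.0, 0.2, 0.3, 0.65, 1.0], [0, 1, 2, 3])
```

With a non-uniform density $f$, the interval lengths would be replaced by differences of the cumulative distribution $F$ at the same thresholds.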

\begin{example}\label{example:fpa} To illustrate the notation, and to give an example of a game from the G-FACT class, consider a simple, single-item first-price auction with $n$ bidders, each bidder's value uniformly distributed in $\Theta = [0,1]$ ($F = U(0,1)$), and four discrete bids from 0 to 3 dollars ($A = \{0,1,2,3\}$). Furthermore, suppose the strategy is given by:
\[
\s(\theta) = 
\begin{cases}
a_1 = 0 & \text{ if } 0 \leq \theta < 0.2\\
a_2 = 1 & \text{ if } 0.2 \leq \theta < 0.3\\
a_3 = 2 & \text{ if } 0.3 \leq \theta < 0.65\\ 
a_4 = 3 & \text{ if } 0.65 \leq \theta \leq 1\\
\end{cases}
\]
Given that $\theta$ is uniformly distributed, the action distribution is as follows: $\d_{\s}(a_1)=0.2$, $\d_{\s}(a_2)=0.1$, $\d_{\s}(a_3)=0.35$, and $\d_{\s}(a_4)=0.35$. Suppose that a player with type $\theta$ derives a utility of $(3 \theta - a_i)$ if she wins the item, and $0$ otherwise. Furthermore, assuming a fair tie-breaking rule, the probability of winning when the agent ties with $j$ other agents is $\frac{1}{j+1}$. Note that a player wins the auction if either all other bids are lower (and thus $j=0$), or if $n-j-1$ bids are lower and the remaining $j$ bids (excluding her own) are equal and she wins the tie. Furthermore, there are $\binom{n-1}{j}$ ways to choose the $j$ bidders to tie with. Each such tie occurs with probability $\left( \d_{\s}(a_i) \right)^{j} \left( \sum_{a_k < a_i} \d_{\s}(a_k) \right)^{n-j-1}$, where the first term ensures that there are $j$ bids equal to the agent's bid, and the second term ensures that all other bids are lower. Then the expected utility of a player with type $\theta$ when playing action $a_i$, given $n-1$ other bidders, is:
\begin{equation}
\uet(\type,\a_i,\d_{\s}) = (3 \theta - a_i)  \sum_{j=0}^{n-1} \binom{n-1}{j} \frac{1}{j+1}  \left( \d_{\s}(a_i) \right)^{j} \left( \sum_{a_k < a_i} \d_{\s}(a_k) \right)^{n-j-1}
\label{eq:fpa}
\end{equation}
\noindent For example, the probability of winning the auction when playing action $a_1$ is equal to $\frac{1}{n} \d_{\s}(a_1)^{n-1}$: since $a_1$ is the lowest bid, only the term with $j=n-1$ is non-zero. Note that, due to the tie-breaking rule, we need to sum over all possible values of $0 \leq j \leq n-1$, weighting each term by the binomial coefficient that counts the possible occurrences. This calculation becomes more onerous when we consider multiple simultaneous auctions in Section~\ref{sec:simauctions}.
\end{example}
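The expected utility in Equation~(\ref{eq:fpa}) is easy to evaluate numerically. The following Python sketch (our own illustrative code, with hypothetical helper names) implements the win probability under fair tie-breaking. A useful sanity check: when an agent bids according to the same distribution as the others, the total win probability must be exactly $1/n$, since exactly one of $n$ symmetric bidders wins.

```python
from math import comb

# Win probability of bidding a_i against n-1 opponents whose bids are
# drawn i.i.d. from the action distribution d: sum over the number j of
# tying opponents, with fair tie-breaking factor 1/(j+1).
def win_probability(a_i, d, n):
    lower = sum(p for a, p in d.items() if a < a_i)  # all others bid less
    return sum(comb(n - 1, j) / (j + 1)
               * d[a_i] ** j * lower ** (n - 1 - j)
               for j in range(n))

# Expected utility (3*theta - a_i) * P(win), as in the example's formula.
def expected_utility(theta, a_i, d, n):
    return (3 * theta - a_i) * win_probability(a_i, d, n)

# Action distribution induced by the example strategy.
d = {0: 0.2, 1: 0.1, 2: 0.35, 3: 0.35}
```

For the lowest bid, only the all-tie term survives, recovering the closed form $\frac{1}{n}\d_{\s}(a_1)^{n-1}$ noted in the example.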

Now, as mentioned earlier, we are interested in finding equilibrium strategies. We define the necessary terms below.

\begin{definition}
\label{def:eqstrategies}
A strategy $\s: \types \rightarrow A$ is a symmetric pure-strategy equilibrium of a game $\Gamma$ if:
\begin{align*}
\ue(\s, \d_{\s}) \ge \ue(\s', \d_{\s}) \quad \forall\ \s' \in \S.
\end{align*}
\end{definition}
Some of our results do not produce an exact equilibrium. In those cases, we use an approximate equilibrium defined below.
\begin{definition}
\label{def:epsilonnash}
A strategy $\s: \types \rightarrow A$ is a symmetric pure-strategy $\epsilon$-equilibrium of a game $\Gamma$ if:
\begin{align*}
& \ue(\s, \d_{\s}) + \epsilon \ge \ue(\s', \d_{\s}) \quad \forall\ \s' \in \S.
\end{align*}
\end{definition}
It will sometimes be convenient to state this definition in terms of deviations in actions for each type, rather than deviations in strategies; Definition~\ref{def:eqstrategies} can be re-stated accordingly (see Definition 8.E.1 and Proposition 8.E.1 in~\cite{mwg}).
\begin{definition}
\label{def:eqactions}
A strategy $\s: \types \rightarrow A$ is a symmetric pure-strategy equilibrium of a game $\Gamma$ if for almost\footnote{``Almost'' in this context means that the set of types for which the strategy $\s$ does not prescribe an optimal action has probability zero~\cite{rudin_r-and-c-analysis_book, hildenbrand_74}.\label{fn:almost}} every $\type \in \types$ (w.r.t.\ $F$):
\begin{align*}
\uet(\type,\s(\type),\d_{\s}) \ge \uet(\type,\a_i,\d_{\s})\quad \forall a_i\in A.
\end{align*}

\end{definition}

We note that limiting our analysis to symmetric pure-strategy equilibria does not impact the space of games we can solve. 
\begin{proposition}[from~\cite{mas-colell_84,radner_rosenthal_82}]
Every G-FACT has a symmetric pure-strategy equilibrium.\label{prop:pure_ne}
\end{proposition}
\begin{proof}[Proof]
G-FACTs belong to a larger class of games in which a pure-strategy symmetric equilibrium is known to exist if two conditions hold~\cite{mas-colell_84,radner_rosenthal_82}: a) the distribution of types is continuous; b) $\uet(\type,\s(\type),\d_{\s})$ is continuous in the type $\type$ and in the distribution of actions $\d_{\s}$. The first condition holds for G-FACTs by definition. The second follows from the assumption that the utilities $\ut(\type,\a_1,...,\a_n)$ are continuous in $\type$, together with the fact that $\uet$ is an expectation of such functions over a finite space of events (i.e., the probabilities that certain combinations of actions are played). In more detail, $\uet$ is a (finite) linear combination of continuous functions $\ut(\type,...)$, and therefore $\uet$ is itself continuous in $\type$. Furthermore, because $\d_\s$ dictates the coefficients of this linear combination, $\uet$ is also continuous in $\d_\s$. As a result, a G-FACT can always be solved in terms of a pure-strategy symmetric equilibrium.

In addition, any mixed equilibrium in such games can be converted into a pure-strategy equilibrium using a purification procedure~\cite{radner_rosenthal_82}. Intuitively, since a strategy is a mapping from types to actions, such a purification involves finding a pure-strategy mapping which results in the same action distribution as the original mixed strategy.
\end{proof}
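For intuition, under a uniform type distribution one way to match a given action distribution with a pure strategy is simply to partition the type space into intervals whose lengths equal the target probabilities. The Python sketch below (illustrative names, uniform $F$ assumed) shows this interval construction; note it only captures the distribution-matching part of the intuition, whereas the actual purification of~\cite{radner_rosenthal_82} must also preserve optimality of the prescribed action for each type.

```python
from itertools import accumulate

# Map a target action distribution to a pure strategy on [0, 1] that
# induces it: partition [0, 1] into consecutive intervals of length
# probs[i], and play actions[i] on the i-th interval.
def purify(actions, probs):
    cuts = list(accumulate(probs))        # cumulative thresholds
    def strategy(theta):
        for a, c in zip(actions, cuts):
            if theta < c:
                return a
        return actions[-1]                # theta == 1 edge case
    return strategy

# s plays bid 0 on roughly the first 20% of types, bid 1 on the next
# 10%, bid 2 on the next 35%, and bid 3 on the rest.
s = purify([0, 1, 2, 3], [0.2, 0.1, 0.35, 0.35])
```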


\section{Fictitious Play for G-FACTs}\label{fp_descript}
\label{sec:fp}
\noindent
In this section we extend fictitious play, which finds mixed equilibria in some complete information games, to search for {\em pure}-strategy equilibria in G-FACTs. Before doing so, we review the basic FP algorithm as it applies to normal form games with complete information.

In detail, at each iteration $t$, the FP algorithm consists of the following two steps:
\begin{itemize}
\item Compute best response: given the {\em belief} that the mixed strategy of the opponent is $\s'_t$, calculate a best response $\s_t$.
\item Update beliefs: merge $\s'_t$ and $\s_t$ into a new mixed strategy $\s'_{t+1}$.
\end{itemize}
These steps are then repeated until some convergence criterion is satisfied. A standard way to perform the merge in the second step is to average all best responses observed thus far; as a result, the influence of any subsequent best-response strategy diminishes with time. However, other approaches have been suggested, e.g., using a weighted average, where higher weights are assigned to more recent strategies, or a sliding-window average, where only a small list of recent best responses is kept (see~\cite{fudenberg_levine_95} for a discussion of these variations).
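The equivalence between the standard update rule $\kappa(t)=\frac{t}{t+1}$ and plain averaging can be seen in a few lines of Python (a sketch with assumed names, not code from the paper):

```python
# Belief update with kappa(t) = t/(t+1): the new belief is the running
# average of all best-response distributions observed so far.
def update(belief, br_dist, t):
    k = t / (t + 1)
    return {a: k * belief[a] + (1 - k) * br_dist[a] for a in belief}

# Three (hypothetical) best-response distributions over two actions.
observed = [{0: 1.0, 1: 0.0}, {0: 0.0, 1: 1.0}, {0: 1.0, 1: 0.0}]
belief = observed[0]                       # t = 0: belief is the first one
for t, d in enumerate(observed[1:], start=1):
    belief = update(belief, d, t)
# belief now equals the plain average of the three observations.
```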
%Notice that no actual game playing occurs, hence the term {\em fictitious}. In other words, FP is designed to calculate an equilibrium of a one shot game, rather than guide action selection of an actual interaction between players. 

%. These different approaches effect not only the convergence speed, but also guarantees in terms of converging to an equilibrium\commentv{sounds funny: seems to suggest that you can converge when there is no equilibrium. is it correct?}. We refer to~\cite{fudenberg_levine_95} for an overview of variations of FP. 


%The original design of FP is based on complete information games and does not directly apply to
%games of incomplete information. In games of incomplete information with finite action spaces, we can still talk about distr
%Nonetheless, due to anonymity of the game and the symmetry of
%the equilibrium we are interested in, it is possible to modify FP to
%encompass games of incomplete information. These modifications, however, are not trivial in their nature or the effect they have on the overall algorithm. In fact, several insights gained through these modifications allowed us to formulate several analytical, rather than computational, results (see Section~\ref{sec:analytical}). Our key modification is based on the observation that,
%with respect to the effects it has on utility, a strategy is
%completely characterised by the action distribution it produces. We
%therefore modify FP to work with action distributions as its beliefs, resulting in an extended FP algorithm shown in Figure~\ref{fig:fp}
%was designed to work games of complete information, Clearly, standard FP only works for games where the number of strategies is finite. By contrast, our setting with incomplete information consists of an infinite number of mappings from types to actions, and therefore we cannot directly apply standard fictitious play. However, because in our setting players' utilities only depend on the actions of other players, and the number of actions is finite, we can use FP to update (the beliefs about) the action distribution instead of producing a mixed strategy. In doing so, we need to extract the distribution from the best-response strategy. Given this, the detailed extended FP algorithm to deal with our setting is given in Figure~\ref{fig:fp}.
\begin{figure}[ht]
%{\scriptsize
{\center
\begin{tabular}{|l |} \hline \parbox{3.2 in} {
\begin{tabbing}
\textbf{Algorithm \fp} \\
\textbf{Input:} \:\:\: \= game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$, initial beliefs $\d^0$, update rule $\kappa$\\
\textbf{Output:} \> if converges, equilibrium strategy\\
\\ \ttfamily
1:\quad \= set iteration count $t=0$ \\
2: \>  {\bf repeat}\\
3: \> \quad strategy $\s=\br(\Gamma,\d^t)$\\
4: \> \quad compute the corresponding action distribution:\\
\> \quad \quad $\forall a_i \in A: \d_{\s}(a_i)=\int_{\s^{-1}(\a_i)}f(x)dx$\\
5: \> \quad update beliefs:\\
 \> \quad \quad  $\forall a_i \in A: \d^{t+1}(a_i)=\kappa(t)\d^t(a_i)+(1-\kappa(t))\d_{\s}(a_i)$\\
6: \> \quad set $t = t+1$\\
7:  \> {\bf until} $\converged$\\
8:  \> {\bf return} $\btos(\d^{t})$
\end{tabbing}
} \\ \hline
\end{tabular} \\}
\caption{\label{fig:fp}Fictitious play algorithm for symmetric games of incomplete information.}
\end{figure}
The algorithm in Figure~\ref{fig:fp} generalises the two steps described above to symmetric games of incomplete information with finite actions and continuous types. One input to the algorithm is the initial beliefs, $\d^0$, about the action distribution. At each iteration $t$, the best-response strategy is computed (line 3) with respect to the beliefs about the action distribution of an opponent, $\d^t$ (since we search for symmetric equilibria, each opponent is assumed to draw his action from the same distribution). The algorithm for computing a best-response strategy is referred to as \br, and its details depend on the specific domain (since the types are continuous, we cannot simply enumerate all possible strategies as with discrete type spaces). In Section~\ref{sec:br} (see Figure~\ref{fig:brlinear}) we provide an instantiation that efficiently computes the best response for settings with linear utility functions.
Formally, $\s$ is a best-response strategy (or simply a ``best response") to an action distribution $\d$ if:
\begin{equation}
 \s(\type) \in \arg\max\limits_{a_i\in A}\uet(\type,a_i,\d) \quad \forall \type\label{eq:br}.
\end{equation}


Once the best response, $\s$, is obtained, its corresponding action distribution, $\d_{\s}$, is calculated (line 4), and the beliefs for the next iteration, $\d^{t+1}$, are generated using an update rule (e.g., the standard update rule $\kappa(t)=\frac{t}{t+1}$, which averages all best-response distributions observed so far). If beliefs converge (line 7), a pure-strategy equilibrium can be obtained from these beliefs (line 8), as we prove in Theorem~\ref{exist_eq_con_bel}. We provide an algorithm, termed \btos, for recovering an equilibrium strategy for the case of linear utilities in Figure~\ref{fig:beliefsToStrategy}.
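To illustrate the full loop, the following Python sketch (our own illustrative code, not the paper's exact \br\ and \btos\ procedures) runs the algorithm of Figure~\ref{fig:fp} on the first-price auction of Example~\ref{example:fpa} with $n=3$ bidders, approximating the best response on a fine grid of types rather than computing it exactly.

```python
from math import comb

# Extended FP on a discrete-bid first-price auction: n = 3 bidders, bids
# {0, 1, 2, 3}, types uniform on [0, 1], utility (3*theta - a_i) on a win.
ACTIONS = [0, 1, 2, 3]
N = 3
GRID = [i / 2000 for i in range(2001)]    # grid approximation of [0, 1]

def win_prob(a_i, d):
    lower = sum(d[a] for a in ACTIONS if a < a_i)
    return sum(comb(N - 1, j) / (j + 1)
               * d[a_i] ** j * lower ** (N - 1 - j)
               for j in range(N))

def best_response_distribution(d):
    # Lines 3-4: pick the utility-maximising bid for each grid type; the
    # fraction of types choosing a_i approximates d_s(a_i) under uniform F.
    wp = {a: win_prob(a, d) for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for theta in GRID:
        best = max(ACTIONS, key=lambda a: (3 * theta - a) * wp[a])
        counts[best] += 1
    return {a: c / len(GRID) for a, c in counts.items()}

belief = {a: 1 / len(ACTIONS) for a in ACTIONS}   # initial beliefs d^0
for t in range(60):                               # lines 2-7 of the figure
    d_s = best_response_distribution(belief)
    k = t / (t + 1)                               # standard update rule
    belief = {a: k * belief[a] + (1 - k) * d_s[a] for a in ACTIONS}
```

After the loop, `belief` holds the (approximate) converged action distribution from which a pure strategy would be recovered by a \btos-style procedure.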


Convergence in fictitious play occurs if $\d^{t} \to \d^*$ as $t \to \infty$. This asymptotic convergence is called {\em convergence in beliefs} (see, e.g.,~\cite{fudenberg_levine_98_book}). In games of complete information, $\d^*$ is both the frequency distribution of actions and an equilibrium mixed strategy. In the incomplete information games studied here, $\d^*$ is just the frequency distribution of actions, and does not explicitly correspond to an equilibrium strategy (we demonstrate this in Section~\ref{sec:hom}). Nevertheless, a pure-strategy equilibrium can be recovered from the converged beliefs $\d^*$ by taking a best response to them, as stated below.
\begin{theorem}\label{exist_eq_con_bel}
If fictitious play beliefs converge $\d^{t} \to \d^*$ as $t \to \infty$, then there is a strategy $\s^*$ that is a best response to the converged beliefs $\d^*$ and that induces $\d^*$ as its action distribution $\d_{\s^*}=\d^*$; i.e., $\s^*$ is an equilibrium strategy. 
\end{theorem}
\begin{proof}
To prove the theorem, we first consider the mapping from an action distribution, $\d$, to the set of action distributions produced by all best responses to $\d$. This step is also used in Theorem 2 of~\cite{mas-colell_84} to prove the existence of an equilibrium distribution of actions, i.e. $\d_{\s^*}$. We then proceed by contradiction: assuming that $\d^*$ is not among the distributions produced by best-response strategies to $\d^*$, we show that this contradicts the convergence of $\d^t$ to $\d^*$. Details of the proof follow.

Given that the action space $A=\{a_1,...,a_{m}\}$ is finite, all distributions over $A$ form a simplex $\Delta(A)$. For $1\leq i\leq m$, denote by $e_i\in\Delta(A)\subset\mathbb{R}^{m}$ the vector whose $i$'th element is one, $e_i^i=1$, and whose other elements are zero, $e_i^j=0\ \forall j\neq i$. Let $E=\{e_i\}_{i=1}^{m}$. Then for any type $\type$ and distribution of opponent actions $\d$, the set of (pure) best responses can be described in terms of the following correspondence:
$$\Phi(\type,\d)=\{e_i\in E\ |\ \uet(\type,a_i,\d)\geq\uet(\type,a_j,\d)\ \forall a_j \in A\}.$$

$\Phi$ is non-empty, closed-valued, and upper hemicontinuous. By integrating over $\type$ we can obtain the set of all action distributions produced by best responses to $\d$. Let $\Psi(\d)=\int\Phi(\type,\d)f(\type)d\type$, i.e. the set of action distributions generated by the different best-response strategies, where:
$$\int\Phi(\type,\d)f(\type)d\type = \left\{\int\phi(\type,\d)f(\type)d\type\ |\ \phi:\types\times\Delta(A)\rightarrow E,\ \phi(\type,\d)\in\Phi(\type,\d)\right\}.$$ 
Since $f$ is continuous, $\Psi$ is non-empty, compact and convex-valued, and upper hemicontinuous.\footnote{Relevant theorems, their origin and
application to equilibria analysis can be found in the book by
Hildenbrand~\cite{hildenbrand_74}. Specifically, see Theorem 4 on p.64 and Proposition 7 on p.73.} 

Now, assume that $\d^*\not\in\Psi(\d^*)$; in other words, no best-response strategy to $\d^*$ has the action distribution $\d^*$. In this case, since $\Psi(\d^*)$ is compact and convex, $\d^*$ can be separated from $\Psi(\d^*)$. Intuitively, this means that the distance from $\d^*$ to any distribution generated by a best response to $\d^*$, although possibly small, is not negligible. More formally, there exist two open neighbourhoods, $U_1$ of $\d^*$ and $U_2$ of $\Psi(\d^*)$, such that the following holds:
$$\d^*\in U_1,\ \ \Psi(\d^*)\subset U_2,\ \ U_1\cap U_2=\emptyset.$$ Furthermore, these neighbourhoods can be chosen so that there exists $\epsilon>0$ such that $U_1$ is an open ball of radius $\epsilon$, $U_1 = B_\epsilon(\d^*)$, and the distance between $U_1$ and $U_2$, $d(U_1,U_2)$, is at least $\epsilon$. In addition, since $\Psi$ is upper hemicontinuous, we can reduce $\epsilon$ to guarantee that $\forall \d\in U_1, \Psi(\d)\subset U_2$. In other words, best responses to distributions close to $\d^*$ have action distributions that are very close to those generated by best responses to $\d^*$ itself.

Notice that since $\Psi$ is compact and convex-valued, there is a constant $c>0$ such that for any $\d\in U_1, \d'\in\Psi(\d)\subset U_2$, and $0<\lambda<1$ it holds that $d(\d,U_2) > d((1-\lambda)\d+\lambda\d',U_2)+c\lambda$.

Now, let $T$ be such that $d(\d^T,\d^*)<\epsilon$; i.e., $\d^T\in B_\epsilon(\d^*)$. During an FP update, $s(\d^T)$ is a best response to $\d^T$, and $\d^{T+1}=\frac{T}{T+1}\d^T+\frac{1}{T+1}h_s$, where $h_s(a_i)=\int\mathbb{1}(s(\type)=a_i)f(\type)d\type$. By definition $h_s\in\Psi(\d^T)$ and $\Psi(\d^T)\subset U_2$, thus $h_s\in U_2$. As argued above, $d(\d^{T+1},U_2)+c\frac{1}{T+1}<d(\d^T,U_2)$. Since $\sum_t\frac{1}{t}=\infty$ and in FP iteration $t$ the distance between $\d^t$ and $U_2$ decreases by at least $c\frac{1}{t+1}$, there exists $t>T$ such that $\d^t\not\in B_\epsilon(\d^*)$. This contradicts the convergence of $\d^t$ to $\d^*$. Therefore, $\d^*\in\Psi(\d^*)$.


Since $\d^*\in\Psi(\d^*)$, there exists a selection function $\s^*:\types \rightarrow A$ such that almost everywhere $\ut(\type,\s^*(\type),\d^*)\geq\ut(\type,a_j,\d^*)\ \forall a_j \in A$, and $\d^*(a_i)=\d_{\s^*}(a_i)=\int\mathbb{1}(\s^*(\type)=a_i)f(\type)d\type$. In other words, $\s^*$ is a best-response strategy to the action distribution it produces, and hence an equilibrium strategy.
\end{proof}
\begin{corollary}
If the best response $\s^*$ to $\d^*$ is unique, then it is an equilibrium strategy.
\end{corollary}
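To make the belief update used in the proof concrete, the following is a minimal numerical sketch of the FP averaging step $\d^{t+1}=\frac{t}{t+1}\d^t+\frac{1}{t+1}h_s$; the best-response oracle \texttt{br\_indicator} and the quadrature grid approximating $f$ are hypothetical stand-ins, not part of our algorithm:

```python
import numpy as np

def fp_update(h, t, br_indicator, types, weights):
    """One fictitious-play belief update:
    d^{t+1} = t/(t+1) * d^t + 1/(t+1) * h_s, where
    h_s(a_i) = integral of 1[s(theta) = a_i] f(theta) dtheta.

    h              : current action distribution d^t (length-m sequence)
    t              : iteration counter
    br_indicator   : hypothetical oracle mapping a type theta to the
                     index of a best-response action against h
    types, weights : quadrature nodes and weights approximating the
                     type density f (weights sum to one)
    """
    h_s = np.zeros(len(h))
    for theta, w in zip(types, weights):
        # mass of types whose best response against h is action a_i
        h_s[br_indicator(theta)] += w
    return (t * np.asarray(h) + h_s) / (t + 1)
```

Note that the weight $\frac{1}{t+1}$ of the new best-response distribution shrinks with $t$, which is exactly why the proof needs the divergence of $\sum_t \frac{1}{t}$ to accumulate a non-negligible drift.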


Equilibrium properties of fictitious play apply only to asymptotically converged beliefs. In numerical simulations, we are limited to a finite number of iterations and must deal with approximate convergence. A natural convergence measure is to declare convergence once the (Euclidean) distance between $\d^{t}$ and $\d^{t+1}$ falls below some {\em convergence error}.
This, however, is not a reliable measure of convergence, as there is no guarantee that the distance between consecutive beliefs does not exceed the convergence error again in later iterations.

To avoid the problem of identifying convergence in beliefs within a finite number of iterations, we can instead check at each iteration whether an $\epsilon$-equilibrium has been reached.\footnote{We discuss the appropriateness of this measure of convergence in Section~\ref{sec:convergence}.} This is done by constructing a strategy from the beliefs and checking whether that strategy is an $\epsilon$-equilibrium strategy. We provide a procedure for converting beliefs to a strategy in the next section for the case of linear utilities.
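The stopping test described above can be sketched as follows; all helper functions here are hypothetical stand-ins (\texttt{b2s} converts beliefs to a strategy, \texttt{best\_response} computes a best response to the beliefs, and \texttt{exp\_utility} evaluates the ex-ante expected utility of a strategy against the beliefs):

```python
def is_eps_equilibrium(h, b2s, best_response, exp_utility, eps):
    """Check whether the strategy recovered from beliefs h is an
    eps-equilibrium: no deviation (in particular, no best response)
    may gain more than eps in expected utility."""
    s = b2s(h)                 # strategy constructed from the beliefs
    s_star = best_response(h)  # most profitable deviation against h
    return exp_utility(s_star, h) - exp_utility(s, h) <= eps
```

Unlike the distance-between-beliefs test, a positive answer here certifies an $\epsilon$-equilibrium at the current iteration regardless of what later iterations do.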

%A downside of this convergence metric is that while the utility of an $\epsilon$-equilibrium strategy is close to the equilibrium one, the strategy itself may be arbitrarily far away from the true equilibrium. That is, we cannot in general guarantee $\epsilon$-approximation to the equilibrium in strategies, but only in utility. Nevertheless, in practice, we have found that $\epsilon$-approximation in strategies does occur. For instance, in the numerical results of our experiments with simultaneous auctions, in all of the settings we tried we observe convergence to an $\epsilon$-equilibrium both in strategy and in utility. In fact, for a small class of problems, we derived exact equilibrium analytically, and have formally confirmed that in these problems FP produces $\epsilon$-approximation in strategy.


%We address this issue for games with linear utilities studied in the rest of the paper. We demonstrate that convergence in utilities corresponds to convergence in strategies: i.e.,  an $\epsilon$-equilibrium strategy approaches the equilibrium strategy as $\epsilon \to 0$.



%The fictitious play procedure relies on best response (line 4) and an algorithm for computing a strategy from a converged action distribution (line 9). Note that in many cases the latter can be achieved by simply computing a best response and checking that this is indeed an equilibrium. However, this may not be sufficient in all cases. To this end, in Section~\ref{sec:linear} we discuss both an approach to compute the best response and generate the strategy from the beliefs, for a setting where utility functions are linear in the player's type.  



\section{Applying FP to G-FACTs with Linear Utilities}
\label{sec:linu}
\noindent 
In this section we instantiate the \br\ and \btos\ algorithms for a particular setting where a player's {\it expected} utility, $\uet(\type,\s(\type),\d_{\s})$, is linear in his type, $\type$. Note that our \fp\ algorithm from Figure~\ref{fig:fp} does not rely on linearity, and other procedures can be developed for non-linear settings. However, linearity is common in many games: in particular, it is inherent in all single-parameter models where the type of an agent denotes the value the agent receives in a ``winning set of outcomes'' (e.g., when the agent wins the item in an auction or when a public project is undertaken).\footnote{Note that, since the expected utility (see Equation~\ref{eq:expected}) is a linear combination of the individual utilities $\ut(\type,a)$, the expected utility $\uet(\type,\s(\type),\d_{\s})$ is linear in $\theta$ if the individual utilities are linear. Therefore, when we say that utilities are linear, this also means that the {\it expected} utility is linear.\label{fn:linear}} This can be seen in Example~\ref{example:fpa} from Section~\ref{sec:model} (see Equation~\ref{eq:fpa}), where the expected utility is linear in $\theta$. This covers not only all one-shot single-item auctions (e.g., first-price, second-price and all-pay auctions, see~\cite{krishnabook}), but also the simultaneous auctions studied in Section~\ref{sec:simauctions}.


In the following, we start by making a few observations about the structure of a best response when utilities are linear and provide an algorithm for finding it. We then use these results to construct an algorithm for converting converged beliefs to a pure-strategy equilibrium. Together with a convergence metric described below, these algorithms instantiate our \fp\ algorithm for games with linear utilities.

\subsection{Best Response}
\label{sec:br}
\noindent %Understanding the functional structure (e.g. continuity and convexity in its arguments) of a best response is important for both analytical and computational reasons. By definition, an equilibrium strategy is a best response to the equilibrium play of the other agents, so in characterising a best response, we partially characterise equilibrium. Numerically, fictitious play relies on a best-response computation in each iteration.
%Next, we discuss the structure of a best response and describe an algorithm for finding it.
When (expected) utilities are linear in $\type$ for a given $a_i$ and $\d$, we refer to the expected utility functions $\uet(\cdot,a_i,\d)$ as {\em utility lines}, and these functions are of the form:
\begin{equation}
\uet(\type,a_i,\d_{\s})=\theta \cdot  \text{slope}(a_i,\d_{\s}) + \text{intercept}(a_i,\d_{\s}),
\label{def:utilityline}
\end{equation}
\noindent
where the slope and y-intercept are constant for a given action $a_i$ and action distribution $\d_{\s}$. In the following, let $L=\{\uet(\type,a_i,\d_{\s})\}_{a_i \in A}$ denote the set of all utility lines. Each utility line can be represented by its slope and intercept, and so we will sometimes use $L=\{\sigma_i,\iota_i\}_{i \in \{1,\ldots,m\}}$, where $\sigma_i$ and $\iota_i$ are the slope and intercept associated with action $a_i$.   
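For concreteness, consider an illustrative sketch in the spirit of Example~\ref{example:fpa}: suppose action $a_i$ corresponds to placing a bid $b_i$ in a first-price auction, and let $q_i$ denote the probability of winning with this bid given the action distribution $\d_{\s}$. Then
$$\uet(\type,a_i,\d_{\s})=(\type-b_i)\,q_i=\type\cdot q_i-b_i q_i,$$
so that $\text{slope}(a_i,\d_{\s})=q_i$ and $\text{intercept}(a_i,\d_{\s})=-b_i q_i$.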

\begin{figure}
\begin{center}
\includegraphics[width=8cm]{../../results/GRAPHS/envlinear}
\end{center}
\caption{Utility lines for actions $A = \{a_1,a_2,a_3,a_4\}$ given action distribution $h$. The bold intervals form the best response $(A',\i)$, where $A' = \{\a_1,\a_3,\a_4\}$ and $\i = (c_1,c_2)$.\label{fig:envelope}}
\end{figure}

Now, in general, the best response corresponds to the actions associated with the {\em upper envelope} of the utility functions in $L$. Formally, the upper envelope is given by $u^*(\theta)=\max_{a_i\in A} \uet(\type,a_i,\d)$. In the case of linear utility functions, the upper envelope is a piecewise-linear function, where each line segment corresponds to a particular utility line (and each utility line corresponds to a particular action). Furthermore, the upper envelope is always convex (to see this, note that the pointwise maximum of any collection of linear functions is convex).
\begin{observation}
\label{obs:convex}
In the case of linear utility functions, the upper envelope is piece-wise linear and convex. 
\end{observation}

An example of a best response is shown in Figure~\ref{fig:envelope}. More formally, the upper envelope can be represented as a partition of the type space $[0,1]$ into intervals, each labelled with its utility line and corresponding action. Let $\i\in [0,1]^{m'-1}$, with $0 < \i_1 < \i_2 < \cdots < \i_{m'-1} < 1$, denote a partition into $m'$ intervals $[0,\i_{1}],\ [\i_{1},\i_{2}],\ [\i_{2},\i_{3}],\ldots,\ [\i_{m'-1},1]$, and let $A'$ denote the set of actions $\{a'_1,\ldots,a'_{m'}\} \subseteq A$, where $a'_j \in A$ is the best-response action on the interval $[\i_{j-1},\i_j]$. Note that each action $a'_j \in A'$ maps to an action $a_i \in A$, but the indexing is different. Similarly, let $L'=\{\uet(\type,a'_i,\d_{\s})\}_{a'_i \in A'}=\{\sigma'_i,\iota'_i\}_{i \in \{1,\ldots,{m'}\}} \subseteq L$ denote the corresponding utility lines. Due to Observation~\ref{obs:convex}, $\sigma'_i \geq \sigma'_j$ whenever $i>j$. In Figure~\ref{fig:envelope}, the upper envelope is given by the triple $(L',A',\i)$, where $A' = \{\a'_1,\a'_2,\a'_3\} = \{\a_1,\a_3,\a_4\}$ and $\i = (c_1,c_2)$. The pair $(A',\i)$ then describes the corresponding best response.
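A strategy in this interval representation can be evaluated by a simple lookup; the following minimal Python sketch (the function name is hypothetical) returns the action played by a given type:

```python
import bisect

def play(theta, actions, cuts):
    """Evaluate a strategy in interval representation (A', i):
    actions[k] is played on the k-th interval of [0, 1], and cuts
    holds the interior boundaries 0 < i_1 < ... < i_{m'-1} < 1.
    A type lying exactly on a boundary is assigned to the left
    interval (either adjacent action is a best response there)."""
    # bisect_left counts the cut points strictly below theta
    return actions[bisect.bisect_left(cuts, theta)]
```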

\label{ulalg}
We are now ready to introduce the algorithms needed to compute the best response. The first step is to compute the utility lines, which depends on the rules of the game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$. We assume an algorithm for doing this is available (see Figure~\ref{fig:ul}), but cannot provide a specific algorithm since this depends on the details of the problem domain. We will, however, instantiate the algorithm for the simultaneous auctions setting in Section~\ref{sec:simauctions} (in which case the expected utility is given by Equation~\ref{eq:ulsimauc}).
\begin{figure}[ht]
{\center
\begin{tabular}{|l |} \hline \parbox{3.2 in} {
\begin{tabbing}
\textbf{Algorithm \ul} \\
%\textbf{Input:} \:\:\: \= utility lines $\{\uet(\type;\a_i,\d)\}_{\a_i \in A}$\\
\textbf{Input:} \:\:\: \= game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$, distribution of actions $\d$\\
\textbf{Output:} \> utility lines $L=\{\sigma_i,\iota_i\}_{i \in \{1,\ldots,m\}}$\\ %, sorted actions $A$\\
\\
1: \quad \= \textbf{for} \= $i=1$ to $m$\\
2:   \> Given game $\Gamma$, calculate the slope and intercept:\\
\>    \>   $\sigma_i = \text{slope}(a_i,\d)$\\
\>    \>  $\iota_i = \text{inter}(a_i,\d)$\\
3:  \> {\bf return} $\{\sigma_i, \iota_i \}_{i \in \{1,\ldots,m\}}$
\end{tabbing}
} \\ 
\hline \end{tabular} \\}
%%
\caption{Generating utility lines for a game $\Gamma$ given a distribution of actions $\d$, where a utility line is defined by its slope and intercept.\label{fig:ul}}
\end{figure}



Given an algorithm for computing the utility lines, Figure~\ref{fig:brlinear} presents an algorithm for computing the best response, which proceeds as follows.
%We are only interested in the best response on the value interval $[0,1]$.
First, we generate a utility line for each action $a \in A$. Then, all lines are sorted according to their slope (line 2). For ease of exposition, we slightly abuse notation and refer to $\sigma_1$ as the lowest slope, followed by $\sigma_2$, etc. Similarly, the action with the lowest corresponding slope is referred to as $a_1$, followed by $a_2$, etc. The best response at $\type=0$ is selected in line 3. Note that this simply requires finding the utility line with the highest intercept (since $\uet(0,\a_i,\d)=\iota_i$). This utility line forms the initial upper envelope $(L',A',\i)$. Now, due to the convexity of the upper envelope (Observation~\ref{obs:convex}), line segments at $\type > 0$ must have slopes of at least $\sigma_{i}$, which means that we can disregard all utility lines with index $j < i$: each such line has both a lower slope and a (weakly) lower intercept, and so lies below line $i$ everywhere on $[0,1]$. Hence, the for loop at line 5 starts with $j=i+1$.
In each iteration of the main loop (line 5), we consider whether to include the $j^{th}$ utility line in the current envelope $(L',A',\i)$, possibly replacing one or more previously added lines. Since the lines are considered in order of increasing slope, there are only two possible cases: the new line either lies entirely below the current envelope, in which case it has no effect on the upper envelope and can be disregarded; or it intersects the envelope at a unique point $x \in (0,1)$. Note that it cannot lie entirely above the envelope, since it lies below the envelope at $\type=0$ (otherwise $\iota_{j} > \iota_{i}$, contradicting the maximisation in line 3), and it cannot cross the envelope $(L',A',\i)$ at more than one point (its slope is higher than the slopes of all lines in $L'$, so once it crosses the envelope, it increases faster than any line in $L'$ and never crosses it again).
 
Whenever the $j^{th}$ utility line crosses the current envelope, the envelope is updated as follows. First, due to Observation~\ref{obs:convex}, and since $\sigma_{j}$ is higher than any existing slope in $L'$, the new line segment necessarily appears at the end of the envelope. Therefore, given the intersection point $x$, we can remove any utility lines in the current envelope that appear after $x$. These are the lines with index $z>k$, and they are removed in line 5.1. Then, a new line segment is appended to the end of the envelope, which intersects the $k^{th}$ line at point $x$ and provides the best response for $\type \in [x,1]$. Note that, since the utility lines are considered in order of increasing slope and new lines are always appended at the end, the resulting upper envelope is convex (as it should be).
         



\begin{figure}[ht]
{\center
\begin{tabular}{|l |} \hline \parbox{3.2 in} {
\begin{tabbing}
\textbf{Algorithm \br} \\
%\textbf{Input:} \:\:\: \= utility lines $\{\uet(\type,\a_i,\d)\}_{\a_i \in A}$\\
%\>  actions $A$ sorted according to the slope of their utility lines\\
\textbf{Input:} \:\:\: \= game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$\\
\> distribution of actions $\d$\\	
\textbf{Output:} \> best response $\s:\types\rightarrow A$ represented by $(A',\i)$\\
\\
%1:\quad \= sort actions according to the increasing slope of their utility lines\\
%\> $a_1$ refers to the action with the highest slope, followed by $a_2$, etc.\\
%1: \quad \= generate utility lines $\{\uet(\type,\a_i,\d)\}_{\a_i \in A} = \ul(\Gamma,\d)$\\
1: \quad \= generate utility lines $L=\{\sigma_i,\iota_i\}_{i \in \{1,\ldots,m\}} = \ul(\Gamma,\d)$\\
%2:\> let $\pi$ denote the permutation which sorts the utility lines according to their slope \\
%\> i.e. find $\pi$ such that: $\sigma_{\pi(1)} \le \sigma_{\pi(2)} \le \ldots$\\
%\> ($a_\pi(1)$ refers to the action with the highest slope, followed by $a_2$, etc.)\\
2:\> sort the utility lines in increasing order of slope \\
\> let $a_i$ denote the action with the $i^{th}$ lowest slope: $\sigma_{1} \le \sigma_{2} \le \ldots$\\
3: \>  find the index of the utility line that maximises utility at $\type=0$: \\
	\> $i = \arg\max_{j \in \{1,\ldots,m\}} \uet(0,a_{j},\d) = \arg\max_{j \in \{1,\ldots,m\}}  \iota_{j}$\\
4: \>  let $\sigma'_1=\sigma_{i}$, $\iota'_1=\iota_{i}$, and $\a'_1=\a_{i}$ and define the initial 
envelope as:\\
\> $L'=\{\sigma'_1,\iota'_1\}$, $A'= \{\a'_1\}$ and $\i=()$\\
%\> (the envelope is defined on $\type \in [0,1]$)\\
5: \>  \textbf{for} \= $j=i+1$ to $m$\\
%\>  \>  {\bf if} \= $\uet(\type,\a_{\pi(j)},\d)$ intersects the envelope $(A',\i)$ at $x \in (0,1)$\\
\>  \>  {\bf if}\ \ \= $\uet(\theta,a_{j},\d) = \theta  \sigma_{j} + \iota_{j}$ intersects the envelope $(L',A',\i)$ at $x \in (0,1)$\\
%\>  \>  \> upd\= ate   the envelope to make $a_{\pi(j)}$ the best-response for $\type \in [x,1]$\\
%\>  \>  \>  \>  (using the notation $A' = \{a'_1,\ldots,a'_{m'}\}$ and $\i = (\i'_1,\ldots,\i_{m'-1})$)\\
\>  \>  \> \= let $k \in \{1,\ldots,|A'|\}$ denote the index of the intersected utility line in $L'$:\\
\> \> \> i.e., where $x \in [\i_{k-1},\i_{k}]$, with $\i_0=0$ and $\i_{|A'|}=1$\\
\> \> \> update the envelope $(L',A',\i)$ as follows:\\
5.1:  \> \> \> \= remove all utility lines with index $z > k$ from $(L',A',\i)$\\ 
5.2: \> \> \> append utility line with index $j$ at position $k+1$,\\
\> \> \> i.e. $\sigma'_{k+1}=\sigma_{j}$, $\iota'_{k+1}=\iota_{j}$, $a'_{k+1}=a_{j}$, and $\i_{k}=x$\\ 
%\>  \>  \>  \>  $A' = \{a'_1,\ldots,a'_{k+1}, a_{j}\}$ and $\i = (\i_1,\ldots,\i_k,x)$\\
%\>  \>  \> add $x$ at the end of $\i$ and $\a_j$ at the end of $A'$\\
6: \> {\bf return} $(A',\i)$
\end{tabbing}
} \\ \hline \end{tabular} \\}
%%
\caption{An algorithm for computing best response when agents' utilities are linear in a single-parameter type $\type \in [0,1]$.\label{fig:brlinear}}
\end{figure}


Ignoring the computation of the utility lines, the runtime of the \br\ algorithm is dominated by line 5. Note that finding the intersection point and the corresponding line segment in $L'$ (if any) requires looping through all lines in $L'$, of which there are at most $m-1$. Moreover, the for loop at line 5 has at most $m-1$ iterations. Therefore, the worst-case runtime is of order $O(m^2)$. We note that the upper envelope can be computed in $O(m\log m)$ time (see, e.g.,~\cite{hershberger1989finding}). However, we opt for a simpler implementation since it is efficient in practice ($m'$ is typically much smaller than $m$) and (as discussed in Section~\ref{sec:convergence}) the total runtime of one iteration of our algorithm is likely to be dominated by the computation of the individual utility lines.
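As an illustration, the envelope construction of Figure~\ref{fig:brlinear} can be sketched in a few lines of Python. This is a simplified sketch: for brevity it assumes all slopes are distinct, and the utility lines are given directly as (slope, intercept) pairs rather than computed by \ul; dominated segments are removed by walking the envelope back from the right, which plays the role of lines 5.1 and 5.2.

```python
def best_response_envelope(lines):
    """Upper envelope of utility lines on theta in [0, 1].

    lines : list of (slope, intercept) pairs, one per action,
            with pairwise distinct slopes.
    Returns (env, cuts): indices (into lines) of the best-response
    actions in order of increasing slope, and the interior cut points.
    """
    # line 2: consider lines in increasing order of slope
    order = sorted(range(len(lines)), key=lambda i: lines[i][0])
    # line 3: the envelope at theta = 0 is the line of maximal intercept
    start = max(range(len(order)), key=lambda k: lines[order[k]][1])
    env, cuts = [order[start]], []
    for j in order[start + 1:]:
        sj, tj = lines[j]
        while True:
            sk, tk = lines[env[-1]]
            x = (tk - tj) / (sj - sk)  # crossing with the last segment
            if not cuts or x > cuts[-1]:
                break
            # last segment is dominated on its whole interval: drop it
            env.pop(); cuts.pop()
        if x < 1.0:  # new final segment, best response on [x, 1]
            env.append(j); cuts.append(x)
    return env, cuts
```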


To apply our method, one needs to be able to compute utility lines. Since there are $m$ actions, the utility lines are represented by $m$ slopes and intercepts. This is independent of other game parameters, such as the number of players.\footnote{This is largely due to symmetry, but even in the asymmetric case (discussed in \ref{sec:asym}), the number of utility lines is $n \cdot m$ and so scales linearly in the number of players.} The computation of utility lines is specific to the particular domain, and can become a bottleneck. In practice, however, it is often possible to reduce the cost of computing a utility line by using a compact representation of the game (such as action-graph games~\cite{jiang_leyton-brown_2010}), although we cannot provide a general analysis. We do provide the details of how to efficiently compute utility lines for the domain we study in Section~\ref{sec:simauctions}; there, computation is dominated by the domain-specific tie-breaking rule (see \ref{app:222}).




\subsection{Converting Fictitious Play Beliefs to a Pure Strategy}
\label{sec:b2s}
\noindent Although Theorem~\ref{exist_eq_con_bel} shows that an equilibrium strategy can be recovered from the limit of fictitious play beliefs, it does not provide an explicit algorithm for doing so, but rather assumes that such a procedure exists. From a theoretical point of view this assumption is valid, since the necessary {\em purification} procedures are guaranteed to exist (see, e.g.,~\cite{radner_rosenthal_82}). However, the mere existence of such a procedure is insufficient for a practical implementation. For this reason, in this section we provide an explicit purification algorithm, \btos\ (see Figure~\ref{fig:beliefsToStrategy}), for G-FACTs with type-linear utilities. In addition, based on the insights of this procedure, we point out in~\ref{sec:extensions} the steps necessary to generalise it to non-linear utility functions.

\begin{figure}[ht]
%{\scriptsize
{\center
\begin{tabular}{|l |} \hline \parbox{3.2 in} {
\begin{tabbing}
\textbf{Algorithm \btos} \\
\textbf{Input:} \:\:\: \= game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$\\
\> distribution of actions $\d$\\	
%\>  actions $A$ sorted according to the slope of their utility lines\\
%,utility lines $\{\uet(\type,\a_i,\d)\}_{\a_i \in A}$\\
%\textbf{Input:} \:\:\: \= distribution of actions $\d$, set of actions $A$\\
\textbf{Output:} \> equilibrium strategy $(A',\i)$\\
\\
1:\quad\=  gather actions played with positive probability $\hat{A} = \{\a_i \in A \mid \d(\a_i) > 0\}$\\
2: \> generate utility lines for actions $\hat{A}$\\ 
\> $L=\{\sigma_i,\iota_i\}_{i \in \{1,\ldots,|\hat{A}|\}} = \ul(\langle N,\hat{A},u(\cdot),\types,F(\cdot) \rangle,\d)$\\
%2:\quad \= generate utility lines $\{\uet(\type,\a'_i,\d)\}_{\a'_i \in A'}$ and sort them according to increasing slope \\
%\> ($a'_1,\ldots, a'_{m'}$ refer to the actions in the sorted set $A'$)\\
%3: \> \quad \= sort actions in $A'$ according to their slope under $\d$\\
% \> \> (actions $a'_1,\ldots, a'_{m'-1}$ refer to the sorted set $A'$)\\
3: \> sort the utility lines in increasing order of slope \\
\> let $a'_i$ denote the action with the $i^{th}$ lowest slope: $\sigma_{1} \le \sigma_{2} \le \ldots$\\
\> and define the ordered set $A' = (a'_1,\ldots,a'_{|\hat{A}|})$\\
4: \> gen\= erate the strategy that produces action distribution $\d$\\
\>\> define $\i=(\i_1,\ldots,\i_{|A'|-1})\in R^{|A'|-1} \mid \i_j = \sum_{i=1}^j \d(\a'_i) \quad \forall 1 \le j\le |A'|-1$\\
5:  \> {\bf return} $(A',\i)$
\end{tabbing}
} \\ \hline \end{tabular} \\}
\caption{\label{fig:beliefsToStrategy}An algorithm for converting a distribution of actions $\d$ into a pure strategy when agents' utilities are linear in a single-parameter type $\type \in [0,1]$.}
\end{figure}
The \btos\ algorithm in Figure~\ref{fig:beliefsToStrategy} constructs a strategy in which action $a_i$ is played with probability $\d(a_i)$. Actions are sorted in ascending order of the slopes of their utility lines; the action with the lowest slope, $a'_1$, is played by types $\type \in [0,\d(a'_1)]$, the next action, $a'_2$, by types $\type \in [\d(a'_1),\d(a'_1)+\d(a'_2)]$, and so on.
%Actions with zero probability are removed for ease of exposition. 
The algorithm has two important properties.
%First, for any distribution of actions $\d$ it computes a strategy $\s=\btos(\d)$, such that $\s$ generates $\d$, i.e., $\forall \a_i\in A,\ \d(\a_i) = \int_{\s^{-1}(\a_i)}f(x)dx$. 
First, for a converged distribution of actions $\d^*$, $\btos(\d^*)$ is an equilibrium. Second, if $\d$ is sufficiently close to a converged distribution, $\btos(\d)$ is an $\epsilon$-equilibrium strategy. These properties are the subject of the following two theorems. 
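As an illustration, the conversion performed by \btos\ can be sketched in Python. This is a sketch only: the slopes are assumed to be given as input, whereas in Figure~\ref{fig:beliefsToStrategy} they are produced by \ul.

```python
def beliefs_to_strategy(h, slopes):
    """Convert an action distribution h (h[i] = probability of
    action i) into a pure strategy on [0, 1], following b2s.

    slopes[i] is the slope of action i's utility line under h.
    Returns (actions, cuts): the support of h sorted by increasing
    slope, and the interior cut points, so that actions[k] is played
    on the k-th interval. With these cut points the strategy
    reproduces h exactly when types are uniform on [0, 1].
    """
    # line 1: keep only actions played with positive probability
    support = [i for i in range(len(h)) if h[i] > 0]
    # line 3: sort by increasing slope of the utility line
    support.sort(key=lambda i: slopes[i])
    # line 4: cut points are cumulative probabilities
    cuts, acc = [], 0.0
    for i in support[:-1]:
        acc += h[i]
        cuts.append(acc)
    return support, cuts
```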

\begin{theorem}\label{b2s_in_eq}
If fictitious play beliefs converge $\d^{t} \to \d^*$ as $t \to \infty$, then a pure-strategy equilibrium can be constructed using the algorithm in Figure~\ref{fig:beliefsToStrategy}.
\end{theorem}
\begin{proof}
%an equilibrium is a br to $\d^*$

As we have shown in Theorem~\ref{exist_eq_con_bel}, convergence of beliefs means that the distribution $\d^*$ is produced by an equilibrium strategy: i.e., there exists a best response $\s^*$ to $\d^*$ that generates $\d^*$ itself. We also know that the latter property holds for $\btos(\d^*)$, since it produces $\d^*$ by construction. Therefore, to conclude that $\btos(\d^*)$ is an equilibrium, it remains to show that $\btos(\d^*)$ is a best response to $\d^*$. To this end, we compare the strategy $\btos(\d^*)$ with an arbitrary best response $\s^*$ to $\d^*$, and show that $\btos(\d^*)$ either coincides with $\s^*$ completely or yields the same utility for all types, and is therefore also a best response to $\d^*$.

Now, recall that in the case of linear utilities, any best response can be expressed using the interval representation $(A',\i)$. In this representation, the intervals are listed in non-decreasing order of their respective utility line slopes (see Observation~\ref{obs:convex}). Furthermore, the length of the interval $[\i_{i-1},\i_i]$ is given by $\d^*(a'_i)$. But this is exactly the strategy produced by $\btos(\d^*)$. Hence, if the best response to $\d^*$ is unique, $\btos(\d^*)$ is necessarily this best-response strategy.


The only discrepancy between $\btos(\d^*)$ and a best response $\s^*$ to $\d^*$ may occur if some set of actions $A''\subset A'$ have the same utility line when evaluated at $\d^*$. In this situation, it is possible that $\btos(\d^*)$ and $\s^*$ choose different actions from $A''$ for some {\em disagreement} types. However, all these disagreement types belong, without loss of generality, to a single interval $K$ on which every action from $A''$ constitutes a best response. Since any action $a_i \in A''$ is a best response for types in $K$, $\btos(\d^*)$ is a best response for all types in $K$.

Furthermore, the actions in $A''$ are in a strict order (with respect to utility line slope) relative to the actions in $A'\setminus A''$. Hence, $\btos(\d^*)$ assigns actions in $A''$ to types in $K$, and only to types in $K$, and the same is true for any best response to $\d^*$. If the remaining structure of the best response $\s^*$, i.e. the use of actions in $A'\setminus A''$, is unique on the complement $\bar{K}$ of $K$, then this structure coincides with that of $\btos(\d^*)$ over $\bar{K}$. As a result, $\btos(\d^*)$ chooses best-response actions for all types, and is a best-response strategy to $\d^*$. If the best-response structure over $\bar{K}$ using $A'\setminus A''$ is not unique, we repeat the argument regarding disagreement types for $\bar{K}$ and $A'\setminus A''$. Since the set of actions is finite, only a finite number of such iterations is possible before the remaining best-response structure becomes unique. We conclude that even if $\btos(\d^*)$ differs from $\s^*$ in its choice of actions for some types, this does not reduce utility. Therefore, $\btos(\d^*)$ is a best response to $\d^*$.



Finally, since $\btos(\d^*)$ is a best response to $\d^*$ and generates $\d^*$, it is an equilibrium strategy.
\end{proof}
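For intuition, the construction behind $\btos$ can be sketched in a few lines of Python. This is an illustrative sketch only, assuming a uniform type distribution on $[0,1]$; the function and variable names are hypothetical, not taken from our implementation. Actions in the support of the beliefs are sorted by the slope of their utility lines (non-decreasing), and each action receives a consecutive interval of type space whose length equals its belief probability.

```python
def beliefs_to_strategy(beliefs, slope):
    """Sketch of the beliefs-to-strategy map for uniform types on [0,1]:
    sort supported actions by the slope of their utility line, then give
    each action a consecutive interval whose length is its probability."""
    support = [a for a, p in beliefs.items() if p > 0]
    support.sort(key=slope)  # non-decreasing slope order
    strategy, left = [], 0.0
    for a in support:
        right = left + beliefs[a]
        strategy.append((left, right, a))  # types in [left, right) play a
        left = right
    return strategy
```

Under a non-uniform type distribution $F$, interval lengths would be measured in probability mass under $F$ rather than Lebesgue length.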




\begin{theorem}\label{b2s_is_epsilonning}
Let $\{\d^t\}_{t=1}^{\infty}$ be a converging fictitious play sequence, 
$\d^t\rightarrow \d^*$. Denote by $\s^t$ the best response to $\d^t$ calculated during iteration $t$ of fictitious play. Then:

$|\ue(\btos(\d^t),\d^t)-\ue(\s^t,\d^t)|\rightarrow 0$.
\end{theorem}
\begin{proof}
Notice that, if there are multiple best responses to $\d^*$, their utility is necessarily the same.
Since the utility lines $\uet(\type,a_i,\d)$ are continuous in
$\type$ and $\d$, and $m$ is finite, their upper envelope is a continuous function of $\type$ and $\d$. Recall also that we have assumed the set of types to be compact and $f$ continuous; hence, $\ue(\s,\d)=\E_{\type\sim f}[\uet(\type,\s(\type),\d)]$ is also uniformly continuous in $\d$ for any (not necessarily best-response) strategy $\s$ (with a finite interval representation $(A,\i)$). As a result, for any strategy $\s$ and any $\delta>0$ there exists $T$ such that for all $t>T$, $|\ue(\s,\d^t)-\ue(\s,\d^*)|<\delta$. In particular, we can choose $T$ so that $|\ue(\btos(\d^t),\d^t)-\ue(\btos(\d^t),\d^*)|<\delta$. Furthermore, a similar result holds for $\ue(\s[\d],\d)$, where $\s[\d]$ is a functional that returns a best-response strategy (e.g., the one from Figure~\ref{fig:brlinear}), so that $|\ue(\s^t,\d^t)-\ue(\s^*,\d^*)|<\delta$.
To see this, recall that the utility of the best response is an upper envelope of a finite set of functions continuous in $\d$, and hence is itself continuous in $\d$. As a result, $\ue(\s[\d],\d)$ is uniformly continuous as a function of $\d$, and the necessary inequality follows.


Now, define the following correspondence: $$\bar{\Phi}(\d)=\{s:\forall\type,\ \uet(\type,s(\type),\d)\geq\uet(\type,a_j,\d)\ \forall a_j \in A\}.$$ Notice that, similarly to $\Phi$ and $\Psi$ from Theorem~\ref{exist_eq_con_bel}, this correspondence is non-empty, closed- and compact-valued, and upper hemicontinuous. We define the distance between two strategies as $d(s_1,s_2)=\int\mathbb{1}(s_1(\type)\neq s_2(\type))f(\type)d\type$. Then, for any positive $\delta$ that is less than the probability of every action in the support of a best response to $\d^t$ or $\d^*$, i.e.,
\begin{align*}
& \delta< \min\left(\min_{a'_i \in A'}\d^t(a'_i),\ \min_{a^*_i \in A^*}\d^*(a^*_i)\right),
\end{align*}
there exists $T$ such that for all $t>T$ and for any $s_1\in\bar{\Phi}(\d^t)$ and $s_2\in\bar{\Phi}(\d^*)$, the distance satisfies $d(s_1,s_2)<\delta$. In particular, there exists $\s^*\in\bar{\Phi}(\d^*)$ such that $d(\s^*,\s^t)<\delta$. Since both strategies are best responses and the distance between them is at most $\delta$, the order of actions in $(A',\i)$ must be the same as in $(A^*,\i)$. In particular, this means that $\btos(\d^t)$ and $\btos(\d^*)$ use the same order of actions in their interval structure. In fact, they differ only over a set of types of measure $d(\btos(\d^t),\btos(\d^*))<\delta$. Since utility is bounded, we have:
\begin{align*}
& |\ue(\btos(\d^t),\d^*)-\ue(\btos(\d^*),\d^*)|  = \\
& |\ue(\btos(\d^t),\d^*)-\ue(s^*,\d^*)|<c\delta,
\end{align*}
where $c>0$ is some constant. 
Aggregating all three bounds together we have:
\begin{eqnarray*}
&|\ue(\btos(\d^t),\d^t)-\ue(\btos(\d^t),\d^*)|<\delta\\
&|\ue(\btos(\d^t),\d^*)-\ue(\s^*,\d^*)|<c\delta\\
&|\ue(\s^t,\d^t)-\ue(\s^*,\d^*)|<\delta.
\end{eqnarray*}
Hence, we obtain that, for any $\delta$, there exists a $T$ so that for all $t>T$, the following holds:

$|\ue(\btos(\d^t),\d^t)-\ue(\s^t,\d^t)|<c'\delta$, for some finite $c'>0$. 
\end{proof}

Theorem~\ref{b2s_is_epsilonning} guarantees that if FP converges, then an $\epsilon$-Nash equilibrium is necessarily obtained at some finite iteration. Furthermore, the proof structure allows another practical simplification. Specifically, before applying \btos, we can simplify $\d^t$ by filtering out all actions that appear with numerically negligible probability (i.e., below the threshold $\delta$) and renormalizing.\footnote{A similar thresholding procedure was applied to mixed strategies by Ganzfried~\etal~\cite{ganzfried_sandholm_waugh_2012}.\label{fn:g-s-w}}
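This thresholding step can be sketched as follows (illustrative Python; the dictionary representation of beliefs and the function name are assumptions for exposition, not our implementation):

```python
def filter_beliefs(beliefs, delta):
    """Drop actions whose FP belief probability is at most delta,
    then renormalize the remaining probabilities to sum to one."""
    kept = {a: p for a, p in beliefs.items() if p > delta}
    total = sum(kept.values())
    return {a: p / total for a, p in kept.items()}
```

The threshold $\delta$ is chosen below the smallest probability of any action believed to be in the equilibrium support, so filtering removes only numerical noise.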

\section{Simultaneous Auctions}\label{sec:simauctions}
\noindent In previous sections we discussed an algorithm for finding equilibria in G-FACTs, and an implementation for linear utility functions. In the current section, we apply the algorithm to a setting where bidders participate in multiple simultaneous, single-sided, sealed-bid auctions.\footnote{We note that a variation of the algorithm has also been successfully applied to a more complex double auction setting in~\cite{ecs20991}. See Section~\ref{sec:conclusions} for more details.} Simultaneous auctions are a natural generalisation of single-item auctions when multiple items are available for sale from different sellers. 
As we discussed in Section~\ref{sec:related}, existing computational techniques cannot be applied to this setting due to continuous type spaces, while discretisation of the type space comes at the expense of computational feasibility. That is, settings with more than a few discrete types and bid levels are beyond the computational reach of most techniques.

The purpose of analysing this setting is two-fold. First, we demonstrate the efficacy of our algorithm for a complex setting where no analytical solution exists, and give convergence results.
Second, we demonstrate that our algorithmic technique can be used to contribute to the auction literature by providing an extensive empirical characterisation of the equilibrium bidding behaviour in simultaneous auctions. This empirical analysis is augmented in the next section, where we derive an analytical characterisation for a basic setting and show that the equilibria found for that setting match those that are found with our numerical approach. 

In particular, in our experiments we focus on simultaneous Vickrey (i.e., second-price) auctions. However, any other pricing (e.g., first-price or all-pay) could be chosen as this does not affect the algorithm (but affects the equilibrium strategies). The auctions are simultaneous in that a bidder needs to make a decision on how much to bid in each auction without knowing any of the outcomes (unlike sequential auctions where the winner of an auction is known before a bid is placed in another auction). For this setting, it has been shown in prior decision-theoretic work~\cite{GerdingJAIR2008} that, even though each individual auction is incentive compatible (bidding the true value for the item being auctioned is a dominant strategy), and even when the items are perfect substitutes (the bidder does not derive extra benefit from winning more than one item), a bidder is often better off bidding in multiple auctions and shading their bids, as opposed to choosing a single auction and bidding truthfully. Furthermore, in the case of substitutable\footnote{Two items are substitutable if the utility from winning both of them is less than the sum of the utilities for each individual item. Similarly, two items are {\em complementary} if the utility from winning both of them is more than the sum of the utilities for each individual item.} goods, the bidding strategies are typically non-monotonic in type, which makes finding the equilibrium bidding strategies a challenging task. 
In this section, we extend this work to a game-theoretic analysis in which all players can participate in all auctions, and the aim is to compute an equilibrium strategy. Here, we consider a wide range of combinatorial structures, including substitutes and complements. 

\subsection{Simultaneous Vickrey Auctions}
\label{sec:sva}
\noindent
We consider a setting with $k$ simultaneous sealed-bid single-item Vickrey auctions. The items sold in different auctions are heterogeneous.
The set of auctions (equivalently, items) is denoted by $K=\{1,\ldots,k\}$. The set of players $N$ corresponds to the bidders. Each bidder has a {\em single-dimensional} privately-known type $\type$, sampled i.i.d.\ from a c.d.f.\ $F$ with continuous support on $[0,1]$; $F$ is assumed to be common knowledge. The finite action space is given by a set of joint bids, defined as follows. Each auction has a finite set of admissible bid levels $B \subset \R_+$, and a bidder chooses a bid for each auction.\footnote{\label{fn2}We argue that having a finite set of bids is not necessarily restrictive in practice, since bids are often rounded to an appropriate level (e.g., to the nearest dollar for small bids, the nearest ten dollars for larger bids, etc.). In addition, the set of admissible bids can be further restricted by the auctioneer to increase seller revenue~\cite{mathews08,ecs11548}.} For simplicity, we furthermore assume that all auctions have the same bid levels and that these are equally spaced (both simplifications can be trivially relaxed, but we adopt them to reduce the number of parameters to consider). Thus, the action space is $\A=B^k$. Note that simultaneous auctions with discrete bids and continuous types are an instance of G-FACTs, since the action space is finite but the types are continuous. The only piece missing from a full specification of a Bayesian game is the utility function, which we define next.
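The action space $\A=B^k$ with equally spaced bid levels can be enumerated directly. The following Python sketch (function names are illustrative) constructs both $B$ and the joint-bid tuples:

```python
from itertools import product

def bid_levels(m):
    """m equally spaced bid levels between 0 and 1 (inclusive)."""
    return [i / (m - 1) for i in range(m)]

def joint_bids(levels, k):
    """Action space of k simultaneous auctions: all k-tuples of bid levels."""
    return list(product(levels, repeat=k))
```

For example, with 10 bid levels per auction and $k=2$, a bidder has $10^2=100$ joint bids to choose from.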


While the type of a bidder is single-dimensional, we assume that the bidders have combinatorial preferences: i.e., the items are heterogeneous and may range from perfect substitutes to perfect complements, and combinations thereof. This is achieved by a function $\phi: 2^{K} \to \R$, common to all bidders, which specifies a {\em complementarity structure} over the auctions. The {\em value} that a bidder with type $\type$ derives from winning a subset of items $\eta \subseteq K$ is given by the product $\phi(\eta) \type$. Notice that the relative values of bundles are the same across bidders. In essence, the type is a scaling parameter: if bidder 1 has type $x$ and bidder 2 has type $3x$, then bidder 2's value for each bundle is 3 times as high as bidder 1's. We acknowledge that single-dimensional types are more restrictive than multi-dimensional types that allow each bidder to have his own complementarity structure. Nevertheless, our restricted model is a good approximation for scenarios where items are likely to have a common complementarity structure (e.g., the bundle of left and right shoes is valuable, while each item in isolation is not).

As an example, consider the case of two auctions ($K=\{1,2\}$), which we study in detail in the rest of the paper. Let $\alpha= \phi(\{1\})$, $\beta=\phi(\{2\})$, and $\gamma = \phi(\{1,2\})$ denote the value from winning only the first auction, only the second auction, and both auctions, respectively. Then, having $\alpha=\beta=\gamma=1$ corresponds to a setting of perfect substitutes with free disposal. That is, a bidder does not gain from winning multiple items, but there is no cost either (not counting any additional payments from winning multiple auctions). Our computational and analytical techniques do not rely on the assumption of free disposal, and, for completeness, we consider complementarity structures where free disposal does not hold: i.e., values with $0 \le \gamma \le \min(\alpha,\beta)$. At the extreme, $\gamma=0$, winning both items results in zero value (this could be interpreted as the cost of disposing of the second item being equal to the independent value of the first item). Furthermore, $\alpha=\beta=0$ and $\gamma=1$ represents the case of perfect complements: a bidder only receives utility from winning both items. Finally, setting $\gamma=\alpha+\beta$ means that the items are independent.
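For the two-auction case, the mapping from $(\alpha,\beta,\gamma)$ to bundle values can be written out explicitly. The sketch below is illustrative Python with a hypothetical helper name:

```python
def bundle_value(theta, eta, alpha, beta, gamma):
    """Value phi(eta) * theta of winning the subset eta of K = {1, 2},
    with phi({1}) = alpha, phi({2}) = beta, phi({1, 2}) = gamma."""
    phi = {frozenset(): 0.0,
           frozenset({1}): alpha,
           frozenset({2}): beta,
           frozenset({1, 2}): gamma}
    return phi[frozenset(eta)] * theta
```

The special cases from the text correspond to, e.g., `alpha=beta=gamma=1` (perfect substitutes), `alpha=beta=0, gamma=1` (perfect complements), and `gamma=alpha+beta` (independent items).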

In addition, we can model auctions selling heterogeneous items. For example, $\beta=2\alpha$ and $2\alpha \le \gamma \le 3\alpha$ model the case when the item sold in the second auction is twice as valuable as the item from the first auction, and these items exhibit some degree of substitutability. Such preferences could arise, for example, when the same type of item is sold in two different quantities (e.g., 1-liter and 2-liter cartons of milk), but having both is more than a bidder typically needs.

The assumption of a common complementarity structure $\phi$ enables us to model combinatorial valuations while keeping bidder types single dimensional. Alternatively, true combinatorial valuations would endow each bidder with his own combinatorial structure, making each type $2^{|K|}$-dimensional. This more general model is left open for future work (our extension to multi-parameter domains in~\ref{sec:multtype} may be helpful). Thus, in the present work we focus on the common complementarity structure.


Given a complementarity structure $\phi$, the expected utility of a bidder from playing action $\a \in \A$ is:
\begin{align}
& \uet(\type,\a,\d)=\type \sum\limits_{\eta\subseteq  K} \phi(\eta) \probwin(\a,\eta,\d) - \cost(\a,\d) \label{eq:ulsimauc}
\end{align}
where:
\begin{itemize}
\item $\probwin(\a,\eta,\d)$ is the probability that playing an action (bids) $\a \in \A$  results in winning exactly the set $\eta$ of auctions, given the distribution of actions from the opponents, $\d$. In~\ref{app:222} we show how this probability is calculated (note that this is not trivial since we have discrete bids and therefore need to take into account a tie breaking rule).
\item $\cost(\a,\d)$ is the expected payment from placing action $\a$, given distribution $\d$. \ref{app:222} shows how this is calculated\label{lab1} (note that, as can be seen in the appendix, the expected payment is simply the sum of the expected payments for the individual auctions, and is therefore much easier to calculate, since it can be computed independently for each auction). 
\end{itemize}

Importantly, note that expected utility $\uet$ is linear in $\type$, which allows us to apply the algorithms from Section~\ref{sec:linu}. 
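Since $\uet$ is linear in $\type$, each action's expected utility is fully summarised by a slope and an intercept, which is exactly what the algorithms of Section~\ref{sec:linu} consume. The following is an illustrative Python sketch, where \texttt{prob\_win} and \texttt{cost} stand in for the calculations of~\ref{app:222} and the names are hypothetical:

```python
def utility_line(action, phi, prob_win, cost, beliefs):
    """Coefficients (slope, intercept) of the expected-utility line
    u(theta, action, beliefs) = slope * theta + intercept,
    per the expected-utility equation above."""
    # slope: sum over bundles of phi(eta) * P(action wins exactly eta)
    slope = sum(phi[eta] * prob_win(action, eta, beliefs) for eta in phi)
    # intercept: minus the expected payment, independent of theta
    intercept = -cost(action, beliefs)
    return slope, intercept
```

The best-response computation then reduces to taking the upper envelope of these lines over $\type\in[0,1]$.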








\subsection{Numerical Results}\label{empirical_results}
\noindent
In this section we present equilibrium results obtained by running the fictitious play algorithm described in Figures~\ref{fig:fp},~\ref{fig:ul},~\ref{fig:brlinear}, and~\ref{fig:beliefsToStrategy}. The algorithm in Figure~\ref{fig:ul} is instantiated using Equation~\ref{eq:ulsimauc}. The details of the tie-breaking rule appear in~\ref{app:222} (throughout the analysis we use the exact tie-breaking rule unless specified otherwise). In the following, we start with the experimental setup in Section~\ref{sec:setup}. Then, in Section~\ref{sec:convergence} we measure the empirical convergence of the algorithm to an $\epsilon$-Nash equilibrium. The actual equilibria obtained are first discussed with only 2 bid levels per auction, in Section~\ref{sec:hom} for homogeneous items and in Section~\ref{sec:het} for heterogeneous items. In Section~\ref{sec:multbids} these results are extended to more than 2 bid levels. 

\subsubsection{Experimental Setup}
\label{sec:setup}
\noindent
A game is specified by the number of auctions, the number of bidders, a set of possible bids, a complementarity structure, and a distribution of agents' types. 
In all of the experiments, we focus on 2 auctions, and a uniform distribution of types between 0 and 1. We begin the numerical investigation with the simplest possible setting: 2 bidders, 2 bid levels per auction, and complementarity structures where the individual value of each item is the same (i.e., the items sold at both auctions are identical). In this setting, we find an equilibrium for each degree of complementarity from substitutes to complements. The observed equilibria for this simple setting enable us to identify some properties that continue to hold in the more complicated setting we consider next: auctions with more than 2 bidders, and auctions selling different items. We then further expand the setting by considering more than 2 bid levels.

\begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Parameter & Value(s)\\
\hline
Number of auctions ($k$) & 2\\
Number of bidders ($n$) & 2,5,10\\
Number of bid levels per auction ($|B|$) & 2,5,10\\
Type probability distribution ($F$) & $Uniform([0,1])$\\
Complementarity structure ($\alpha$,$\beta$) & varies\\
Complementarity structure ($\gamma$) & 0, .05, .1, \ldots, 2.95, 3\\
Initialisation of FP beliefs & $random$\\
Number of runs for each setting & 30\\
Number of FP iterations per run & 5000\\
\hline
\end{tabular}
\caption{Experimental Settings}
\label{tab:settings}
\end{center}
\end{table}


An overview of various experimental settings is given in Table~\ref{tab:settings}. Although we tested with many other values as well, these are representative of the results that we obtained. The bid levels in $B$ are equally spaced between $0$ and $1$. This means that, if the number of bid levels is $2$, then $B=\{0,1\}$. On the other hand, if this is set to $5$, then $B=\{0,0.25,0.5,0.75,1.0\}$, etc. Recall that the number of bid levels is per auction.  This means that, for example, if this number is $10$, then the total number of actions for a bidder when there are 2 simultaneous auctions is $10^2=100$. Furthermore, $random$ initialisation of the FP beliefs means that the initial probability of each action is set randomly between $0$ and $1$, and then normalised so that the probabilities sum to one (note that this is different from having each action played with equal probability). These values are sampled anew for different runs of the same experiment. Therefore, the (only) difference between runs is the initial FP beliefs (since the algorithm itself is deterministic). The aim in having multiple runs is to see whether or not different initial beliefs result in different equilibrium strategies being computed. When multiple runs converge to the same $\epsilon$-equilibria, we are more confident that a true equilibrium has been identified as we describe next. We run each experiment 30 times to obtain statistically significant results based on $95\%$ confidence intervals. 
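The random initialisation of FP beliefs described above can be sketched as follows (illustrative Python; the seeding convention is an assumption for reproducibility, not part of our setup):

```python
import random

def random_initial_beliefs(actions, seed=0):
    """Random FP initialisation: draw a weight uniformly from [0, 1)
    for each action, then normalise so the probabilities sum to one.
    Note this differs from the uniform distribution over actions."""
    rng = random.Random(seed)
    weights = {a: rng.random() for a in actions}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}
```

Varying the seed across runs yields the different initial beliefs used to test whether independent runs reach the same equilibrium.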


\subsubsection{Convergence and Scalability}
\label{sec:convergence}
\noindent
In this section we empirically analyse to what extent the results converge, and the computational runtime required as we scale the number of bidders and bid levels. These results provide a useful insight into the practical applications of the algorithm.
In more detail, we measure convergence in terms of the size of $\epsilon$ in the $\epsilon$-Nash equilibrium (see Definition~\ref{def:epsilonnash} in Section~\ref{sec:model}).
Recall that the $\epsilon$ of a given strategy $\s$ is given by the difference between the utility obtained by playing a best response $s^*$ to $\d_{\s}$ and the utility from playing $\s$ when the action distribution is $\d_{\s}$:  $\epsilon(\s) = \ue(\s^*, \d_{\s}) - \ue(\s, \d_{\s})$. In particular, we would like to measure the $\epsilon$ of the strategy that can be constructed from the current FP beliefs, $\d^{t}$, using the \btos\ algorithm. Thus, we set $s=\btos(\d^{t})$ and $\d_{\s}=\d^{t}$. In addition, to obtain a unit-free measure of convergence so that we can compare different settings,\footnote{Equilibrium utility for different complementary structures could be very different. Thus, the same absolute difference may constitute 1\% of utility for one complementarity structure, and 200\% for another.\label{fn:error}} we use a standard approach to normalise the difference, resulting in the so-called {\it relative error}~\cite{abramowitz1964handbook}:
\begin{equation}
\label{eq:relerror}
error(t)=\frac{\ue(\s^*, \d^{t}) - \ue(\btos(\d^{t}), \d^{t})}{\ue(\s^*, \d^{t})}
\end{equation}
Note that the error is guaranteed to be between $0$ (the equilibrium) and $1$ (as far as a strategy can be from the equilibrium).
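The stopping test itself is a one-line computation (illustrative sketch; the utility values would come from the FP iteration):

```python
def relative_error(u_best_response, u_b2s):
    """Relative error of b2s(d^t) against a best response to d^t:
    (u(s*, d^t) - u(b2s(d^t), d^t)) / u(s*, d^t)."""
    return (u_best_response - u_b2s) / u_best_response
```

For example, a best-response utility of 2.0 against a b2s utility of 1.98 gives a relative error of 0.01, the threshold used in the scalability experiments below.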



The results using this measure appear in Figure~\ref{fig:conv}, which shows the percentage of runs that converge to a given error within a given number of iterations, for all settings described in Table~\ref{tab:settings} (each setting run 30 times), with $\alpha=.7$ and $\beta=1$ (the results are very similar for other values of $\alpha$ and $\beta$). This figure shows that virtually all runs of the algorithm converge to an $\epsilon$-equilibrium with a small error. Moreover, as the number of iterations increases, the percentage of runs that are within $\epsilon$ of the equilibrium keeps increasing. This indicates that, on average, once an $\epsilon$-equilibrium for a given $\epsilon$ is reached, running extra iterations does not lead to divergence. 

\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{../../results/GRAPHS/conv6_m2_nonid1}
\caption{Percentage of runs converged within a given error in simulations with $\alpha=.7$, $\beta=1$, and the parameters $n$, $|B|$, and  $\gamma$ taking each of the values described in Table~\ref{tab:settings}. For each combination of parameters, 30 simulations were run.}
\label{fig:conv}
\end{center}
\end{figure}

A potential weakness of the $\epsilon$-equilibrium concept is that, even though the gain from deviation may be very small, the $\epsilon$-equilibrium strategy may be arbitrarily far away from an exact equilibrium strategy (see, e.g.,~\cite{shoham-book}). We address this concern in two ways. First, we run the same settings starting from different initial beliefs. If the algorithm consistently converges to the same strategy,\footnote{As measured, for example, by a negligibly small Euclidean distance between the action distributions from different runs after a fixed number of iterations.} this increases our confidence that the true equilibrium is obtained (note that the converse is not true, since converging to different strategies could simply mean that there exist multiple equilibria). We found that, using our algorithm, all of the simulations for 2 bid levels (Sections~\ref{sec:hom} and~\ref{sec:het}), as well as the simulations with more bid levels for auctions selling weakly complementary items (i.e., $\gamma \ge \alpha + \beta$), converged in the latter stronger sense. 
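The run-to-run comparison mentioned in the footnote can be made concrete as follows (illustrative Python; our implementation may differ):

```python
import math

def belief_distance(d1, d2):
    """Euclidean distance between two empirical action distributions
    (beliefs), used to check whether independent runs converged to
    the same strategy."""
    actions = set(d1) | set(d2)
    return math.sqrt(sum((d1.get(a, 0.0) - d2.get(a, 0.0)) ** 2
                         for a in actions))
```

A negligibly small distance after a fixed number of iterations indicates that the runs have reached the same action distribution, and hence essentially the same strategy.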

Second, we compare the strategies with analytical results for settings where these can be derived. In particular, for a special case of 2 bidders, 2 auctions, and 2 bid levels, we are able to derive equilibria analytically (our derivation is discussed in Section~\ref{sec:222}). We see that the analytical results are identical to the equilibrium results obtained computationally in Section~\ref{sec:hom}. Thus, in this special case, $\epsilon$-equilibria obtained numerically are approximating exact pure-strategy equilibria. Although we do not have a theoretical proof of convergence in general, we note that the equilibria we obtain for variants of this special case (e.g., with more than 2 bidders or with heterogeneous items) follow the same structure, which we take as a reasonable evidence that these approximate equilibria are close (in terms of the strategy, not just the utility) to exact equilibria.

We now consider the amount of computation required, both in terms of the number of iterations and the time elapsed before convergence to an $\epsilon$-Nash equilibrium. In these experiments, we choose $error=0.01$ (i.e., the error is no more than one percent of the total utility), but the results show similar trends for other values. In particular, we are interested in how our algorithm scales for the simultaneous auctions setting with $k=2$ auctions as we increase the number of bidders and the number of discrete bid levels. Although each experiment is run 30 times with different initial beliefs, the results in this section show the average over the 15 runs with the lowest runtime. We do this because the results vary depending on the initial beliefs: while in most cases the results converge within a couple of hundred iterations, there are a few outliers which skew the results, taking much longer or failing to converge to the required error within the maximum number of iterations (set to 2500 for these particular experiments). We avoid these outliers by taking the top half of the runs. Furthermore, we argue that, in practice, it is possible to run a number of experiments in parallel and use whichever converges first, which would have the same effect. All of the experiments were run on a Linux cluster with 2.27~GHz Nehalem processors, and the simulation was implemented in Java. 

\begin{figure}
\centerline{
	\psfrag{pref1}{\footnotesize $\alpha=1.0,\gamma=1.0$}
	\psfrag{pref3}{\footnotesize $\alpha=1.0,\gamma=1.5$}
	\psfrag{pref5}{\footnotesize $\alpha=1.0,\gamma=2.0$}
	\psfrag{pref7}{\footnotesize $\alpha=1.0,\gamma=2.5$}
	\psfrag{pref9}{\footnotesize $\alpha=1.0,\gamma=3.0$}
	\psfrag{pref2}{\footnotesize $\alpha=0.7,\gamma=1.0$}
	\psfrag{pref4}{\footnotesize $\alpha=0.7,\gamma=1.7$}
	\psfrag{pref6}{\footnotesize $\alpha=0.7,\gamma=2.4$}
	\psfrag{pref8}{\footnotesize $\alpha=0.3,\gamma=1.0$}
	\psfrag{pref10}{\footnotesize $\alpha=0.3,\gamma=1.3$}
	\psfrag{pref12}{\footnotesize $\alpha=0.3,\gamma=1.6$}
  \includegraphics[scale=0.4]{../../results/GRAPHS/time3_d10.eps}
  \includegraphics[scale=0.4]{../../results/GRAPHS/time4_d10.eps}
}
\caption{Average number of iterations required to reach $error=0.01$ (left) and time per iteration (right) for different values of $n$ and different preference structures with $\beta=1$. Here, we use the exact tie breaking rule (note that the number of bidders does not affect the time per iteration when using the approximate tie breaking rule). The number of bid levels is set to $|B|=10$. Results are averaged over the 15 fastest of 30 runs. The error bars denote the 95\% confidence intervals.}
\label{fig:time_n}
\end{figure}

\begin{figure}
\centerline{
	\psfrag{pref1}{\footnotesize $\alpha=1.0,\gamma=1.0$}
	\psfrag{pref3}{\footnotesize $\alpha=1.0,\gamma=1.5$}
	\psfrag{pref5}{\footnotesize $\alpha=1.0,\gamma=2.0$}
	\psfrag{pref7}{\footnotesize $\alpha=1.0,\gamma=2.5$}
	\psfrag{pref9}{\footnotesize $\alpha=1.0,\gamma=3.0$}
	\psfrag{pref2}{\footnotesize $\alpha=0.7,\gamma=1.0$}
	\psfrag{pref4}{\footnotesize $\alpha=0.7,\gamma=1.7$}
	\psfrag{pref6}{\footnotesize $\alpha=0.7,\gamma=2.4$}
	\psfrag{pref8}{\footnotesize $\alpha=0.3,\gamma=1.0$}
	\psfrag{pref10}{\footnotesize $\alpha=0.3,\gamma=1.3$}
	\psfrag{pref12}{\footnotesize $\alpha=0.3,\gamma=1.6$}
  \includegraphics[scale=0.4]{../../results/GRAPHS/time3_n10.eps}
  \includegraphics[scale=0.4]{../../results/GRAPHS/time4_n10.eps}
}
\caption{Average number of iterations required to reach $error=0.01$ (left) and time per iteration (right) for different values of $|B|$ and different preference structures with $\beta=1$. The number of bidders is set to $n=10$, and the approximate tie breaking rule is used. Results are averaged over the 15 fastest of 30 runs. The error bars denote the 95\% confidence intervals.}
\label{fig:time_d}
\end{figure}


There are two factors that determine the total computation time: the number of iterations required and the computation time per iteration. The effect of the first factor can be seen in Figures~\ref{fig:time_n}(left) and~\ref{fig:time_d}(left), which show the average number of iterations required to reach the equilibrium as we increase the number of bidders and the number of bid levels per auction, respectively, for a variety of preference structures. The number of required iterations always increases with the number of bidders, but typically flattens out as we increase the number of bid levels. An increase in the number of iterations is indicative of the difficulty of the problem, suggesting that problems with more bidders are harder to solve, which seems intuitive. In most cases, however, the increase appears linear or even sublinear, and so has relatively little impact on the total computation time. Interestingly, the preference structure also has a large effect on the difficulty of the problem: generally, increasing the asymmetry between the two auctions increases the number of iterations required. On the other hand, increasing the number of bid levels merely increases the granularity and, for a given relative error, this has little effect on the number of iterations needed to converge. 

Whereas the algorithm scales relatively well in terms of the number of iterations for the simultaneous auctions domain, the picture is less promising when we consider the computation required per iteration. Here, the computation is mainly due to computing the utility lines (the \ul\ algorithm in Figure~\ref{fig:ul}, which requires finding the slope and intercept in Equation~\ref{eq:ulsimauc}), and computing the best response (i.e., the \br\ algorithm in Figure~\ref{fig:brlinear}).\footnote{\label{fn1}Note that the computation of the \btos\ algorithm is negligible compared to the other algorithms, since its main part consists of sorting the actions by slope. Furthermore, the \btos\ algorithm is only required to compute the relative error (Equation~\ref{eq:relerror}), and the strategy itself once the process has converged.} We first consider the effect of the number of bidders which, due to the tie breaking rule, affects the computation of the utility lines. As shown in \ref{app:222}, computing the exact probability of winning requires considering all possible numbers of ties in each auction. As a result, for $k=2$ auctions the computation scales as $O(n^3)$ in the number of bidders. The empirical results in Figure~\ref{fig:time_n}(right) are for the same settings as before, and show the average real time (in seconds) required to compute an iteration as we increase the number of bidders. As expected, the number of bidders has a large impact on the computation, but the preference structure does not. 

Clearly, we can do much better by using an approximation of the tie breaking rule; a simple approximation which scales well with the number of bidders is given in \ref{sec:approxtie}. Using this approximation, the increase in computation due to an increase in the number of bidders becomes negligible. Furthermore, we empirically measure the {\it additional} error (in terms of the $\epsilon$-Nash equilibrium) introduced by this approximation, by computing the best response both with and without the approximation and computing the error in both cases. From this comparison we find that the additional error decreases to zero as the number of bid levels goes to infinity, and is already very small for small numbers of bid levels. For example, the average additional error is less than $0.003$ when the number of bid levels is $20$. 
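To illustrate why exact fair tie breaking requires summing over all possible numbers of ties, consider a single auction in isolation (this is an illustrative calculation, not the exact formula of \ref{app:222} or the approximation of \ref{sec:approxtie}; the probabilities $q$ and $r$ below are hypothetical per-opponent quantities). If each of the $n-1$ opponents independently ties the agent's bid with probability $q$ or bids strictly below it with probability $r$, and ties are broken uniformly at random, then:

```python
from math import comb

def p_win_exact(n, q, r):
    """Probability of winning one auction with n bidders: each of the n-1
    opponents ties the agent's bid with probability q or bids strictly below
    with probability r (any opponent bidding above means a loss); a tie with
    t opponents is won with probability 1/(t+1)."""
    return sum(comb(n - 1, t) * q**t * r**(n - 1 - t) / (t + 1)
               for t in range(n))

def p_win_closed_form(n, q, r):
    # The sum above collapses to ((q + r)^n - r^n) / (n q), for q > 0.
    return ((q + r)**n - r**n) / (n * q)

print(p_win_exact(5, 0.2, 0.5))  # equals the closed form, 0.13682
```

For two simultaneous auctions, the analogous sum runs over the numbers of ties in both auctions jointly, which is what produces the $O(n^3)$ scaling noted above; the single-auction closed form illustrates the kind of simplification that an approximate tie breaking rule can exploit.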


In terms of the runtime as the number of bid levels increases: for $k=2$, the number of actions is equal to $|B|^2$, where $|B|$ is the number of bid levels, and so the time complexity of a single iteration is at least $O(|B|^2)$. Furthermore, as discussed in Section~\ref{sec:br}, the time complexity of finding the best response using our algorithm is $O((|B|^2)^2) = O(|B|^4)$. This is consistent with the empirical results in Figure~\ref{fig:time_d}(right), which show the time per iteration, using the approximate tie breaking rule, as the number of discrete bid levels increases.\footnote{We note that there is considerable scope for optimising the code, e.g., by using a more efficient algorithm for computing the upper envelope or by detecting and removing dominated utility lines (see Section~\ref{sec:br}).} As a result, for $k=2$, we can easily compute results for settings with $100$ discrete bid levels per auction.   
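Since the best response is determined by the upper envelope of the utility lines, a standalone envelope computation illustrates the kind of optimisation alluded to in the footnote. The sketch below (Python, for illustration; this is not the algorithm of Figure~\ref{fig:brlinear}) computes the envelope of lines $y = \textit{slope}\cdot\theta + \textit{intercept}$ over a type interval, after sorting by slope, in a single stack pass:

```python
def upper_envelope(lines, lo=0.0, hi=1.0):
    """Upper envelope of lines y = s*x + c over [lo, hi].
    `lines` is a list of (slope, intercept) pairs; returns a list of
    (line, start, end) pieces from left to right."""
    lines = sorted(lines)
    dedup = []  # for equal slopes, keep only the highest intercept
    for s, c in lines:
        if dedup and dedup[-1][0] == s:
            dedup[-1] = (s, max(dedup[-1][1], c))
        else:
            dedup.append((s, c))
    hull = []  # stack of ((slope, intercept), start_x), slopes increasing
    for s, c in dedup:
        while hull:
            (hs, hc), hstart = hull[-1]
            x = (hc - c) / (s - hs)  # where the new line overtakes the top
            if x <= hstart:
                hull.pop()  # top line never appears on the envelope
            else:
                break
        if hull:
            (hs, hc), _ = hull[-1]
            start = (hc - c) / (s - hs)
        else:
            start = lo
        if start < hi:  # lines that only dominate beyond hi are irrelevant
            hull.append(((s, c), start))
    pieces = []
    for i, (line, start) in enumerate(hull):
        end = hull[i + 1][1] if i + 1 < len(hull) else hi
        pieces.append((line, start, end))
    return pieces

for line, start, end in upper_envelope([(0.0, 1.0), (1.0, 0.5), (2.0, 0.0)]):
    print(line, start, end)
# (0.0, 1.0) holds on [0, 0.5], (2.0, 0.0) on [0.5, 1];
# the middle line (1.0, 0.5) never appears on the envelope.
```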


%All of the graphs in Sections~\ref{sec:het} and~\ref{sec:hom} (i.e., graphs for 2 bid levels) plot equilibria averaged over 30 runs. Each of the runs however produced virtually the same equilibrium: we tried plotting error bars, but they were too small to be visible.

%However, for many simulations with substitutable items with more than 3 bid levels, convergence was only in the weaker $\epsilon$-equilibrium sense.

%In all experiments, the algorithm converges with a probability of at least XXX to within YYY of an equilibrium. 

%VICTOR: DO NOT THINK THIS IS TRUE: WE'RE SEEING THIS BECAUSE OF AVERAGING OVER DIFFERENT SETTINGS. FOR SOME SETTINGS IT ALWAYS CONVERGED, FOR OTHERS IT NEVER DOES. OR NOT?? WHEND DOES IS NOT CONVERGE?
%Even for a very small $\epsilon$, an equilibrium is found with a significant probability after XXX iterations. Thus, we can find an $\epsilon$-equilibrium with high by restarting the procedure after with random initial beliefs after XXX iterations and iterating the process multiple times.









%\section{Analytical Results}\label{analytic_results}
%In this section we show that the structure of the envelope can be
%classified, leading to a set of polynomial systems of equations. Each
%system becomes solvable within a specific region of preference
%space. By bounding these regions we can break down the preference
%space into classes were a particular shape of equilibrium is
%obtained. Furthermore, the exact analytical expression can be obtained
%for the equilibrium parameterised by the preference.
%\newpage






\subsubsection{Equilibrium Results for Homogeneous Items}
\label{sec:hom}
\noindent
We first consider a simple setting where the items sold at each auction are identical, the set of bids is $B=\{0,1\}$, and there are only $n=2$ bidders. Given that the auctions are identical, we set $\alpha=\beta=1$, and vary the value of $\gamma$ as specified in Table~\ref{tab:settings} and explained in Section~\ref{sec:sva}. This value ranges from $\gamma=0$, which models extreme substitutes without free disposal, to $\gamma=3$, which corresponds to complements. In between lie perfect substitutes ($\gamma=1$) and independent auctions ($\gamma=\alpha+\beta=2$). In what follows, we analyse the results after $5000$ iterations of the fictitious play algorithm (this number was found to be sufficiently large for the experiments to converge to a very small error). 

\begin{figure}[ht!]
\begin{center}
\subfigure[bids] {
	\label{fig:twobidstrategy}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/stratm2_n2_d2_id_all_7}}
\subfigure[utility] {
	%\label{fig:ub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/utilm2_n2_d2_id_all_7}}
\caption{Equilibrium strategy and corresponding utility for $\gamma=1.4$ and $n=2$.}
\label{fig:stratutil}
\end{center}
\end{figure}


To illustrate the results, Figure~\ref{fig:stratutil} shows the strategy and corresponding utility lines generated by the \btos\ procedure at the end of a particular run for a setting with $\gamma=1.4$ (i.e., a representative value for which agents have substitutable preferences). The figure shows that, for this setting, all 4 possible actions are played with non-zero probability. Moreover, as expected, agents with higher types bid higher. Specifically, agents with a low type play $(0,0)$; agents with a very high type play $(1,1)$; and in-between types play either $(0,1)$ or $(1,0)$. This example exhibits several interesting trends. First, the slopes of the utility lines are increasing as expected, except for actions $(0,1)$ and $(1,0)$, whose slopes seem to be identical. Second, the actions $(0,1)$ and $(1,0)$ seem to be played with equal probability (note that the two type intervals are of equal size and types are uniformly distributed). In fact, however, the fictitious play beliefs assign {\it almost} equal probabilities to $(1,0)$ and $(0,1)$, and the slopes are {\it almost} identical. Furthermore, the slopes oscillate: if at iteration $t$ the action $(1,0)$ has a slightly higher slope, then at iteration $t+1$ the action $(0,1)$ has a higher slope. 
%We observe $\delta^t$ to decrease as $t$ grows.
This is because the best-response strategy also oscillates, and only one of the actions $(1,0)$ or $(0,1)$ is played with non-zero probability in best response, never both, and these two actions alternate. This illustrates why a special \btos\ procedure is needed to find an equilibrium and why simply taking the best response does not result in an equilibrium strategy. 

In more detail, this fluctuating behaviour mimics the FP dynamics in games of complete information such as matching pennies, where the best response of the mismatching player is heads whenever the opponent's probability of playing tails is above one half, and tails otherwise. There, the beliefs asymptotically approach equal probabilities of heads and tails, yielding an equilibrium strategy. Similarly, here the FP beliefs for actions $(1,0)$ and $(0,1)$ become increasingly similar as the number of iterations increases. However, in contrast to complete information games, where FP beliefs define a unique mixed strategy, in games of incomplete information FP beliefs do not correspond to a single strategy. Thus, we need to convert FP beliefs into a strategy that induces these beliefs and is approximately a best response to them. The \btos\ procedure accomplishes this goal, as we proved in Section~\ref{sec:b2s}. Furthermore, in Section~\ref{sec:222} we formally show that the strategy found by the \btos\ procedure corresponds to the analytically derived strategy for the case of $n=2$ bidders, 2 bid levels, and 2 auctions. In particular, in equilibrium the two actions $(1,0)$ and $(0,1)$ are always played with identical probability, as expected.\footnote{It is worth noting that, in the homogeneous case, there actually exists a continuum of equilibrium strategies. This is because the agents are indifferent between playing $(1,0)$ and $(0,1)$. For example, the strategy where the intervals for $(0,1)$ and $(1,0)$ in Figure~\ref{fig:stratutil} are swapped is also an equilibrium. Also, there exist equilibria with more intervals. However, all of these equilibria result in the same action distribution, and this action distribution is unique (see also Section~\ref{sec:222}).} 
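The matching-pennies analogy can be reproduced directly. The sketch below (Python, illustrative; the update scheme, initial counts, and names are ours) runs fictitious play with pure best responses and shows the empirical frequencies approaching $(1/2,1/2)$ even though the chosen action keeps alternating, just as the $(1,0)$/$(0,1)$ slopes do above:

```python
# Fictitious play on matching pennies: the matcher wants to match the
# opponent's coin, the mismatcher wants to differ. Each round, a player
# best-responds to the opponent's empirical action frequencies.
def best_response_matcher(p_heads_opp):
    return 'H' if p_heads_opp >= 0.5 else 'T'

def best_response_mismatcher(p_heads_opp):
    return 'T' if p_heads_opp >= 0.5 else 'H'

# Empirical action counts (with an arbitrary asymmetric initial belief).
counts = {'matcher': {'H': 1, 'T': 0}, 'mismatcher': {'H': 0, 'T': 1}}
for _ in range(10000):
    for player, br in (('matcher', best_response_matcher),
                       ('mismatcher', best_response_mismatcher)):
        opp = 'mismatcher' if player == 'matcher' else 'matcher'
        c = counts[opp]
        action = br(c['H'] / (c['H'] + c['T']))
        counts[player][action] += 1

freqs = {p: counts[p]['H'] / sum(counts[p].values()) for p in counts}
print(round(freqs['matcher'], 2), round(freqs['mismatcher'], 2))
# both frequencies are close to 0.5, although every single best
# response is a pure action that keeps switching between H and T
```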

\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
 value range & bid\\
\hline
$0 \le v \le h(0,0)$ & $(0,0)$\\
$h(0,0) \le v \le h(0,0)+ h(1,0)$ & $(1,0)$\\
$h(0,0)+ h(1,0) \le v \le h(0,0)+ h(1,0)+h(0,1)$ & $(0,1)$\\
$h(0,0)+h(1,0)+h(0,1) \le v \le 1$ & $(1,1)$\\
\hline
\end{tabular}
\end{center}
\caption{Strategy corresponding to beliefs $h$.\label{tbl:btos}}
\end{table}
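The mapping in Table~\ref{tbl:btos} is a cumulative-sum construction: with types uniform on $[0,1]$, each action is assigned an interval whose length equals its belief probability. A minimal sketch (Python, illustrative; the actual \btos\ procedure is the one analysed in Section~\ref{sec:b2s}):

```python
def beliefs_to_strategy(h, action_order):
    """Map beliefs h (an action distribution) to a threshold strategy:
    action_order[j] is played on [cum_{j-1}, cum_j], where cum_j is the
    cumulative probability of the first j actions (types uniform on [0,1])."""
    strategy, cum = [], 0.0
    for a in action_order:
        strategy.append((a, cum, cum + h[a]))
        cum += h[a]
    return strategy

# Hypothetical beliefs over the four actions of Table "Strategy
# corresponding to beliefs h", in the order used there.
h = {(0, 0): 0.4, (1, 0): 0.15, (0, 1): 0.15, (1, 1): 0.3}
strategy = beliefs_to_strategy(h, [(0, 0), (1, 0), (0, 1), (1, 1)])
for action, lo, hi in strategy:
    print(action, round(lo, 2), round(hi, 2))
# (0, 0) on [0, 0.4], (1, 0) on [0.4, 0.55],
# (0, 1) on [0.55, 0.7], (1, 1) on [0.7, 1.0]
```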


%In iterations with $h^t(1,0) < h^t(0,1)$, the \btos\ strategy bids $(1,0)$ for lower types and $(0,1)$ for higher types of the interval (this is the case shown in Table~\ref{tbl:btos}). The reverse happens in iterations with $h^t(1,0) > h^t(0,1)$. While the order may change in each iteration, both strategies approach $\epsilon$-equilibrium as $t$ increases. For ease of exposition, in the graphs that follow we ignore the order of these bids, and specify the interval on which either of the bids is played. In the corresponding strategy, we assume $(1,0)$ is played on the lower part of the interval as presented in Table~\ref{tbl:btos}. Given this, beliefs $h^t$ correspond to a unique strategy. 

%Notice that the bid $(0,0)$ is played by the lowest types and the bid $(1,1)$ is played by the highest types.

Next, we consider the equilibrium strategies for different values of $\gamma$. Action distributions can be plotted more concisely than equilibrium strategies, and the \btos\ procedure allows us to map each action distribution to a strategy.
%For ease of exposition, in the following we ignore the actual mapping of types of actions, and only illustrate the converged FP beliefs, i.e., the action distribution. 
To this end, Figure~\ref{fig:bidsm2_d2_n2_id} plots the action distribution for each value of the complementarity parameter $\gamma$ between $0$ and $3$. This figure (and the figures that follow) shows action distributions (i.e., FP beliefs) after $5000$ FP iterations, {\it averaged over $30$ runs}. We omit error bars because the confidence intervals are too small to be visible. This shows that, starting from different initial beliefs, the beliefs converge to the same action distribution. An equilibrium strategy can be recovered from the action distribution by applying the \btos\ procedure; the resulting strategy appears in Table~\ref{tbl:btos}.

\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n2_id}
\end{center}
\caption{Action distributions (i.e., FP beliefs after 5000 iterations) for auctions with $n=2$ bidders selling homogeneous items. \label{fig:bidsm2_d2_n2_id}}
\end{figure}

As can be seen in Figure~\ref{fig:bidsm2_d2_n2_id}, the action distributions appear to be continuous in the complementarity parameter $\gamma$. Furthermore, the values of $\gamma$ can be partitioned into three intervals, each corresponding to a different set of actions being played with non-zero probability. For low values of $\gamma$ (highly substitutable items), action $(1,1)$ is never played; in the mid-range, all 4 actions are played with non-zero probability; and in the case of complementarities, $(1,0)$ and $(0,1)$ are never played. We denote these intervals by $[0,\hat{\gamma}_1]$, $[\hat{\gamma}_1,2]$, and $[2,\infty]$, where $\hat{\gamma}_1$ is the lowest value of $\gamma$ for which the bid $(1,1)$ is played in equilibrium. As can be seen in Figure~\ref{fig:bidsm2_d2_n2_id}, for this particular setting $\hat{\gamma}_1 \approx 1.2$.\footnote{We derive the exact value in Section~\ref{sec:222}.} Furthermore, as soon as $\gamma$ reaches the value for additive valuations ($\gamma=\alpha+\beta=2$), the bids $(1,0)$ and $(0,1)$ are not played at all, as the agents try to avoid winning a single item. Interestingly, this is consistent with existing analytical results in the literature for continuous bids, whereby only equal-bid pairs are played for items that display complementarity~\cite{krishnabook} (when the auctions are identical).

%On the first interval, only the bids $(0,0)$ and $(1,0)$ are played; for intermediate values, all bids are played; and for $\gamma$ above 2, only the bids $(0,0)$ and $(1,1)$ are played. 


\begin{table}[ht!]
\begin{center}
\begin{tabular}{|l || c | c | c |c|}
\hline
 & $\gamma \in [0,\hat{\gamma}_1]$ & $\gamma \in [\hat{\gamma}_1,2]$ & $\gamma \in [2,\infty]$ & $\gamma = \infty$\\
\hline
$f(0,0)$ & decreases & increases & decreases & 0\\
$f(1,0)=f(0,1)$ & increases & decreases & 0 & 0\\
$f(1,1)$ & 0 & increases & increases & 1\\
\hline
\end{tabular}
\caption{Equilibrium analysis for homogeneous items and 2 bid levels.\label{tbl:eq}}
\end{center}
\end{table}

Table~\ref{tbl:eq} further analyses the strategy and shows that the equilibrium action distributions are monotone in $\gamma$ within each of the intervals (the values for $\gamma=\infty$ are based on simulations for large, but finite, values of $\gamma$). Furthermore, the probability of playing the ``highest'' possible bid of $(1,1)$ increases as the items become more complementary. In fact, we observe that, in the limit, the probability of bidding $(1,1)$ approaches 1 from below. However, for any finite $\gamma$, the bid $(0,0)$ is played in equilibrium by types that are small enough, resulting in a positive $h(0,0)$.



%The second and the third intervals are separated by the $\gamma$ value of 2 which corresponds to additive valuations. The additive valuations case separates substitutable items ($\gamma<2$) from the complementary items $\gamma>2$.  
In the remainder of this section, we show that our technique can be used to derive equilibria for more than 2 bidders. The results for 5 and 10 bidders are shown in Figure~\ref{fig:id510}. We observe that the pattern identified for 2 bidders continues to hold for 5 and 10 bidders. In particular, we observe the same types of intervals, where only the value of $\hat{\gamma}_1$ (the lowest $\gamma$ for which the action $(1,1)$ is played) changes. Furthermore, the monotonicity results shown in Table~\ref{tbl:eq} are identical on these intervals. Comparing across the graphs, we notice that $\hat{\gamma}_1$ increases with the number of bidders. That is, the items must display more complementarity for $(1,1)$ to be played in equilibrium when there is more competition. This is a result of the increasing cost associated with the bid $(1,1)$: the same strategy $\s$ results in a higher (second) price as the number of bidders playing the strategy increases. This leads to a higher expected cost of winning with a bid of 1, discouraging bidding 1 unless the type is sufficiently high. This effect is also reflected in a lower probability of playing $(1,0)$, $(0,1)$, and $(1,1)$ across $\gamma$ when $n=10$ compared to when $n=5$.
% (this argument does not apply when comparing $n=2$ with $n=10$ for $\gamma < 1$: i.e., when there is no free disposal)


\begin{figure}[ht!]
\begin{center}
\subfigure[$n=5$] {
	%\label{fig:}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n5_id}}
\subfigure[$n=10$] {
	%\label{fig:ub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n10_id}}
%\caption{Action distributions for auctions selling homogeneous items.\label{fig:id510}}
\caption{Action distributions for auctions selling homogeneous items. \label{fig:id510}}
\label{fig:ratios1}
\end{center}
\end{figure}






\subsubsection{Equilibrium Results for Heterogeneous Items}
\label{sec:het}
\noindent
Our next set of results considers auctions selling different items, where an agent has different valuations for the two items. To illustrate the effect of the degree of asymmetry, we run two sets of experiments with different relative item values. In the first set, the value of the item sold in one auction is $.7$ of the value of the item sold in the other ($\alpha=.7$, $\beta=1$). In the second set, one item is much less valuable: its value is only $.3$ of the value of the other item ($\alpha=.3$, $\beta=1$). Notice that the case of additive valuations, beyond which the items become complementary, occurs at $\gamma=\alpha+\beta$: $\gamma=1.7$ in the first case and $\gamma=1.3$ in the second.

\begin{figure}
\begin{center}
\subfigure[$\alpha=.7$, 2 bidders] {
	%\label{fig:}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n2_nonid1}}
\subfigure[$\alpha=.3$, 2 bidders] {
	%\label{fig:ub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n2_nonid2}}
\subfigure[$\alpha=.7$, 5 bidders] {
	%\label{fig:}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n5_nonid1_nolegend}}
\subfigure[$\alpha=.3$, 5 bidders] {
	%\label{fig:ub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n5_nonid2_nolegend}}
\subfigure[$\alpha=.7$, 10 bidders] {
	%\label{fig:}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n10_nonid1_nolegend}}
\subfigure[$\alpha=.3$, 10 bidders] {
	%\label{fig:ub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/bidsm2_d2_n10_nonid2_nolegend}}
\caption{Action distributions for auctions selling heterogeneous items.\label{fig:nonid}}
\end{center}
\end{figure}


Action distributions for each setting with 2, 5 and 10 bidders are plotted in Figure~\ref{fig:nonid}. As before, these results are averaged over 30 runs. Since the item sold in the second auction is more desirable, we can see that, in equilibrium, $(0,1)$ is played more often than $(1,0)$: the curve $h(0,1)$ is above $h(1,0)$ for all values of $\gamma$. Furthermore, we note that, even though the actions $(1,0)$ and $(0,1)$ are played on adjacent intervals, the switching of the optimal best response in each iteration, which we observed in the homogeneous case, does not occur here. After sufficiently many iterations, the bid $(1,0)$ is always selected by lower types, and the bid $(0,1)$ is always selected by the higher types. Thus, Table~\ref{tbl:btos} still provides an equilibrium strategy.


We observe similarities in the equilibrium structure of the homogeneous and heterogeneous cases. After identifying the regions where the set of bids played with non-zero probability does not change, we notice that, as in the homogeneous case, the probability of each bid within a region is monotonic in $\gamma$. However, the bids $(1,0)$ and $(0,1)$ are not symmetric when the items are heterogeneous, resulting in more regions. For $n=5$ and $n=10$, there are five regions, summarised in Table~\ref{tbl:eqnonsym}.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|l || c | c | c |c|c|c|}
\hline
 & $\gamma \in [0,\hat{\gamma}_1]$ & $\gamma \in [\hat{\gamma}_1,\hat{\gamma}_2]$ & $\gamma \in [\hat{\gamma}_2,\hat{\gamma}_3]$ & $\gamma \in [\hat{\gamma}_3,\hat{\gamma}_4]$ & $\gamma \in [\hat{\gamma}_4,\infty]$ & $\gamma = \infty$\\
\hline
$f(0,0)$ & decreases & decreases & increases & decreases & decreases & 0\\
$f(1,0)$ & 0 & increases & decreases & 0 & 0 & 0\\
$f(0,1)$ & increases & increases & decreases & decreases & 0 & 0\\
$f(1,1)$ & 0 & 0 & increases & increases & increases & 1\\
\hline
\end{tabular}
\caption{Equilibrium analysis for heterogeneous items and 2 bid levels. The bid $(1,1)$ is played with positive probability for $\gamma>\hat{\gamma}_2$. The bid $(1,0)$ is played with zero probability for $\gamma < \hat{\gamma}_1$ ($\hat{\gamma}_1$ is zero for $n=5$ and $n=10$) and $\gamma > \hat{\gamma}_3$.  The bid $(0,1)$ is played with zero probability for $\gamma > \hat{\gamma}_4$.\label{tbl:eqnonsym}}
\end{center}
\end{table}
Comparing the graphs for $n=5$ and $n=10$, we notice that, as in the homogeneous case, the probability of bidding $(0,0)$ is higher for $n=10$, while the probabilities of the other bids are lower. Comparing the two complementarity structures $\alpha=.7$ and $\alpha=.3$, we observe that the bid $(1,0)$ is played more often when the item is worth $.7$ than when it is worth $.3$ (symmetrically, the bid $(0,1)$ is played less often). This corresponds to higher competition for the item when it is more desirable.

\subsubsection{Equilibrium Results for Auctions with More Than Two Bid Levels}
\label{sec:multbids}
\noindent
The strategies analysed so far were limited to two bid levels. However, our technique can be applied to any number of bid levels. Here we discuss results for ten bid levels, but similar results hold for other numbers of bid levels. With more than two bid levels, there is no easy way to represent the results concisely for each value of $\gamma$ as we did before (since the number of possible actions is large). Therefore, we select a few representative values of $\gamma$ to illustrate the types of equilibria we find. To this end, Figure~\ref{fig:mbhom} shows equilibria for homogeneous items with a small degree of complementarity. The bid submitted in each auction is plotted as a function of type. We see that, consistent with the two-bid and continuous cases (see~\cite{krishna1996simultaneous}), in the case of homogeneous items the bids in both auctions are the same (i.e., the lines coincide) and are given by an increasing step function. In fact, we observe that for any $\gamma>\alpha+\beta$, the equilibrium follows this structure. 

\label{compitems}
In the case of complementary heterogeneous items, the strategy follows the same form: the bid in each auction is an increasing step function, which is also consistent with the results for 2 bid levels (see, e.g., Figure~\ref{fig:twobidstrategy}). This can be seen in Figure~\ref{fig:mbhet}, which shows equilibrium strategies for heterogeneous items with a small degree of complementarity. We tried many other parameter settings (i.e., changing the number of bidders, bid levels, and complementarity structures with $\gamma > \alpha + \beta$), and equilibria for all of them followed this structure. Moreover, we noticed that, for high degrees of complementarity, the step functions coincide as in the homogeneous case.

\begin{figure}[ht!]
\begin{center}
\subfigure[$\alpha=\beta=1$, $\gamma=2.2$] {
	\label{fig:mbhom}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/stratm2_n5_d10_id_all_11}}
\subfigure[$\alpha=.3$, $\beta=1$, $\gamma=1.4$] {
	\label{fig:mbhet}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/stratm2_n5_d10_nonid2_all_7}}
\caption{Complementary items: equilibrium strategies for 5 bidders with 10 bid levels.\label{fig:mb}}
\end{center}
\end{figure}


Our results for substitutable items (i.e., $\gamma<\alpha+\beta$) are not as conclusive, for two reasons. First, even though convergence to an $\epsilon$-equilibrium with a low error was always observed, multiple runs did not always produce the same equilibrium. Second, we could not discern any general patterns as we did in the complementary case. To illustrate this, we plot equilibrium strategies for two settings where multiple runs led to the same equilibrium. For weakly substitutable items, the equilibrium resembles the increasing step functions which characterised equilibria for complementary items; an example is shown in Figure~\ref{fig:mbsub1}. However, when the items are stronger substitutes, the equilibrium is more difficult to describe. Figure~\ref{fig:mbsub2} shows the equilibrium when the items are close to being perfect substitutes.  %A special where we can discern a pattern is for items where one item is much more valuable than the other, and the value of the second item
\begin{figure}[ht!]
\begin{center}
\subfigure[$\alpha=.7$, $\beta=1$, $\gamma=1.6$] {
	\label{fig:mbsub1}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/stratm2_n2_d5_nonid1_all_8}
	}
\subfigure[$\alpha=.7$, $\beta=1$, $\gamma=1$] {
	\label{fig:mbsub2}
	\includegraphics[width=.45\textwidth]{../../results/GRAPHS/stratm2_n2_d5_nonid1_all_5}
	}
\caption{Substitutable items: equilibrium strategies for 2 bidders with 5 bid levels.\label{fig:mbsub}}
\end{center}
\end{figure}

Before proceeding to the analytical characterisation of equilibria, we note that all of the numerical results described in this section are for two simultaneous auctions. The algorithm is applicable to any number of auctions; however, we chose to study this case as it is already complex enough (and has not been solved before). Furthermore, computing the fair tie-breaking rule for three or more auctions becomes too cumbersome (the case with two auctions is already involved, as can be seen in~\ref{app:222}). We emphasise that settings with 3 or more simultaneous auctions can be studied with our FP algorithm given an approximation of the tie-breaking rule.


In the next section, we provide an analytical characterisation of equilibria. The equilibria that we derive there analytically confirm our numerical results from Section~\ref{sec:hom}.

%The next section we prove this analytically for the case of homogeneous items and binary bids --- the setting that started our numerical investigation.




\section{Analytical Characterisation of Equilibria for Linear Utilities}
\label{sec:analytical}
\noindent
This section provides an analytical characterisation of equilibria for the case when agents' utilities are linear in type (as defined in Section~\ref{sec:linu}). In more detail, we reduce the problem of finding equilibria to solving systems of polynomial equations. While this characterisation holds for all games with linear utilities, deriving an equilibrium relies on the ability to solve the systems exactly. We demonstrate that this can be done for the simultaneous auction games studied in Section~\ref{sec:hom}. Specifically, we analytically derive the equilibria, and prove their uniqueness for each complementarity structure, in simultaneous auctions for two homogeneous items with two bid levels and two bidders. 
%The characterisation can be used to derive equilibria in simultaneous auctions when the number of bidders, bids, and auctions is small. 
%We then apply the technique to find equilibria for the simultaneous auctions setting studied in Section~\ref{sec:hom} 
%\commente{We need to say something about how general this is. In other words, how difficult is it to apply the same characterisation to find equilibria in other settings?}
%Specifically, for the case of two identical auctions, two distinct bids, and two bidders, we analytically derive the equilibria and prove their uniqueness for each complementarity structure. 
%\commente{Do we need to discuss why we do not look at non-identical at this point, or more bids, etc?}

We compare these equilibria to the empirical results in Section~\ref{sec:hom}. This comparison is important because, even though the empirical results generally converge to a very small error, a small error only means that deviating from the approximate equilibrium strategy yields at most a small benefit. There is no guarantee that the approximate equilibrium is similar to the theoretical Nash equilibrium (in terms of the action distributions). Given this, the results in this section confirm that the $\epsilon$-Nash equilibria discovered for this setting coincide with the unique exact equilibria derived below, validating the fictitious play approach (at least in the simultaneous auctions domain). This holds for the entire range of complementarity structures for identical auctions studied in Section~\ref{sec:hom}.





Our analytical characterisation begins with the analysis of the best response. We continue using the best-response representation from Section~\ref{sec:br}. Thus, a best response is specified by a set of $m'$ actions $A'\subseteq A$, ordered according to slope, and the corresponding intervals (represented by an increasing vector $\i \in \R^{m'-1}$) on which each action is played: action $a'_j$ is the best-response action on the interval $[\i_{j-1}, \i_{j}]$. This representation is without loss of generality for pure-strategy best responses: the actions of any best response are increasing in the slope of the utility lines\footnote{This follows from the fact that the best-response function (see Equation~(\ref{eq:br})) is convex.} and a single action is played for each type.
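To make this representation concrete, the following sketch (a minimal illustration, not the paper's implementation; the action labels are hypothetical) computes the pair $(A',\i)$ as the upper envelope of a finite set of utility lines $b + m\type$ over $[0,1]$:

```python
# Minimal sketch: recover the pair (A', i) from a finite set of utility
# lines u(theta) = b + m*theta on [0, 1]. Each action is (label, b, m).

def upper_envelope(lines):
    """Return (actions, breakpoints): the support actions ordered by slope
    and the interior breakpoints i_1 < ... < i_{m'-1} of the envelope."""
    # Sort by slope; among equal slopes only the highest intercept can matter.
    lines = sorted(lines, key=lambda l: (l[2], l[1]))
    best = {}
    for name, b, m in lines:
        best[m] = (name, b, m)            # later (higher-intercept) entries win
    lines = sorted(best.values(), key=lambda l: l[2])

    hull, cuts = [], []                   # lines on the envelope; breakpoints
    for line in lines:
        while hull:
            _, b1, m1 = hull[-1]
            _, b2, m2 = line
            x = (b1 - b2) / (m2 - m1)     # intersection with the current top line
            if cuts and x <= cuts[-1]:    # top line is never on top: drop it
                hull.pop(); cuts.pop()
            else:
                hull.append(line); cuts.append(x)
                break
        else:
            hull.append(line)
    # Keep only the part of the envelope intersecting the type space [0, 1].
    while cuts and cuts[0] <= 0:
        hull.pop(0); cuts.pop(0)
    while cuts and cuts[-1] >= 1:
        hull.pop(); cuts.pop()
    return [l[0] for l in hull], cuts
```

For example, for three hypothetical lines with increasing slopes, a dominated middle line is dropped and the envelope consists of the remaining two actions with a single breakpoint.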
Whereas in Section~\ref{sec:br} we presented an algorithmic procedure for finding a best response (see Figure~\ref{fig:brlinear}), we now provide an analytical characterisation. First, we give a few observations that follow immediately from the best-response structure.


Recall from equilibrium results for single-item auctions with continuous actions that the equilibrium action (i.e., bid) increases in type. However, in our model the set of actions is discrete and may have a more complex underlying structure (as is the case, for example, in the simultaneous auctions model, where bids are not single-dimensional), so there is no self-evident total order among the actions. To remedy this, we consider the slopes of the utility lines, which provide a total order over actions. As we noted in Observation~\ref{obs:convex}, the slopes of the actions played in a best-response strategy increase in type.
Since an equilibrium is a best response, the same applies to equilibrium actions, providing an equilibrium monotonicity condition.
\begin{observation}
\label{obs:monotonicity}
The slope of the utility envelope of an equilibrium strategy increases in type.
\end{observation}
A direct consequence of this is another observation.
\begin{observation}
If all slopes are distinct in equilibrium, an action cannot be played on more than one interval.
\end{observation}


For a given action distribution $h$, we say that $a_i\preceq a_j$ if the slope of $a_i$'s utility line is no greater than that of $a_j$'s utility line. With this notation, a best response is characterised by the following lemma.
\begin{lemma}
\label{lem:ue}
Given a set of available actions $A = \{a_1\preceq \ldots \preceq a_m\}$, the pair $(A'=\{a'_1 \preceq \ldots \preceq a'_{m'}\},\i)$ is a pure-strategy best response to the action distribution $h$ if and only if the following equations are satisfied:
\begin{align}
& A' \subseteq A \label{eq:subset}\\
& 0 < \i_1 < \ldots < \i_{m'-1} < 1 \label{eq:const}\\
& \uet(\i_{j},a'_j,h) = \uet(\i_{j},a'_{j+1},h) \quad \forall\ 1 \le j \le m'-1 \label{eq:ut}\\
& \uet(0,a'_1,h) \ge \uet(0,a_k,h) \quad \forall\  a_k \prec a'_1 \label{eq:utineq1}\\
& \uet(\i_{j},a'_j,h) \ge \uet(\i_{j},a_k,h) \quad \forall\ 1 \le j \le m' \quad a'_j\prec a_k\prec a'_{j+1} \label{eq:utineq2}
\end{align}
where $\i_0 = 0$, $\i_{m'} = 1$, and $a'_{m'+1}$ is a dummy action (i.e., it does not appear in $A$) whose slope is above that of $a_{m}$ (this dummy action is used in Equation~(\ref{eq:utineq2})).
\end{lemma}
\begin{proof}
See~\ref{app:lemmaproof}.
\end{proof}
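As an illustration of the lemma, its conditions can be checked mechanically once the utility lines induced by $h$ are known. The sketch below (a hypothetical helper, not the paper's code) verifies a candidate pair $(A',\i)$: the breakpoint ordering, the indifference equalities, and, using linearity, the dominance inequalities at the endpoints of each piece:

```python
def is_best_response(lines, support, cuts, tol=1e-9):
    """Check the lemma's conditions for a candidate (A', i).

    lines:   {action: (intercept, slope)} for every action in A, i.e.
             uhat(theta, a, h) = intercept + slope * theta
    support: the actions a'_1, ..., a'_m' ordered by slope
    cuts:    the interior breakpoints i_1 < ... < i_{m'-1}
    """
    u = lambda a, th: lines[a][0] + lines[a][1] * th
    # Breakpoints must be strictly increasing inside (0, 1).
    bounds = [0.0] + list(cuts) + [1.0]
    if any(x >= y for x, y in zip(bounds, bounds[1:])):
        return False
    # Adjacent support actions must be indifferent at each breakpoint.
    for j, c in enumerate(cuts):
        if abs(u(support[j], c) - u(support[j + 1], c)) > tol:
            return False
    # At the endpoints of each piece, the played action must weakly dominate
    # every line in A; by linearity this implies dominance on the whole
    # piece, which covers the inequality conditions of the lemma.
    for j, a in enumerate(support):
        for th in (bounds[j], bounds[j + 1]):
            if any(u(b, th) > u(a, th) + tol for b in lines):
                return False
    return True
```

Any candidate whose pieces fail dominance at an endpoint (e.g., a support that omits the action with the steepest slope when that action is optimal near type $1$) is rejected.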


The analytical characterisation of a best response provides a partial characterisation of equilibrium: each equilibrium strategy is a best response. To be an equilibrium, the best-response strategy must, in addition, be a best response to itself. We formalise this in the theorem below.
\begin{theorem}
\label{cor:symeq}
A strategy $\s$ is a pure-strategy symmetric equilibrium of the game $\Gamma = \langle N,A,u(\cdot),\types,F(\cdot) \rangle$ with $\uet(\type,\s(\type),\d_{\s})$ linear in $\type$ if and only if: 
\begin{align*}
& s(\theta) = a'_j \quad \text{for } \theta \in [\i_{j-1},\i_{j}],\ 1 \le j \le m'
\end{align*}
where $(A'=\{a'_1 \preceq \ldots \preceq a'_{m'}\},\i)$ satisfies
Equations~(\ref{eq:subset})-(\ref{eq:utineq2}) as well as:
\begin{align}
& h(a'_j) = F(\i_j)-F(\i_{j-1}) \quad \forall\ 1 \le j \le m'\label{eq:pdf}\\
& h(a_j) = 0 \quad \forall a_j \notin A'\label{eq:pdf2}
\end{align}
\end{theorem}
\begin{proof}
A strategy is an equilibrium if and only if it is a best response (and thus can be represented by a pair $(A',\i)$) to itself: i.e., to the action distribution it induces. Equations~(\ref{eq:subset})-(\ref{eq:utineq2}) ensure the strategy is a best response. The action distribution corresponding to a strategy $(A',\i)$ is easy to express analytically: the probability of playing an action $a'_j \in A'$ equals the probability that the type falls in the interval $[\i_{j-1},\i_j]$ (Equation~(\ref{eq:pdf})), while the probabilities of all other actions are zero (Equation~(\ref{eq:pdf2})).
\end{proof}

A direct way to search for an equilibrium is to check, for each possible subset of actions $A' \subseteq A$, whether there exist parameters $\i$ satisfying the best-response equations (\ref{eq:const})-(\ref{eq:utineq2}) and the action-distribution equations (\ref{eq:pdf})-(\ref{eq:pdf2}).
Although, in general, the equations can be arbitrarily complex depending on the distribution of types and the number of players, they are almost always numerically solvable (see, e.g.,~\cite{courtois_etal_2000,sturmfels_2002_book_polyeq,mathematica,eberly_2000}). A complete analytical characterisation is tractable when the number of actions is small. In the next section, we provide such a characterisation for the simultaneous auctions setting with 2 bid levels studied in Section~\ref{sec:hom}.
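As a toy illustration of solving the indifference equations numerically, consider a hypothetical setting that is {\it not} one of the paper's: a single first-price auction with two bidders, uniform types on $[0,1]$, and two bid levels, $0$ and $0.4$ (ties broken uniformly). A threshold strategy with breakpoint $c$ induces the expected-utility lines coded below, and the equilibrium breakpoint is the root of the indifference condition, found here by bisection:

```python
# Hypothetical two-action example (not one of the paper's settings):
# a single first-price auction, two bidders, uniform types on [0, 1],
# bid levels {0, 0.4}. The opponent bids 0 below c and 0.4 above c.

def u_low(theta, c):
    # bid 0: win only when the opponent also bids 0 (prob c), tie-break 1/2
    return theta * c / 2.0

def u_high(theta, c):
    # bid 0.4: beat a 0-bid (prob c) or tie at 0.4 (prob (1-c)/2), pay 0.4
    return (theta - 0.4) * (1.0 + c) / 2.0

def solve_breakpoint(lo=0.0, hi=1.0, iters=60):
    """Bisection on g(c) = u_low(c, c) - u_high(c, c), which is decreasing."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if u_low(mid, mid) - u_high(mid, mid) > 0.0:
            lo = mid          # the low bid is still better at the candidate
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For this toy example the indifference condition reduces to $0.6c = 0.4$, so the bisection converges to $c = 2/3$. In general, each candidate support $A'$ yields a system of such equations in the vector $\i$.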


\subsection{Two Identical Items, Two Bidders, Two Bids Per Auction, Uniform Distribution of Types}
\label{sec:222}
\noindent
In this section, we use the above characterisation to provide an analytical derivation of the equilibrium for the simultaneous auctions setting studied numerically in Section~\ref{sec:hom}. 
Specifically, we restrict our attention to 2 auctions, each selling an identical item, and 2 bidders with types uniformly distributed between $0$ and $1$. Furthermore, there are 2 bid levels per auction: $0$ and $1$. The set of possible joint bids is therefore $A = \{(0,0)\ (0,1)\ (1,0)\ (1,1)\}$. As before, we set $\alpha=\beta=1$. The only remaining complementarity parameter is $\gamma$, which determines how much more or less an agent values having both items.
In the following, we analytically derive equilibria as a function of $\gamma$. 

We start by making several observations. First, due to the uniform type distribution, the action distribution in Equation~(\ref{eq:pdf}) induced by a strategy $(A',\i)$ becomes:
\begin{align*}
& h(a'_j;\i) = F(\i_j)-F(\i_{j-1}) =  \i_j - \i_{j-1}
\end{align*}

Second, we note that the actions $(0,1)$ and $(1,0)$ must be played with equal probability in equilibrium. To see this, suppose that $(1,0)$ is played more often than $(0,1)$. Then, the probability that the second auction has a price of $0$ is higher. However, since the agent is indifferent between winning either item, the best response is to play $(0,1)$ more often. Therefore, in equilibrium, the probabilities of playing $(1,0)$ and $(0,1)$ are the same, and these actions have an identical utility line in the best response. As a result, the region on which either of the bids is played is a single contiguous interval. That is, if these bids are played in an equilibrium on the interval $[\i_1,\i_2]$, then there is a continuum of equivalent equilibria where:
\begin{align}
& s(\theta_i) = (0,1) \text{ or } (1,0) \mid h((1,0)) = h((0,1)) = \frac{\i_2-\i_1}{2} && \text{if } \theta_i \in [\i_1,\i_2] \label{eq:continuum}
\end{align}
This explains the switching behaviour we observed in fictitious play (see Section~\ref{sec:hom}): at the equilibrium point, any order of the bids $(1,0)$ and $(0,1)$ is acceptable. However, any small deviation from $h((1,0))=h((0,1))$ leads to a unique preferred order.
For notational convenience, since the probabilities of playing $(1,0)$ and $(0,1)$ are the same in equilibrium, in the following we merge these two actions into a single action and refer to it as $(1,0)$. Specifically, saying that the action $(1,0)$ is played on the interval $[\i_{j-1},\i_j]$ means that the actions $(1,0)$ and $(0,1)$ are played with equal probabilities on this interval.

Given the above observation, we can identify a unique order of the slopes of the three actions. Note that, regardless of $h$, the action $(1,0)$ wins in all cases in which the action $(0,0)$ wins. Similarly, the action $(1,1)$ wins in all cases in which the action $(1,0)$ wins. Thus, $(0,0)$ has the lowest slope, $(1,0)$ is next, and $(1,1)$ has the highest slope. In particular, notice that this order does not depend on $\gamma$. Now, following Observation~\ref{obs:monotonicity}, in equilibrium the slope increases in type, which means that $(1,0)$ (or $(0,1)$) is played by higher types than $(0,0)$, and $(1,1)$ is played by higher types than $(1,0)$. Note that this is consistent with the equilibrium strategy described in Table~\ref{tbl:btos}.

The next step is to determine which actions are played in equilibrium, and with what probabilities. It is easy to see that, for any action distribution, $(0,0)$ is the best response for sufficiently low types and, thus, is played with positive probability in any equilibrium. However, it is never the case that $(0,0)$ is the only action in the support. These observations imply that the possible sets of equilibrium actions are $\{(0,0)\ (1,0)\}$, $\{(0,0)\ (1,1)\}$, and the set of all actions $\{(0,0)\ (1,0)\ (1,1)\}$. In fact, as we will show, each of these sets corresponds to an equilibrium for some range of complementarity structures.

As an example, consider the set $A' = \{(0,0)\ (1,0)\}$. In the notation of Lemma~\ref{lem:ue}, $m' = |A'| = 2$ and, to establish the probability with which each action is played, we look for the intersection point $0\le \i_1 \le 1$ satisfying:
\begin{align*}
& \uet(\i_1,(0,0),h(\cdot;\i)) = \uet(\i_1,(1,0),h(\cdot;\i))\\
& \uet(1,(1,0),h(\cdot;\i)) \ge \uet(1,(1,1),h(\cdot;\i))
\end{align*}
A solution exists only for $0<\gamma \le 2(2 - \sqrt{2})$ and is unique:
\begin{align*}
& \i_1 = \frac{-4 - \gamma + \sqrt{16\gamma + \gamma^2}}{-4 + 2\gamma}
\end{align*}
Carrying out a similar analysis, we derive equilibria for the other 2 action sets. Details of these derivations can be found in~\ref{app:derivation}. The derivations show that there is a {\it unique} equilibrium (up to the interchangeability of actions $(0,1)$ and $(1,0)$) for each value of $\gamma$. More formally:
\begin{theorem}
\label{thm:eq222}
The simultaneous auctions game defined by 2 bidders, actions 
\begin{align*}
A = \{(0,0)\ (0,1)\ (1,0)\ (1,1)\}
\end{align*}
uniform distribution of types in $[0,1]$, and complementarity structure $\alpha=\beta=1$ and $\gamma > 0$ has a unique\footnote{We are treating all equilibria given by Equation~(\ref{eq:continuum}) as one.} equilibrium defined below for every value of $\gamma$.\\
For $0<\gamma \le 2(2 - \sqrt{2})$ the equilibrium is $A'=\{(0,0)\ (1,0)\}$ and $\i = (\i_1)$ where:
\begin{align*} 
& \i_1 = \frac{-\gamma-4 + \sqrt{\gamma^2 + 16\gamma}}{2(\gamma-2)}
\end{align*}
For $2(2 - \sqrt{2}) < \gamma < 2$  the equilibrium is $A'=\{(0,0)\ (1,0)\ (1,1)\}$ and $\i = (\i_1,\i_2)$ where:
\begin{align*}
& \i_1 = \frac{2 \left(2-2 \gamma+\sqrt{-\gamma^2+\gamma^3}\right)}{4-4 \gamma+\gamma^2}\\
& \i_2 = \frac{-6 \gamma^2+4 \sqrt{(-1+\gamma) \gamma^2}+2 \gamma \left(2+\sqrt{(-1+\gamma) \gamma^2}\right)}{(-2+\gamma)^2 \left(-\gamma+\sqrt{(-1+\gamma) \gamma^2}\right)}
\end{align*}
For $\gamma=2$  the equilibrium is $A'=\{(0,0)\ (1,1)\}$ and $\i = (.5)$.\\
For $2 < \gamma$  the equilibrium is $A'=\{(0,0)\ (1,1)\}$ and $\i = (\i_1)$ where:
\begin{align*}
& \i_1 = \frac{-6-\gamma+\sqrt{-28+44 \gamma+\gamma^2}}{4 (-2+\gamma)}
\end{align*}
\end{theorem}
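The closed-form breakpoints in the theorem can be sanity-checked numerically. The following sketch (for illustration only; it simply evaluates the expressions stated above) confirms that the breakpoints join continuously at the boundaries between the $\gamma$ regions:

```python
import math

# Breakpoints from the theorem, one function per gamma region.
def i1_low(g):                       # region 0 < g <= 2(2 - sqrt(2))
    return (-g - 4 + math.sqrt(g * g + 16 * g)) / (2 * (g - 2))

def i1_mid(g):                       # region 2(2 - sqrt(2)) < g < 2
    return 2 * (2 - 2 * g + math.sqrt(-g**2 + g**3)) / (4 - 4 * g + g**2)

def i2_mid(g):
    r = math.sqrt((g - 1) * g * g)
    return (-6 * g**2 + 4 * r + 2 * g * (2 + r)) / ((g - 2) ** 2 * (-g + r))

def i1_high(g):                      # region g > 2
    return (-6 - g + math.sqrt(-28 + 44 * g + g * g)) / (4 * (g - 2))

g_star = 2 * (2 - math.sqrt(2))      # boundary where (1,1) enters the support
```

At $\gamma = 2(2-\sqrt{2})$ both branches give $\i_1 = \sqrt{2}-1$ and the middle branch gives $\i_2 = 1$, so $(1,1)$ enters the support with probability $0$; as $\gamma \to 2$ from either side, the breakpoints approach $.5$.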
Figure~\ref{fig:eq} plots the action distributions defined in the theorem above. Notice that the graph is virtually identical to Figure~\ref{fig:bidsm2_d2_n2_id}, which was obtained via numerical simulations. As we conjectured in Section~\ref{sec:hom}, the equilibrium probabilities of each action are continuous in $\gamma$. The structure identified in Table~\ref{tbl:eq} is also confirmed. Using the analytical characterisation, we determined that the smallest value of $\gamma$ for which the bid $(1,1)$ is played in equilibrium is $2(2 - \sqrt{2})$. This provides the exact value of $\hat{\gamma}_1$, which we roughly estimated to be around $1.2$ by looking at Figure~\ref{fig:bidsm2_d2_n2_id}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.45\textwidth]{../../results/GRAPHS/analytical}
\end{center}
\caption{Analytical Results: action distributions for auctions with 2 bidders selling homogeneous items.\label{fig:eq}}
\end{figure}




Furthermore, the value $\gamma = 2$ corresponds to independent auctions, so an equilibrium can be found for each auction separately. The single-auction equilibrium with possible bids $0$ and $1$ is easily derived: the unique equilibrium is to bid $0$ for types below $.5$ and to bid $1$ for types above $.5$. Combining the individual equilibrium strategies, we obtain the equilibrium strategy: bid $(0,0)$ for types below $.5$ and $(1,1)$ for types above. Each of the two equilibrium bids is played with probability $.5$, as can be seen in Figure~\ref{fig:eq} for $\gamma=2$.

Our analytical results show that, even though the approximate equilibrium strategy is not guaranteed to be similar to the actual Nash equilibrium strategy, in practice the empirical results are very close to the exact ones.


\section{Conclusions}
\label{sec:conclusions}
\noindent
In this work we generalise FP to games of incomplete information with discrete actions and continuous types. We prove that, if FP beliefs converge, a {\em pure}-strategy Bayesian-Nash equilibrium can be constructed from the beliefs' limit point. Our algorithm recovers this equilibrium in the case of (asymptotic) convergence. Furthermore, a pure $\epsilon$-equilibrium, for any $\epsilon > 0$, can be obtained after a finite number of iterations.

The key distinguishing feature of our FP approach is that it works directly with continuous types and remains scalable in the number of agents and actions. This is in contrast to other currently available solvers (e.g., those listed in Section~\ref{sec:related}), which are typically only able to find equilibria in settings with small discrete type spaces or two players. Although recent advances (such as graphical game representations and hybrid algorithms) allow discrete solvers to scale to larger type spaces, they nevertheless cannot accommodate the cardinally larger continuous type spaces. On the other hand, our algorithm is applicable to a large class of games with continuous type spaces, and each iteration of FP can be computed efficiently. Furthermore, our FP algorithm can be applied to a wide range of auction settings, providing equilibrium calculations that would otherwise require specialised analyses and solution algorithms.



To illustrate the efficacy of our algorithm, we performed a set of numerical experiments in which FP was applied to a range of simultaneous auctions settings where players have various combinatorial preferences over the items.
The experiments show that FP converges to a very small $\epsilon$ in the settings we investigate, providing an empirical characterisation of equilibria in a complex domain for which no general theoretical results exist. We then analyse these equilibria in detail. The results show that, for weakly complementary items, as we vary the complementarity structure, the changes in equilibrium bids are continuous (there are no jumps). Furthermore, we observe that the bids are monotonic within each range of the complementarity parameter, i.e., where the support of equilibrium bids does not change. These characteristics continue to hold as we increase the number of bid levels and the number of bidders, although the positions of the regions shift.



While the numerical results show convergence to $\epsilon$-Nash equilibria with very small $\epsilon$ (on the order of less than $1\%$ of the utility), there is no guarantee that such an equilibrium is close to the true pure Nash equilibrium (which is known to exist for our setting). Therefore, to further support our results, in addition to the algorithm we developed a full analytical characterisation for small settings. This shows that the equilibrium results are in fact unique, and correspond to those found by the FP algorithm.





We observe numerical convergence of FP to an $\epsilon$-equilibrium in all of our experiments, and to the same strategy in the experiments with complementary items. However, at the moment we are not able to prove convergence analytically. The problem also proves elusive in games of complete information, where results are available only for restricted settings (see, e.g.,~\cite{fudenberg_levine_98_book,monderer_shapley_96_FP}). In fact, there are counterexamples where FP is known not to converge. Identifying restricted settings of incomplete information in which the generalised version of fictitious play provably converges remains open for future work.


\label{futurework}
Also open for future research are applications of the technique presented here to other domains (e.g., multi-unit or combinatorial auctions), both as a means of testing the convergence properties of FP and as a means of obtaining numerical solutions with which to initiate a study of equilibrium properties in these domains. In fact, the technique outlined in this paper has recently been applied to compute equilibrium trading strategies in simultaneous double auctions~\cite{ecs20991}. The authors show that in such settings the FP algorithm consistently converges, allowing equilibrium trading strategies to be identified. They go even beyond a single double auction to multiple simultaneous double auctions, where buyers and sellers need to choose the double auction in which to place their bids and asks, respectively. This setting is complex due to the presence of both positive and negative network effects: buyers are attracted to double auctions with many sellers but would like to avoid competing buyers, and conversely for sellers.

\bibliographystyle{elsarticle-num}
\bibliography{documentnew}

\appendix

\section{Extensions}
\label{sec:extensions}
\noindent In this appendix we review some of the assumptions made in the main body of the paper regarding the setting, utility functions and equilibria. Specifically, we consider, in turn, the limitations of a symmetric setting, a single-dimensional type space, and the linear dependence of utility on type. In more detail, we show how the algorithm can be used to handle asymmetric settings and sketch extensions of our algorithms that resolve the remaining limitations.

\subsection{Asymmetric Fictitious Play}
\label{sec:asym}
\noindent It is straightforward to extend the algorithms presented in this paper to asymmetric settings where each player $i$ has a potentially different action space $A_i$, utility function $u_i(\cdot)$, type space $\types_i$, and distribution over types $F_i(\cdot)$, resulting in asymmetric equilibria. Formally, an {\it asymmetric} Bayesian game is defined by $\Gamma_{asym} = \langle N,\{ A_i,u_i(\cdot),\types_i,F_i(\cdot) \}_{i \in N} \rangle$.\footnote{Importantly, even though each agent has a different type distribution, $F_i(\cdot)$, we still require that these distributions are common knowledge. That is, we do not consider settings where some players have asymmetric beliefs about another player; extending the FP algorithm to such settings is non-trivial.} In equilibrium, each player $i \in N$ can have a different strategy $s_i(\cdot)$, resulting in an action distribution $\d_{s,i}(\cdot)$. Moreover, the expected utility of a player $i$ of type $\theta \in \Theta_i$ when playing action $\a_i$ in an asymmetric setting, given the action distributions of the {\it other} players $j \in N_{-i}=N\backslash\{i\}$, is defined as:

\begin{equation*}
 \uet_i(\type,\a_i,\{\d_{\s,j}\}_{j \in N_{-i}}) = \E_{\{Y_j\sim \d_{\s,j}\}_{j \in N_{-i}}}[\ut_i(\type,\a_i,\{Y_j\}_{j \in N_{-i}})].
\end{equation*}
Similarly, the expected utility from playing a strategy $\s'(\cdot)$ when the players $j \neq i$ play strategies $\s_j(\cdot)$ is $\ue_i(\s', \{\d_{\s,j}\}_{j \in N_{-i}}) = \E_{\type \sim X_i}[\uet_i(\type,\s'(\theta),\{\d_{\s,j}\}_{j \in N_{-i}})]$.
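When the opponents' action distributions have finite support, the expectation $\uet_i$ above reduces to a weighted sum over joint opponent action profiles. The following is a minimal sketch of this computation; the utility function \texttt{u\_i} and the dictionary-based representation of the distributions $\d_{\s,j}$ are assumptions of the sketch, not part of the paper's implementation:

```python
import itertools

def expected_utility(u_i, theta, a_i, action_dists):
    """Expected utility of player i of type theta playing action a_i,
    given independent finite-support action distributions of the other
    players. action_dists: one dict {action: probability} per opponent."""
    total = 0.0
    # Enumerate all joint opponent profiles (independence across players).
    for profile in itertools.product(*[d.items() for d in action_dists]):
        actions = tuple(a for a, _ in profile)
        prob = 1.0
        for _, p in profile:
            prob *= p
        total += prob * u_i(theta, a_i, actions)
    return total
```

For continuous action distributions the sum would be replaced by numerical integration, but the structure of the computation is unchanged.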

Given this we can define an {\it asymmetric} equilibrium as follows:

\begin{definition}
\label{def:asymeqstrategies}
A strategy profile $\s_i: \types_i \rightarrow A_i$, $i \in N$ is an asymmetric pure-strategy equilibrium of a game $\Gamma_{asym}$ if:
\begin{equation*}
\ue_i(\s_i, \{\d_{\s,j}\}_{j \in N_{-i}}) \ge \ue_i(\s', \{\d_{\s,j}\}_{j \in N_{-i}}) \quad \forall\ \s' \in \S_i, i \in N.
\end{equation*}
\end{definition}
\noindent
The remaining definitions can be modified analogously to the asymmetric setting.
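Definition~\ref{def:asymeqstrategies} can be checked numerically as an $\epsilon$-best-response test: no player may gain more than $\epsilon$ by deviating unilaterally. A hypothetical sketch, where \texttt{exp\_util(i, s)} plays the role of $\ue_i(\s,\{\d_{\s,j}\}_{j\in N_{-i}})$ with the opponents' action distributions baked in (the interface is an assumption of the sketch):

```python
def is_asym_equilibrium(exp_util, strategies, strategy_sets, eps=1e-9):
    """Check the asymmetric pure-strategy equilibrium condition:
    for every player i, s_i must be within eps of the best strategy
    in S_i, holding the other players' strategies fixed."""
    for i, s_i in enumerate(strategies):
        best = max(exp_util(i, s) for s in strategy_sets[i])
        if exp_util(i, s_i) < best - eps:
            return False  # player i has a profitable deviation
    return True
```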

We now turn to the FP algorithm in Figure~\ref{fig:fp}. To handle asymmetric settings, the algorithm needs to compute the best response and maintain a separate set of beliefs for each player $i \in N$, which are updated separately (note that, if a subset of the players are symmetric, these can be grouped together into a single representative player). The beliefs can be updated in one of two ways: simultaneously or sequentially. In the former case, the best response of each player is calculated based on the beliefs from the previous iteration (at time $t$). In the latter case, the FP beliefs of the players are updated one at a time, and the already-updated beliefs are used by the next player to calculate his best response. Although simultaneous updating is most commonly used in standard FP, Berger has shown that sequential updating actually has better convergence properties~\cite{berger_2006_WS}.

The amended FP algorithm, containing both alternative updating rules, is given in Figure~\ref{fig:afp}. Note that the \br\ procedure requires a minor modification of its input to include the index of the player whose strategy we are computing, as well as the action distribution of each player $j \neq i$. However, no other modifications to this algorithm are needed. Furthermore, note that the \btos\ procedure needs to be executed for each player. Finally, the convergence criterion needs to be modified, since the best response can produce a different $\epsilon$ for each player.

\begin{figure}[ht]
%{\scriptsize
{\center
\begin{tabular}{|l |} \hline \parbox{3.2 in} {
\begin{tabbing}
\textbf{Algorithm \afp} \\
\textbf{Input:} \:\:\: \=  game $\Gamma_{asym} = \langle N,\{ A_i,u_i(\cdot),\types_i,F_i(\cdot) \}_{i \in N} \rangle$,\\
\>initial beliefs $\d^0_i, i \in N$, update rule $\kappa$\\
\textbf{Output:} \> if converges, equilibrium strategy\\
\\
1:\quad \= set iteration count $t=0$ \\
2: \>  {\bf repeat}\\
3: \> \quad {\bf for} $i \in N$\\
4a:\> \quad \quad using simultaneous updating:\\
\> \quad \quad \quad strategy $s=\br(\Gamma_{asym}, i, \{\d^t_{j}\}_{j \in N_{-i}})$\\
4b:\> \quad \quad  using sequential updating:\\
\> \quad \quad \quad strategy $s=\br(\Gamma_{asym}, i, \{\d^{t+1}_{j}\}_{j < i}, \{\d^{t}_{j}\}_{j > i})$\\
5: \> \quad \quad compute the corresponding action distribution:\\
\> \quad \quad \quad $\forall a_i \in A_i: \d_{\s}(a_i)=\int_{\s^{-1}(a_i)}f_i(x)dx$\\
6: \> \quad \quad update beliefs of player $i$:\\
 \> \quad \quad \quad  $\forall a_i \in A_i: \d_i^{t+1}(a_i)=\kappa(t)\d_i^t(a_i)+(1-\kappa(t))\d_{\s}(a_i)$\\
7: \> \quad {\bf end for}\\
8: \> \quad set $t = t+1$\\
9:  \> {\bf until} $\converged$\\
10:  \> {\bf return} $\{\btos(\d_i^{t+1})\}_{i \in N}$
\end{tabbing}
} \\ \hline
\end{tabular} \\}
\caption{\label{fig:afp}Fictitious play algorithm for asymmetric games of incomplete information.}
\end{figure}
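As a minimal illustration of the loop in Figure~\ref{fig:afp}, the sketch below runs asymmetric FP over discrete beliefs, supporting both updating rules. The callables \texttt{br} (the \br\ computation), \texttt{act\_dist} (the action-distribution computation of step 5) and \texttt{kappa} are abstracted away and are assumptions of the sketch:

```python
def asymmetric_fp(br, act_dist, beliefs, kappa, rounds, sequential=False):
    """Asymmetric FP sketch over finite-support beliefs.
    beliefs: one dict {action: probability} per player.
    br(i, opponent_beliefs) -> strategy of player i.
    act_dist(i, strategy) -> dict {action: prob} induced by F_i."""
    n = len(beliefs)
    strategies = [None] * n
    for t in range(rounds):
        updated = [None] * n
        for i in range(n):
            # Simultaneous: all opponents' beliefs come from iteration t.
            # Sequential: players j < i were already updated this iteration.
            view = [updated[j] if (sequential and j < i) else beliefs[j]
                    for j in range(n)]
            strategies[i] = br(i, [view[j] for j in range(n) if j != i])
            d_s = act_dist(i, strategies[i])
            # Step 6: convex combination of old beliefs and new distribution.
            k = kappa(t)
            support = set(beliefs[i]) | set(d_s)
            updated[i] = {a: k * beliefs[i].get(a, 0.0)
                             + (1 - k) * d_s.get(a, 0.0)
                          for a in support}
        beliefs = updated
    return strategies, beliefs
```

With $\kappa(t)=t/(t+1)$ this reproduces the standard empirical-average belief update; a convergence test on the per-player $\epsilon$'s would replace the fixed \texttt{rounds} bound in a full implementation.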


\subsection{Multi-dimensional types and non-linear utility}
\label{sec:multtype}
\noindent First, taking a closer look at the \fp\ algorithm depicted in Figure~\ref{fig:fp}, it is easy to see that it does not directly depend on the dimensionality of the type space. Rather, it is in the best-response calculation and the $\btos$ procedure that we made explicit use of our assumptions. Therefore, it is in these procedures that we need to relax our assumptions of type-space dimensionality and utility linearity.

Second, both the calculation of the best response and the \btos\ procedure are based on a particular division of the type space: specifically, a breakdown of the type space into action-equivalent subsets with respect to the upper envelope of the best response. Formally, for a belief $\d$, we define for every action $a\in A$ the set $\widetilde{N}_a(\d)=\{\type\in\types|\uet(\type,a,\d)\geq\uet(\type,a',\d)\ \forall a'\in A\}$. Given a particular lexicographic ordering $\preceq$ of actions, we can refine these sets into a collection of disjoint sets $\{I_a\}_{a\in \widetilde{A}\subseteq A}$ (e.g., by setting $I_a=\widetilde{N}_a\setminus\bigcup\limits_{a'\prec a} \widetilde{N}_{a'}$, and purging empty $I_a$'s). Notice that the collection $\{I_a\}_{a\in\widetilde{A}}$ is a partition of the type space, so that $\bigcup\limits_{a\in \widetilde{A}}I_a=\types$. In fact, a specific collection fully characterises the best response to a belief $\d$. In particular, in the case of a single-dimensional type space, this led to an interval structure. Notice that \btos\ simply utilises the collection $\{I_a\}_{a\in\widetilde{A}}$ to define a policy $\s(\type)=\arg\max\limits_{a\in \widetilde{A}}\mathbb{1}_{I_a}(\type)$.
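For the single-dimensional, linear case, the collection $\{I_a\}$ can be approximated on a grid by recording, at each type, the first action (in the ordering $\preceq$) attaining the upper envelope of the expected utilities. A sketch under the assumption that each action's expected utility is given as a slope--intercept pair (a hypothetical representation, chosen for illustration):

```python
import numpy as np

def interval_partition(lines, lo, hi, grid=1000):
    """Approximate the action-equivalent sets I_a on [lo, hi] for linear
    expected utilities u_a(theta) = m*theta + c. Ties on the upper
    envelope are broken by action index (the lexicographic order).
    Returns a list of (action_index, theta_start, theta_end) intervals."""
    thetas = np.linspace(lo, hi, grid)
    vals = np.array([[m * th + c for th in thetas] for (m, c) in lines])
    # argmax returns the first maximal index, implementing the tie-break.
    winners = vals.argmax(axis=0)
    # Collapse consecutive runs of the same winner into intervals.
    intervals, start = [], 0
    for k in range(1, grid):
        if winners[k] != winners[k - 1]:
            intervals.append((int(winners[start]), thetas[start], thetas[k - 1]))
            start = k
    intervals.append((int(winners[start]), thetas[start], thetas[-1]))
    return intervals
```

For example, two actions with utilities $1$ and $\theta$ on $[0,2]$ yield two intervals meeting at $\theta=1$, recovering the interval structure described above; an exact implementation would intersect the lines analytically rather than sample a grid.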

Furthermore, the construction of the collection $\{I_a\}$ formally requires no assumption on the dimensionality of the type space, nor on the linearity of the utility function. Rather, these properties affect {\em only} the efficiency of the collection's representation. For instance, in the case of a single-dimensional type space, ordering the actions (by the slope of their utility) created a linear, fully ordered structure -- the interval structure. For a two-dimensional type space, on the other hand, such an ordering is infeasible. However, alternative representations of such collections are possible. In fact, the field of computational geometry provides an extensive arsenal of such representations and algorithms, ranging from envelopes of piecewise linear functions (see, e.g.,~\cite{edelsbrunner_etal_89}) to complex analytical curves (see, e.g.,~\cite{sethian_99}).

Finally, notice that only the proof of Theorem~\ref{b2s_is_epsilonning} made any use of the interval structure. Specifically, it relied on the fact that the {\em geometry} of the interval structure is similar for similar beliefs $\d^t$ and $\d^*$. This statement, however, can be reproduced for any representation of the collection $\{I_a\}$, be it a set of intervals or a Delaunay triangulation, as long as it is consistent with some partial transitive ordering of the actions for all beliefs $\d$. Hence, by augmenting the representation of $\{I_a\}$, both the \btos\ procedure and Theorem~\ref{b2s_is_epsilonning} can be adapted to hold for any dimensionality of the type space (or a non-linear utility).

\input{appendix.tex}

\end{document}
