id | category | text | title | published | author | link | primary_category |
---|---|---|---|---|---|---|---|
35,699 | th | We consider descending price auctions for selling $m$ units of a good to unit
demand i.i.d. buyers where there is an exogenous bound of $k$ on the number of
price levels the auction clock can take. The auctioneer's problem is to choose
price levels $p_1 > p_2 > \cdots > p_{k}$ for the auction clock such that
the auction's expected revenue is maximized. The price levels are announced prior to
the auction. We reduce this problem to a new variant of prophet inequality,
which we call \emph{batched prophet inequality}, where a decision-maker chooses
$k$ (decreasing) thresholds and then sequentially collects rewards (up to $m$)
that are above the thresholds with ties broken uniformly at random. For the
special case of $m=1$ (i.e., selling a single item), we show that the resulting
descending auction with $k$ price levels achieves $1- 1/e^k$ of the
unrestricted (without the bound of $k$) optimal revenue. That means a
descending auction with just 4 price levels can achieve more than 98\% of the
optimal revenue. We then extend our results for $m>1$ and provide a closed-form
bound on the competitive ratio of our auction as a function of the number of
units $m$ and the number of price levels $k$. | Descending Price Auctions with Bounded Number of Price Levels and Batched Prophet Inequality | 2022-03-02 22:59:15 | Saeed Alaei, Ali Makhdoumi, Azarakhsh Malekian, Rad Niazadeh | http://arxiv.org/abs/2203.01384v1, http://arxiv.org/pdf/2203.01384v1 | cs.GT |
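As a quick check of the bound quoted in this abstract (our arithmetic, not part of the paper's text), the $k=4$ claim follows from

$$1 - \frac{1}{e^{k}}\bigg|_{k=4} = 1 - e^{-4} \approx 1 - 0.0183 = 0.9817 > 0.98.$$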
35,700 | th | A group of players are supposed to follow a prescribed profile of strategies.
If they follow this profile, they will reach a given target. We show that if
the target is not reached because some player deviates, then an outside
observer can identify the deviator. We also construct identification methods in
two nontrivial cases. | Identifying the Deviator | 2022-03-08 01:11:07 | Noga Alon, Benjamin Gunby, Xiaoyu He, Eran Shmaya, Eilon Solan | http://arxiv.org/abs/2203.03744v1, http://arxiv.org/pdf/2203.03744v1 | math.PR |
35,701 | th | In the classical version of online bipartite matching, there is a given set
of offline vertices (aka agents) and another set of vertices (aka items) that
arrive online. When each item arrives, its incident edges -- the agents who
like the item -- are revealed and the algorithm must irrevocably match the item
to such agents. We initiate the study of class fairness in this setting, where
agents are partitioned into a set of classes and the matching is required to be
fair with respect to the classes. We adopt popular fairness notions from the
fair division literature such as envy-freeness (up to one item),
proportionality, and maximin share fairness to our setting. Our class versions
of these notions demand that all classes, regardless of their sizes, receive a
fair treatment. We study deterministic and randomized algorithms for matching
indivisible items (leading to integral matchings) and for matching divisible
items (leading to fractional matchings). We design and analyze three novel
algorithms. For matching indivisible items, we propose an
adaptive-priority-based algorithm, MATCH-AND-SHIFT, prove that it achieves
1/2-approximation of both class envy-freeness up to one item and class maximin
share fairness, and show that each guarantee is tight. For matching divisible
items, we design a water-filling-based algorithm, EQUAL-FILLING, that achieves
(1-1/e)-approximation of class envy-freeness and class proportionality; we
prove (1-1/e) to be tight for class proportionality and establish a 3/4 upper
bound on class envy-freeness. Finally, we build upon EQUAL-FILLING to design a
randomized algorithm for matching indivisible items, EQUAL-FILLING-OCS, which
achieves 0.593-approximation of class proportionality. The algorithm and its
analysis crucially leverage the recently introduced technique of online
correlated selection (OCS) [Fahrbach et al., 2020]. | Class Fairness in Online Matching | 2022-03-08 01:26:11 | Hadi Hosseini, Zhiyi Huang, Ayumi Igarashi, Nisarg Shah | http://arxiv.org/abs/2203.03751v1, http://arxiv.org/pdf/2203.03751v1 | cs.GT |
35,702 | th | Operant keypress tasks, where each action has a consequence, have been
analogized to the construct of "wanting" and produce lawful relationships in
humans that quantify preferences for approach and avoidance behavior. It is
unknown if rating tasks without an operant framework, which can be analogized
to "liking", show similar lawful relationships. We studied three independent
cohorts of participants (N = 501, 506, and 4,019 participants) collected by two
distinct organizations, using the same 7-point Likert scale to rate negative to
positive preferences for pictures from the International Affective Picture Set.
Picture ratings without an operant framework produced similar value functions,
limit functions, and trade-off functions to those reported in the literature
for operant keypress tasks, all with goodness-of-fit values above 0.75. These value,
limit, and trade-off functions were discrete in their mathematical formulation,
recurrent across all three independent cohorts, and demonstrated scaling
between individual and group curves. In all three experiments, the computation
of loss aversion showed 95% confidence intervals below the value of 2, arguing
against a strong overweighting of losses relative to gains, as has previously
been reported for keypress tasks or games of chance with calibrated
uncertainty. Graphed features from the three cohorts were similar and argue
that preference assessments meet three of four criteria for lawfulness,
providing a simple, short, and low-cost method for the quantitative assessment
of preference without forced choice decisions, games of chance, or operant
keypressing. This approach can easily be implemented on any digital device with
a screen (e.g., cellphones). | Discrete, recurrent, and scalable patterns in human judgement underlie affective picture ratings | 2022-03-12 17:40:11 | Emanuel A. Azcona, Byoung-Woo Kim, Nicole L. Vike, Sumra Bari, Shamal Lalvani, Leandros Stefanopoulos, Sean Woodward, Martin Block, Aggelos K. Katsaggelos, Hans C. Breiter | http://arxiv.org/abs/2203.06448v1, http://arxiv.org/pdf/2203.06448v1 | cs.HC |
35,704 | th | In the classic scoring rule setting, a principal incentivizes an agent to
truthfully report their probabilistic belief about some future outcome. This
paper addresses the situation when this private belief, rather than a classical
probability distribution, is instead a quantum mixed state. In the resulting
quantum scoring rule setting, the principal chooses both a scoring function and
a measurement function, and the agent responds with their reported density
matrix. Several characterizations of quantum scoring rules are presented, which
reveal a familiar structure based on convex analysis. Spectral scores, where
the measurement function is given by the spectral decomposition of the reported
density matrix, have particularly elegant structure and connect to quantum
information theory. Turning to property elicitation, eigenvectors of the belief
are elicitable, whereas eigenvalues and entropy have maximal elicitation
complexity. The paper concludes with a discussion of other quantum information
elicitation settings and connections to the literature. | Quantum Information Elicitation | 2022-03-14 23:07:47 | Rafael Frongillo | http://arxiv.org/abs/2203.07469v1, http://arxiv.org/pdf/2203.07469v1 | cs.GT |
35,705 | th | Fictitious play has recently emerged as the most accurate scalable algorithm
for approximating Nash equilibrium strategies in multiplayer games. We show
that the degree of equilibrium approximation error of fictitious play can be
significantly reduced by carefully selecting the initial strategies. We present
several new procedures for strategy initialization and compare them to the
classic approach, which initializes all pure strategies to have equal
probability. The best-performing approach, called maximin, solves a nonconvex
quadratic program to compute initial strategies and results in a nearly 75%
reduction in approximation error compared to the classic approach when 5
initializations are used. | Fictitious Play with Maximin Initialization | 2022-03-21 10:34:20 | Sam Ganzfried | http://arxiv.org/abs/2203.10774v5, http://arxiv.org/pdf/2203.10774v5 | cs.GT |
35,706 | th | Rotating savings and credit associations (roscas) are informal financial
organizations common in settings where communities have reduced access to
formal financial institutions. In a rosca, a fixed group of participants
regularly contribute sums of money to a pot. This pot is then allocated
periodically using lottery, aftermarket, or auction mechanisms. Roscas are
empirically well-studied in economics. They are, however, challenging to study
theoretically due to their dynamic nature. Typical economic analyses of roscas
stop at coarse ordinal welfare comparisons to other credit allocation
mechanisms, leaving much of roscas' ubiquity unexplained. In this work, we take
an algorithmic perspective on the study of roscas. Building on techniques from
the price of anarchy literature, we present worst-case welfare approximation
guarantees. We further experimentally compare the welfare of outcomes as key
features of the environment vary. These cardinal welfare analyses further
rationalize the prevalence of roscas. We conclude by discussing several other
promising avenues. | An Algorithmic Introduction to Savings Circles | 2022-03-23 18:27:30 | Rediet Abebe, Adam Eck, Christian Ikeokwu, Samuel Taggart | http://arxiv.org/abs/2203.12486v1, http://arxiv.org/pdf/2203.12486v1 | cs.GT |
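The pot mechanics described in this abstract are simple enough to simulate directly. Below is a minimal sketch of a lottery rosca, assuming equal per-round contributions and a uniform draw without replacement; the group size and contribution amount are illustrative, not taken from the paper.

```python
import random

def lottery_rosca(n_participants: int = 5, contribution: float = 100.0, seed: int = 0):
    """One full cycle of a lottery rosca: each round every participant pays
    `contribution` into the pot, and a uniformly random participant who has
    not yet won receives the whole pot."""
    rng = random.Random(seed)
    not_yet_won = list(range(n_participants))
    net = [0.0] * n_participants          # net cash flow per participant
    for round_idx in range(n_participants):
        pot = contribution * n_participants
        for i in range(n_participants):
            net[i] -= contribution
        winner = rng.choice(not_yet_won)
        not_yet_won.remove(winner)
        net[winner] += pot
        print(f"round {round_idx}: participant {winner} takes pot of {pot:.0f}")
    # Over a full cycle every net is zero: a rosca reallocates the *timing*
    # of access to capital, not its total amount.
    assert all(abs(x) < 1e-9 for x in net)

lottery_rosca()
```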
35,707 | th | The behavior of no-regret learning algorithms is well understood in
two-player min-max (i.e., zero-sum) games. In this paper, we investigate the
behavior of no-regret learning in min-max games with dependent strategy sets,
where the strategy of the first player constrains the behavior of the second.
Such games are best understood as sequential, i.e., min-max Stackelberg, games.
We consider two settings, one in which only the first player chooses their
actions using a no-regret algorithm while the second player best responds, and
one in which both players use no-regret algorithms. For the former case, we
show that no-regret dynamics converge to a Stackelberg equilibrium. For the
latter case, we introduce a new type of regret, which we call Lagrangian
regret, and show that if both players minimize their Lagrangian regrets, then
play converges to a Stackelberg equilibrium. We then observe that online mirror
descent (OMD) dynamics in these two settings correspond respectively to a known
nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new
simultaneous GDA-like algorithm, thereby establishing convergence of these
algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of
OMD dynamics to perturbations by investigating online min-max Stackelberg
games. We prove that OMD dynamics are robust for a large class of online
min-max games with independent strategy sets. In the dependent case, we
demonstrate the robustness of OMD dynamics experimentally by simulating them in
online Fisher markets, a canonical example of a min-max Stackelberg game with
dependent strategy sets. | Robust No-Regret Learning in Min-Max Stackelberg Games | 2022-03-26 21:12:40 | Denizalp Goktas, Jiayi Zhao, Amy Greenwald | http://arxiv.org/abs/2203.14126v2, http://arxiv.org/pdf/2203.14126v2 | cs.GT |
35,708 | th | Under what conditions do the behaviors of players, who play a game
repeatedly, converge to a Nash equilibrium? If one assumes that the players'
behavior is a discrete-time or continuous-time rule whereby the current mixed
strategy profile is mapped to the next, this becomes a problem in the theory of
dynamical systems. We apply this theory, and in particular the concepts of
chain recurrence, attractors, and Conley index, to prove a general
impossibility result: there exist games for which any dynamics is bound to have
starting points that do not end up at a Nash equilibrium. We also prove a
stronger result for $\epsilon$-approximate Nash equilibria: there are games
such that no game dynamics can converge (in an appropriate sense) to
$\epsilon$-Nash equilibria, and in fact the set of such games has positive
measure. Further numerical results demonstrate that this holds for any
$\epsilon$ between zero and $0.09$. Our results establish that, although the
notions of Nash equilibria (and its computation-inspired approximations) are
universally applicable in all games, they are also fundamentally incomplete as
predictors of long term behavior, regardless of the choice of dynamics. | Nash, Conley, and Computation: Impossibility and Incompleteness in Game Dynamics | 2022-03-26 21:27:40 | Jason Milionis, Christos Papadimitriou, Georgios Piliouras, Kelly Spendlove | http://arxiv.org/abs/2203.14129v1, http://arxiv.org/pdf/2203.14129v1 | cs.GT |
35,709 | th | The well-known notion of dimension for partial orders by Dushnik and Miller
allows one to quantify the degree of incomparability and, thus, is regarded as a
measure of complexity for partial orders. However, despite its usefulness, its
definition is somewhat disconnected from the geometrical idea of dimension,
where, essentially, the number of dimensions indicates how many real lines are
required to represent the underlying partially ordered set.
Here, we introduce a variation of the Dushnik-Miller notion of dimension that
is closer to geometry, the Debreu dimension, and show the following main
results: (i) how to construct its building blocks under some countability
restrictions, (ii) its relation to other notions of dimension in the
literature, and (iii), as an application of the above, we improve on the
classification of preordered spaces through real-valued monotones. | On a geometrical notion of dimension for partially ordered sets | 2022-03-30 16:05:10 | Pedro Hack, Daniel A. Braun, Sebastian Gottwald | http://arxiv.org/abs/2203.16272v3, http://arxiv.org/pdf/2203.16272v3 | math.CO |
35,710 | th | We introduce the notion of performative power, which measures the ability of
a firm operating an algorithmic system, such as a digital content
recommendation platform, to cause change in a population of participants. We
relate performative power to the economic study of competition in digital
economies. Traditional economic concepts struggle with identifying
anti-competitive patterns in digital platforms not least due to the complexity
of market definition. In contrast, performative power is a causal notion that
is identifiable with minimal knowledge of the market, its internals,
participants, products, or prices.
Low performative power implies that a firm can do no better than to optimize
its objective on current data. In contrast, firms of high performative power
stand to benefit from steering the population towards more profitable behavior.
We confirm in a simple theoretical model that monopolies maximize performative
power. A firm's ability to personalize increases performative power, while
competition and outside options decrease performative power. On the empirical
side, we propose an observational causal design to identify performative power
from discontinuities in how digital platforms display content. This allows us to
repurpose causal effects from various studies about digital platforms as lower
bounds on performative power. Finally, we speculate about the role that
performative power might play in competition policy and antitrust enforcement
in digital marketplaces. | Performative Power | 2022-03-31 20:49:50 | Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner | http://arxiv.org/abs/2203.17232v2, http://arxiv.org/pdf/2203.17232v2 | cs.LG |
35,711 | th | We propose a consumption-investment decision model where past consumption
peak $h$ plays a crucial role. There are two important consumption levels: the
lowest constrained level and a reference level, at which the risk aversion in
terms of consumption rate is changed. We solve this stochastic control problem
and derive the value function, optimal consumption plan, and optimal investment
strategy in semi-explicit forms. We find five important thresholds of wealth,
all as functions of $h$, and most of them are nonlinear functions. As can be
seen from numerical results and theoretical analysis, this intuitive and simple
model has significant economic implications, and there are at least three
important predictions: the marginal propensity to consume out of wealth is
generally decreasing but can be increasing for intermediate wealth levels, and
it jumps inversely proportional to the risk aversion at the reference point;
the implied relative risk aversion is roughly a smile in wealth; the welfare of
the poor is more vulnerable to wealth shocks than that of the wealthy. Moreover,
locally changing the risk aversion influences the optimal strategies globally,
revealing some risk allocation behaviors. | Consumption-investment decisions with endogenous reference point and drawdown constraint | 2022-04-01 18:45:00 | Zongxia Liang, Xiaodong Luo, Fengyi Yuan | http://arxiv.org/abs/2204.00530v2, http://arxiv.org/pdf/2204.00530v2 | q-fin.PM |
35,712 | th | This paper analyses the stability of cycles within a heteroclinic network
lying in a three-dimensional manifold formed by six cycles, for a one-parameter
model developed in the context of game theory. We show the asymptotic stability
of the network for a range of parameter values compatible with the existence of
an interior equilibrium and we describe an asymptotic technique to decide which
cycle (within the network) is visible in numerics. The technique consists of
reducing the relevant dynamics to a suitable one-dimensional map, the so-called
\emph{projective map}. Stability of the fixed points of the projective map
determines the stability of the associated cycles. The description of this new
asymptotic approach is applicable to more general types of networks and is
potentially useful in computational dynamics. | Stability of heteroclinic cycles: a new approach | 2022-04-02 15:18:36 | Telmo Peixe, Alexandre A. Rodrigues | http://arxiv.org/abs/2204.00848v1, http://arxiv.org/pdf/2204.00848v1 | math.DS |
35,713 | th | In social choice theory, Sen's value restriction condition is a sufficient
condition on individuals' ordinal preferences that guarantees a transitive
social preference under the majority decision rule. In this article, Sen's
transitivity condition is expressed in terms of inequalities and equations.
First, for a triple of alternatives, an individual's preference is represented
by a preference map, whose entries are sets containing the ranking position or
positions derived from the individual's preference over that triple of those
alternatives. Second, by using the union operation of sets and the cardinality
concept, Sen's transitivity condition is described by inequalities. Finally, by
using the membership function of sets, Sen's transitivity condition is further
described by equations. | Describing Sen's Transitivity Condition in Inequalities and Equations | 2022-04-08 05:00:26 | Fujun Hou | http://arxiv.org/abs/2204.05105v1, http://arxiv.org/pdf/2204.05105v1 | econ.TH |
35,714 | th | Network effects are the added value derived solely from the popularity of a
product in an economic market. Using agent-based models inspired by statistical
physics, we propose a minimal theory of a competitive market for (nearly)
indistinguishable goods with demand-side network effects, sold by statistically
identical sellers. With weak network effects, the model reproduces conventional
microeconomics: there is a statistical steady state of (nearly) perfect
competition. Increasing network effects, we find a phase transition to a robust
non-equilibrium phase driven by the spontaneous formation and collapse of fads
in the market. When sellers update prices sufficiently quickly, an emergent
monopolist can capture the market and undercut competition, leading to a
symmetry- and ergodicity-breaking transition. The non-equilibrium phase
simultaneously exhibits three empirically established phenomena not contained
in the standard theory of competitive markets: spontaneous price fluctuations,
persistent seller profits, and broad distributions of firm market shares. | Non-equilibrium phase transitions in competitive markets caused by network effects | 2022-04-11 21:00:00 | Andrew Lucas | http://dx.doi.org/10.1073/pnas.2206702119, http://arxiv.org/abs/2204.05314v2, http://arxiv.org/pdf/2204.05314v2 | cond-mat.stat-mech |
35,715 | th | Interacting agents receive public information at no cost and flexibly acquire
private information at a cost proportional to entropy reduction. When a
policymaker provides more public information, agents acquire less private
information, thus lowering information costs. Does more public information
raise or reduce uncertainty faced by agents? Is it beneficial or detrimental to
welfare? To address these questions, we examine the impacts of public
information on flexible information acquisition in a linear-quadratic-Gaussian
game with arbitrary quadratic material welfare. More public information raises
uncertainty if and only if the game exhibits strategic complementarity, which
can be harmful to welfare. However, when agents acquire a large amount of
information, more provision of public information increases welfare through a
substantial reduction in the cost of information. We give a necessary and
sufficient condition for welfare to increase with public information and
identify optimal public information disclosure, which is either full or partial
disclosure depending upon the welfare function and the slope of the best
response. | Impacts of Public Information on Flexible Information Acquisition | 2022-04-20 09:29:37 | Takashi Ui | http://arxiv.org/abs/2204.09250v2, http://arxiv.org/pdf/2204.09250v2 | econ.TH |
35,716 | th | We study the two-agent single-item bilateral trade. Ideally, the trade should
happen whenever the buyer's value for the item exceeds the seller's cost.
However, the classical result of Myerson and Satterthwaite showed that no
mechanism can achieve this without violating one of the Bayesian incentive
compatibility, individual rationality and weakly balanced budget conditions.
This motivates the study of approximating the
trade-whenever-socially-beneficial mechanism, in terms of the expected
gains-from-trade. Recently, Deng, Mao, Sivan, and Wang showed that the
random-offerer mechanism achieves at least a 1/8.23 approximation. We improve
this lower bound to 1/3.15 in this paper. We also determine the exact
worst-case approximation ratio of the seller-pricing mechanism assuming the
distribution of the buyer's value satisfies the monotone hazard rate property. | Improved Approximation to First-Best Gains-from-Trade | 2022-04-30 06:19:41 | Yumou Fei | http://arxiv.org/abs/2205.00140v1, http://arxiv.org/pdf/2205.00140v1 | cs.GT |
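To make the benchmark in this abstract concrete, here is a small Monte Carlo sketch. It is our illustration only: the exponential value distribution is an arbitrary monotone-hazard-rate example, the uniform cost distribution is assumed, and "seller pricing" is read as the seller posting her profit-maximizing take-it-or-leave-it price.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Buyer value v ~ Exp(1) (an MHR distribution); seller cost c ~ Uniform[0, 1].
v = rng.exponential(1.0, N)
c = rng.uniform(0.0, 1.0, N)

# First-best gains-from-trade: trade whenever v >= c.
first_best = np.maximum(v - c, 0.0).mean()

# Seller pricing: for Exp(1) values, argmax_p (p - c) * P(v >= p) is p = c + 1.
p = c + 1.0
seller_pricing = ((v - c) * (v >= p)).mean()

print(f"first-best GFT    : {first_best:.4f}")
print(f"seller-pricing GFT: {seller_pricing:.4f}")
print(f"ratio             : {seller_pricing / first_best:.3f}")
```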
35,717 | th | We all have preferences when multiple choices are available. If we insist on
satisfying our preferences only, we may suffer a loss due to conflicts with
other people's identical selections. Such a case applies when the choice cannot
be divided into multiple pieces due to the intrinsic nature of the resources.
Prior studies, such as those on the top trading cycle, examined how to conduct fair
joint decision-making while avoiding decision conflicts from the perspective of
game theory when multiple players have their own deterministic preference
profiles. However, in reality, probabilistic preferences can naturally appear
in relation to the stochastic decision-making of humans. Here, we theoretically
derive conflict-free joint decision-making that can satisfy the probabilistic
preferences of all individual players. More specifically, we mathematically
prove the conditions wherein the deviation of the resultant chance of obtaining
each choice from the individual preference profile, which we call the loss,
becomes zero, meaning that all players' satisfaction is perfectly appreciated
while avoiding decision conflicts. Furthermore, even in situations where
zero-loss conflict-free joint decision-making is unachievable, we show how to
derive joint decision-making that accomplishes the theoretical minimum loss
while ensuring conflict-free choices. Numerical demonstrations are also shown
with several benchmarks. | Optimal preference satisfaction for conflict-free joint decisions | 2022-05-02 13:31:32 | Hiroaki Shinkawa, Nicolas Chauvet, Guillaume Bachelier, André Röhm, Ryoichi Horisaki, Makoto Naruse | http://dx.doi.org/10.1155/2023/2794839, http://arxiv.org/abs/2205.00799v1, http://arxiv.org/pdf/2205.00799v1 | econ.TH |
35,718 | th | In a Fisher market, agents (users) spend a budget of (artificial) currency to
buy goods that maximize their utilities while a central planner sets prices on
capacity-constrained goods such that the market clears. However, the efficacy
of pricing schemes in achieving an equilibrium outcome in Fisher markets
typically relies on complete knowledge of users' budgets and utilities and
requires that transactions happen in a static market wherein all users are
present simultaneously.
As a result, we study an online variant of Fisher markets, wherein
budget-constrained users with privately known utility and budget parameters,
drawn i.i.d. from a distribution $\mathcal{D}$, enter the market sequentially.
In this setting, we develop an algorithm that adjusts prices solely based on
observations of user consumption, i.e., revealed preference feedback, and
achieves a regret and capacity violation of $O(\sqrt{n})$, where $n$ is the
number of users and the good capacities scale as $O(n)$. Here, our regret
measure is the optimality gap in the objective of the Eisenberg-Gale program
between an online algorithm and an offline oracle with complete information on
users' budgets and utilities. To establish the efficacy of our approach, we
show that any uniform (static) pricing algorithm, including one that sets
expected equilibrium prices with complete knowledge of the distribution
$\mathcal{D}$, cannot achieve both a regret and constraint violation of less
than $\Omega(\sqrt{n})$. While our revealed preference algorithm requires no
knowledge of the distribution $\mathcal{D}$, we show that if $\mathcal{D}$ is
known, then an adaptive variant of expected equilibrium pricing achieves
$O(\log(n))$ regret and constant capacity violation for discrete distributions.
Finally, we present numerical experiments to demonstrate the performance of our
revealed preference algorithm relative to several benchmarks. | Stochastic Online Fisher Markets: Static Pricing Limits and Adaptive Enhancements | 2022-04-27 08:03:45 | Devansh Jalota, Yinyu Ye | http://arxiv.org/abs/2205.00825v3, http://arxiv.org/pdf/2205.00825v3 | cs.GT |
35,719 | th | In many areas of industry and society, e.g., energy, healthcare, logistics,
agents collect vast amounts of data that they deem proprietary. These data
owners extract predictive information of varying quality and relevance from
data depending on quantity, inherent information content, and their own
technical expertise. Aggregating these data and heterogeneous predictive
skills, which are distributed in terms of ownership, can result in a higher
collective value for a prediction task. In this paper, we envision a platform
for improving predictions via implicit pooling of private information in return
for possible remuneration. Specifically, we design a wagering-based forecast
elicitation market platform, where a buyer intending to improve their forecasts
posts a prediction task, and sellers respond to it with their forecast reports
and wagers. This market delivers an aggregated forecast to the buyer
(pre-event) and allocates a payoff to the sellers (post-event) for their
contribution. We propose a payoff mechanism and prove that it satisfies several
desirable economic properties, including those specific to electronic
platforms. Furthermore, we discuss the properties of the forecast aggregation
operator and scoring rules to emphasize their effect on the sellers' payoff.
Finally, we provide numerical examples to illustrate the structure and
properties of the proposed market platform. | A Market for Trading Forecasts: A Wagering Mechanism | 2022-05-05 17:19:08 | Aitazaz Ali Raja, Pierre Pinson, Jalal Kazempour, Sergio Grammatico | http://arxiv.org/abs/2205.02668v2, http://arxiv.org/pdf/2205.02668v2 | econ.TH |
35,720 | th | Agents' learning from feedback shapes economic outcomes, and many economic
decision-makers today employ learning algorithms to make consequential choices.
This note shows that a widely used learning algorithm, $\varepsilon$-Greedy,
exhibits emergent risk aversion: it prefers actions with lower variance. When
presented with actions of the same expectation, under a wide range of
conditions, $\varepsilon$-Greedy chooses the lower-variance action with
probability approaching one. This emergent preference can have wide-ranging
consequences, ranging from concerns about fairness to homogenization, and holds
transiently even when the riskier action has a strictly higher expected payoff.
We discuss two methods to correct this bias. The first method requires the
algorithm to reweight data as a function of how likely the actions were to be
chosen. The second requires the algorithm to have optimistic estimates of
actions for which it has not collected much data. We show that risk-neutrality
is restored with these corrections. | Risk Preferences of Learning Algorithms | 2022-05-10 04:30:24 | Andreas Haupt, Aroon Narayanan | http://arxiv.org/abs/2205.04619v3, http://arxiv.org/pdf/2205.04619v3 | cs.LG |
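The variance-avoidance effect described in this abstract is easy to reproduce in simulation. A minimal sketch (our construction; the arm variances and parameters are illustrative): two arms share the same mean reward, and $\varepsilon$-Greedy acting on empirical means concentrates its pulls on the lower-variance arm.

```python
import numpy as np

def eps_greedy_pull_shares(T=1000, eps=0.1, n_runs=200, seed=0):
    """Two arms, identical mean 0, different variance. Returns the share of
    pulls each arm receives under epsilon-Greedy (greedy on empirical means,
    uniform exploration with probability eps)."""
    rng = np.random.default_rng(seed)
    sigmas = np.array([0.1, 1.0])          # low- vs high-variance arm
    total_pulls = np.zeros(2)
    for _ in range(n_runs):
        counts = np.ones(2)                # one forced initial pull per arm
        sums = rng.normal(0.0, sigmas)     # rewards of those initial pulls
        for _ in range(T):
            if rng.random() < eps:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(sums / counts))
            sums[a] += rng.normal(0.0, sigmas[a])
            counts[a] += 1
        total_pulls += counts
    return total_pulls / total_pulls.sum()

print(eps_greedy_pull_shares())  # most pulls go to the low-variance arm
```

The intuition matches the abstract: an unlucky early draw from the high-variance arm depresses its empirical mean for a long time, while the low-variance arm's estimate stays near the true mean.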
35,721 | th | I study a game of strategic exploration with private payoffs and public
actions in a Bayesian bandit setting. In particular, I look at cascade
equilibria, in which agents switch over time from the risky action to the
riskless action only when they become sufficiently pessimistic. I show that
these equilibria exist under some conditions and establish their salient
properties. Individual exploration in these equilibria can be more or less than
the single-agent level depending on whether the agents start out with a common
prior or not, but the most optimistic agent always underexplores. I also show
that allowing the agents to write enforceable ex-ante contracts leads the
most ex-ante optimistic agent to buy all payoff streams, providing an
explanation for the buyout of smaller start-ups by more established firms. | Social learning via actions in bandit environments | 2022-05-12 17:15:17 | Aroon Narayanan | http://arxiv.org/abs/2205.06107v1, http://arxiv.org/pdf/2205.06107v1 | econ.TH |
35,722 | th | Bayesian models of group learning have been studied in Economics since the
1970s and, more recently, in computational linguistics. The models from Economics
postulate that agents maximize utility in their communication and actions. The
Economics models do not explain the "probability matching" phenomena that are
observed in many experimental studies. To address these observations, Bayesian
models that do not formally fit into the economic utility maximization
framework were introduced. In these models individuals sample from their
posteriors in communication. In this work we study the asymptotic behavior of
such models on connected networks with repeated communication. Perhaps
surprisingly, despite the fact that individual agents are not utility
maximizers in the classical sense, we establish that the individuals ultimately
agree and furthermore show that the limiting posterior is Bayes optimal.
We explore the interpretation of our results in terms of Large Language
Models (LLMs). In the positive direction our results can be interpreted as
stating that interaction between different LLMs can lead to optimal learning.
However, we provide an example showing how misspecification may lead LLM agents
to be overconfident in their estimates. | Agreement and Statistical Efficiency in Bayesian Perception Models | 2022-05-23 21:21:07 | Yash Deshpande, Elchanan Mossel, Youngtak Sohn | http://arxiv.org/abs/2205.11561v3, http://arxiv.org/pdf/2205.11561v3 | math.ST |
35,723 | th | Demand response involves system operators using incentives to modulate
electricity consumption during peak hours or when faced with an incidental
supply shortage. However, system operators typically have imperfect information
about their customers' baselines, that is, their consumption had the incentive
been absent. The standard approach to estimate the reduction in a customer's
electricity consumption then is to estimate their counterfactual baseline.
However, this approach is not robust to estimation errors or strategic
exploitation by the customers and can potentially lead to overpayments to
customers who do not reduce their consumption and underpayments to those who
do. Moreover, optimal power consumption reductions of the customers depend on
the costs that they incur for curtailing consumption, which in general are
private knowledge of the customers, and which they could strategically
misreport in an effort to improve their own utilities even if it deteriorates
the overall system cost. The two-stage mechanism proposed in this paper
circumvents the aforementioned issues. In the day-ahead market, the
participating loads are required to submit only a probabilistic description of
their next-day consumption and costs to the system operator for day-ahead
planning. It is only in real-time, if and when called upon for demand response,
that the loads are required to report their baselines and costs. They receive
credits for reductions below their reported baselines. The mechanism for
calculating the credits guarantees incentive compatibility of truthful
reporting of the probability distribution in the day-ahead market and truthful
reporting of the baseline and cost in real-time. The mechanism can be viewed as
an extension of the celebrated Vickrey-Clarke-Groves mechanism augmented with a
carefully crafted second-stage penalty for deviations from the day-ahead bids. | A Two-Stage Mechanism for Demand Response Markets | 2022-05-24 20:44:47 | Bharadwaj Satchidanandan, Mardavij Roozbehani, Munther A. Dahleh | http://arxiv.org/abs/2205.12236v2, http://arxiv.org/pdf/2205.12236v2 | eess.SY |
35,724 | th | Stereotypes are generalized beliefs about groups of people, which are used to
make decisions and judgments about them. Although such heuristics can be useful
when decisions must be made quickly, or when information is lacking, they can
also serve as the basis for prejudice and discrimination. In this paper we
study the evolution of stereotypes through group reciprocity. We characterize
the warmth of a stereotype as the willingness to cooperate with an individual
based solely on the identity of the group they belong to. We show that when
stereotypes are coarse, such group reciprocity is less likely to evolve, and
stereotypes tend to be negative. We also show that, even when stereotypes are
broadly positive, individuals are often overly pessimistic about the
willingness of those they stereotype to cooperate. We then show that the
tendency for stereotyping itself to evolve is driven by the costs of cognition,
so that more people are stereotyped with greater coarseness as costs increase.
Finally we show that extrinsic "shocks", in which the benefits of cooperation
are suddenly reduced, can cause stereotype warmth and judgement bias to turn
sharply negative, consistent with the view that economic and other crises are
drivers of out-group animosity. | Group reciprocity and the evolution of stereotyping | 2022-05-25 13:50:25 | Alexander J. Stewart, Nichola Raihani | http://arxiv.org/abs/2205.12652v1, http://arxiv.org/pdf/2205.12652v1 | physics.soc-ph |
35,725 | th | Proportionality is an attractive fairness concept that has been applied to a
range of problems including the facility location problem, a classic problem in
social choice. In our work, we propose a concept called Strong Proportionality,
which ensures that when there are two groups of agents at different locations,
both groups incur the same total cost. We show that although Strong
Proportionality is a well-motivated and basic axiom, there is no deterministic
strategyproof mechanism satisfying the property. We then identify a randomized
mechanism called Random Rank (which uniformly selects a number $k$ between $1$
to $n$ and locates the facility at the $k$'th highest agent location) which
satisfies Strong Proportionality in expectation. Our main theorem characterizes
Random Rank as the unique mechanism that achieves universal truthfulness,
universal anonymity, and Strong Proportionality in expectation among all
randomized mechanisms. Finally, we show via the AverageOrRandomRank mechanism
that even stronger ex-post fairness guarantees can be achieved by weakening
universal truthfulness to strategyproofness in expectation. | Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism | 2022-05-30 03:51:57 | Haris Aziz, Alexander Lam, Mashbat Suzuki, Toby Walsh | http://arxiv.org/abs/2205.14798v2, http://arxiv.org/pdf/2205.14798v2 | cs.GT |
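The Random Rank mechanism is fully specified in this abstract and fits in a few lines; the sketch below is a direct transcription (variable names ours).

```python
import random

def random_rank(locations, rng=random):
    """Random Rank, as described in the abstract: draw k uniformly from
    {1, ..., n} and locate the facility at the k-th highest agent location."""
    n = len(locations)
    k = rng.randint(1, n)                      # uniform over 1..n, inclusive
    return sorted(locations, reverse=True)[k - 1]

print(random_rank([0.1, 0.4, 0.9]))
```

Since $k$ is uniform, the facility lands on each of the $n$ reported locations (when distinct) with probability $1/n$, which is what drives the mechanism's in-expectation fairness guarantees.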
35,758 | th | In order to identify expertise, forecasters should not be tested by their
calibration score, which can always be made arbitrarily small, but rather by
their Brier score. The Brier score is the sum of the calibration score and the
refinement score; the latter measures how good the sorting into bins with the
same forecast is, and thus attests to "expertise." This raises the question of
whether one can gain calibration without losing expertise, which we refer to as
"calibeating." We provide an easy way to calibeat any forecast, by a
deterministic online procedure. We moreover show that calibeating can be
achieved by a stochastic procedure that is itself calibrated, and then extend
the results to simultaneously calibeating multiple procedures, and to
deterministic procedures that are continuously calibrated. | "Calibeating": Beating Forecasters at Their Own Game | 2022-09-11 18:14:17 | Dean P. Foster, Sergiu Hart | http://arxiv.org/abs/2209.04892v2, http://arxiv.org/pdf/2209.04892v2 | econ.TH |
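The decomposition referenced here (Brier = calibration + refinement) can be checked numerically. A minimal sketch, assuming binary outcomes and forecasts that take finitely many values so the bins are well defined:

```python
import numpy as np

def brier_decomposition(forecasts, outcomes):
    """Split the Brier score into calibration + refinement over the bins
    of identical forecast values."""
    f = np.asarray(forecasts, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    brier = np.mean((f - y) ** 2)
    calibration = refinement = 0.0
    for v in np.unique(f):
        mask = f == v
        w = mask.mean()              # fraction of forecasts in this bin
        ybar = y[mask].mean()        # empirical frequency in the bin
        calibration += w * (v - ybar) ** 2
        refinement += w * ybar * (1 - ybar)
    return brier, calibration, refinement

b, c, r = brier_decomposition([0.3, 0.3, 0.7, 0.7, 0.7], [0, 1, 1, 1, 0])
print(b, c + r)  # the two agree: Brier = calibration + refinement
```

The identity holds because, within a bin with forecast $f_b$ and empirical frequency $\bar{y}_b$, the mean squared error is $(f_b-\bar{y}_b)^2 + \bar{y}_b(1-\bar{y}_b)$; the second term is exactly the bin's contribution to the refinement score.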
35,726 | th | In social choice theory, anonymity (all agents being treated equally) and
neutrality (all alternatives being treated equally) are widely regarded as
``minimal demands'' and ``uncontroversial'' axioms of equity and fairness.
However, the ANR impossibility -- there is no voting rule that satisfies
anonymity, neutrality, and resolvability (always choosing one winner) -- holds
even in the simple setting of two alternatives and two agents. How to design
voting rules that optimally satisfy anonymity, neutrality, and resolvability
remains an open question.
We address the optimal design question for a wide range of preferences and
decisions that include ranked lists and committees. Our conceptual contribution
is a novel and strong notion of most equitable refinements that optimally
preserves anonymity and neutrality for any irresolute rule that satisfies the
two axioms. Our technical contributions are twofold. First, we characterize the
conditions for the ANR impossibility to hold under general settings, especially
when the number of agents is large. Second, we propose the
most-favorable-permutation (MFP) tie-breaking to compute a most equitable
refinement and design a polynomial-time algorithm to compute MFP when agents'
preferences are full rankings. | Most Equitable Voting Rules | 2022-05-30 06:56:54 | Lirong Xia | http://arxiv.org/abs/2205.14838v3, http://arxiv.org/pdf/2205.14838v3 | cs.GT |
35,727 | th | We study the problem of online learning in competitive settings in the
context of two-sided matching markets. In particular, one side of the market,
the agents, must learn about their preferences over the other side, the firms,
through repeated interaction while competing with other agents for successful
matches. We propose a class of decentralized, communication- and
coordination-free algorithms that agents can use to reach their stable match
in structured matching markets. In contrast to prior works, the proposed
algorithms make decisions based solely on an agent's own history of play and
requires no foreknowledge of the firms' preferences. Our algorithms are
constructed by splitting up the statistical problem of learning one's
preferences, from noisy observations, from the problem of competing for firms.
We show that under realistic structural assumptions on the underlying
preferences of the agents and firms, the proposed algorithms incur a regret
which grows at most logarithmically in the time horizon. Our results show that,
in the case of matching markets, competition need not drastically affect the
performance of decentralized, communication- and coordination-free online
learning algorithms. | Decentralized, Communication- and Coordination-free Learning in Structured Matching Markets | 2022-06-06 07:08:04 | Chinmay Maheshwari, Eric Mazumdar, Shankar Sastry | http://arxiv.org/abs/2206.02344v1, http://arxiv.org/pdf/2206.02344v1 | cs.AI |
35,728 | th | Aggregating signals from a collection of noisy sources is a fundamental
problem in many domains including crowd-sourcing, multi-agent planning, sensor
networks, signal processing, voting, ensemble learning, and federated learning.
The core question is how to aggregate signals from multiple sources (e.g.
experts) in order to reveal an underlying ground truth. While a full answer
depends on the type of signal, correlation of signals, and desired output, a
problem common to all of these applications is that of differentiating sources
based on their quality and weighting them accordingly. It is often assumed that
this differentiation and aggregation is done by a single, accurate central
mechanism or agent (e.g. judge). We complicate this model in two ways. First,
we investigate the setting with both a single judge, and one with multiple
judges. Second, given this multi-agent interaction of judges, we investigate
various constraints on the judges' reporting space. We build on known results
for the optimal weighting of experts and prove that an ensemble of sub-optimal
mechanisms can perform optimally under certain conditions. We then show
empirically that the ensemble approximates the performance of the optimal
mechanism under a broader range of conditions. | Towards Group Learning: Distributed Weighting of Experts | 2022-06-03 03:29:31 | Ben Abramowitz, Nicholas Mattei | http://arxiv.org/abs/2206.02566v1, http://arxiv.org/pdf/2206.02566v1 | cs.LG |
35,729 | th | Democratization of AI involves training and deploying machine learning models
across heterogeneous and potentially massive environments. Diversity of data
opens up a number of possibilities to advance AI systems, but also introduces
pressing concerns such as privacy, security, and equity that require special
attention. This work shows that it is theoretically impossible to design a
rational learning algorithm that has the ability to successfully learn across
heterogeneous environments, which we decoratively call collective intelligence
(CI). By representing learning algorithms as choice correspondences over a
hypothesis space, we are able to axiomatize them with essential properties.
Unfortunately, the only feasible algorithm compatible with all of the axioms is
the standard empirical risk minimization (ERM), which learns arbitrarily from a
single environment. Our impossibility result reveals informational
incomparability between environments as one of the foremost obstacles for
researchers who design novel algorithms that learn from multiple environments,
which sheds light on prerequisites for success in critical areas of machine
learning such as out-of-distribution generalization, federated learning,
algorithmic fairness, and multi-modal learning. | Impossibility of Collective Intelligence | 2022-06-05 10:58:39 | Krikamol Muandet | http://arxiv.org/abs/2206.02786v1, http://arxiv.org/pdf/2206.02786v1 | cs.LG |
35,730 | th | In physics, the wavefunctions of bosonic particles collapse when the system
undergoes a Bose-Einstein condensation. In game theory, the strategy of an
agent describes the probability to engage in a certain course of action.
Strategies are expected to differ in competitive situations, namely when there
is a penalty to do the same as somebody else. We study what happens when agents
are interested how they fare not only in absolute terms, but also relative to
others. This preference, denoted envy, is shown to induce the emergence of
distinct social classes via a collective strategy condensation transition.
Members of the lower class pursue identical strategies, in analogy to the
Bose-Einstein condensation, with the upper class remaining individualistic. | Collective strategy condensation towards class-separated societies | 2022-06-07 19:14:16 | Claudius Gros | http://dx.doi.org/10.1140/epjb/s10051-022-00362-5, http://arxiv.org/abs/2206.03421v1, http://arxiv.org/pdf/2206.03421v1 | econ.TH |
35,757 | th | In this paper, we consider a discrete-time Stackelberg mean field game with a
finite number of leaders, a finite number of major followers and an infinite
number of minor followers. The leaders and the followers each observe types
privately that evolve as conditionally independent controlled Markov processes.
The leaders are of the "Stackelberg" kind, which means they commit to a dynamic
policy. We consider two types of followers: major and minor, each with a
private type. All the followers best respond to the policies of the Stackelberg
leaders and each other. Knowing that the followers would play a mean field game
(with major players) based on their policy, each (Stackelberg) leader chooses a
policy that maximizes her reward. We refer to the resulting outcome as a
Stackelberg mean field equilibrium with multiple leaders (SMFE-ML). In this
paper, we provide a master equation of this game that allows one to compute all
SMFE-ML. We further extend this notion to the case in which there is an infinite
number of leaders. | Master equation of discrete-time Stackelberg mean field games with multiple leaders | 2022-09-07 17:30:45 | Deepanshu Vasal | http://arxiv.org/abs/2209.03186v1, http://arxiv.org/pdf/2209.03186v1 | eess.SY |
35,731 | th | The Slutsky equation, central in consumer choice theory, is derived from the
usual hypotheses underlying most standard models in Economics, such as full
rationality, homogeneity, and absence of interactions. We present a statistical
physics framework that allows us to relax such assumptions. We first derive a
general fluctuation-response formula that relates the Slutsky matrix to
spontaneous fluctuations of consumption rather than to response to changing
prices and budget. We then show that, within our hypotheses, the symmetry of
the Slutsky matrix remains valid even when agents are only boundedly rational
but non-interacting. We then propose a model where agents are influenced by the
choice of others, leading to a phase transition beyond which consumption is
dominated by herding (or "fashion") effects. In this case, the individual
Slutsky matrix is no longer symmetric, even for fully rational agents. The
vicinity of the transition features a peak in asymmetry. | Bounded Rationality and Animal Spirits: A Fluctuation-Response Approach to Slutsky Matrices | 2022-06-09 15:51:12 | Jerome Garnier-Brun, Jean-Philippe Bouchaud, Michael Benzaquen | http://dx.doi.org/10.1088/2632-072X/acb0a7, http://arxiv.org/abs/2206.04468v1, http://arxiv.org/pdf/2206.04468v1 | econ.TH |
35,732 | th | Islamic and capitalist economies have several differences, the most
fundamental being that the Islamic economy is characterized by the prohibition
of interest (riba) and speculation (gharar) and the enforcement of
Shariah-compliant profit-loss sharing (mudaraba, murabaha, salam, etc.) and
wealth redistribution (waqf, sadaqah, and zakat). In this study, I apply new
econophysics models of wealth exchange and redistribution to quantitatively
compare these characteristics to those of capitalism and evaluate wealth
distribution and disparity using a simulation. Specifically, regarding
exchange, I propose a loan interest model representing finance capitalism and
riba and a joint venture model representing shareholder capitalism and
mudaraba; regarding redistribution, I create a transfer model representing
inheritance tax and waqf. As exchanges are repeated from an initial uniform
distribution of wealth, wealth distribution approaches a power-law distribution
more quickly for the loan interest than the joint venture model; and the Gini
index, representing disparity, rapidly increases. The joint venture model's
Gini index increases more slowly, but eventually, the wealth distribution in
both models becomes a delta distribution, and the Gini index gradually
approaches 1. Next, when both models are combined with the transfer model to
redistribute wealth in every given period, the loan interest model has a larger
Gini index than the joint venture model, but both converge to a Gini index of
less than 1. These results quantitatively reveal that in the Islamic economy,
disparity is restrained by prohibiting riba and promoting reciprocal exchange
in mudaraba and redistribution through waqf. Comparing Islamic and capitalist
economies provides insights into the benefits of economically embracing the
ethical practice of mutual aid and suggests guidelines for an alternative to
capitalism. | Islamic and capitalist economies: Comparison using econophysics models of wealth exchange and redistribution | 2022-06-11 09:47:40 | Takeshi Kato | http://dx.doi.org/10.1371/journal.pone.0275113, http://arxiv.org/abs/2206.05443v2, http://arxiv.org/pdf/2206.05443v2 | econ.TH |
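The Gini index that tracks disparity in these simulations is standard, and the overall loop structure (repeated pairwise exchange plus a periodic transfer step) is simple to sketch. In the code below the pairwise exchange rule is a generic placeholder, not the paper's loan-interest or joint-venture dynamics, and all parameters are illustrative.

```python
import numpy as np

def gini(w):
    """Gini index: 0 = perfect equality, -> 1 = one agent holds everything."""
    w = np.sort(np.asarray(w, dtype=float))
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1).dot(w) / (n * w.sum())

rng = np.random.default_rng(0)
n, steps = 1000, 100_000
wealth = np.ones(n)                      # uniform initial distribution

for t in range(steps):
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    # Placeholder pairwise exchange: a fraction of the poorer agent's
    # wealth moves to a random winner (NOT the paper's exchange rules).
    stake = 0.1 * min(wealth[i], wealth[j])
    if rng.random() < 0.5:
        wealth[i] += stake; wealth[j] -= stake
    else:
        wealth[i] -= stake; wealth[j] += stake
    if t % (10 * n) == 0:                # periodic redistribution (transfer)
        tax = 0.05 * wealth
        wealth = wealth - tax + tax.mean()

print(f"Gini with periodic redistribution: {gini(wealth):.3f}")
```

Without the transfer step, repeated exchange of this kind concentrates wealth and pushes the Gini index toward 1, mirroring the abstract's contrast between pure exchange and exchange with redistribution.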
35,733 | th | We formalize a framework for coordinating funding and selecting projects, the
costs of which are shared among agents with quasi-linear utility functions and
individual budgets. Our model contains the classical discrete participatory
budgeting model as a special case, while capturing other useful scenarios. We
propose several important axioms and objectives and study how well they can be
simultaneously satisfied. We show that whereas welfare maximization admits an
FPTAS, welfare maximization subject to a natural and very weak participation
requirement leads to a strong inapproximability. This result is bypassed if we
consider some natural restricted valuations, namely laminar single-minded
valuations and symmetric valuations. Our analysis for the former restriction
leads to the discovery of a new class of tractable instances for the Set Union
Knapsack problem, a classical problem in combinatorial optimization. | Coordinating Monetary Contributions in Participatory Budgeting | 2022-06-13 11:27:16 | Haris Aziz, Sujit Gujar, Manisha Padala, Mashbat Suzuki, Jeremy Vollen | http://arxiv.org/abs/2206.05966v3, http://arxiv.org/pdf/2206.05966v3 | cs.GT |
35,734 | th | We consider the diffusion of two alternatives in social networks using a
game-theoretic approach. Each individual plays a coordination game with its
neighbors repeatedly and decides which to adopt. As products are used in
conjunction with others and through repeated interactions, individuals are more
interested in their long-term benefits and tend to show trust to others to
maximize their long-term utility by choosing a suboptimal option with respect
to instantaneous payoff. To capture such trust behavior, we deploy
limited-trust equilibrium (LTE) in diffusion process. We analyze the
convergence of emerging dynamics to equilibrium points using mean-field
approximation and study the equilibrium state and the convergence rate of
diffusion using absorption probability and expected absorption time of a
reduced-size absorbing Markov chain. We also show that the diffusion model on
LTE under the best-response strategy can be converted to the well-known linear
threshold model. Simulation results show that when agents behave trustworthy,
their long-term utility will increase significantly compared to the case when
they are solely self-interested. Moreover, the Markov chain analysis provides a
good estimate of convergence properties over random networks. | Limited-Trust in Diffusion of Competing Alternatives over Social Networks | 2022-06-13 20:05:31 | Vincent Leon, S. Rasoul Etesami, Rakesh Nagi | http://arxiv.org/abs/2206.06318v3, http://arxiv.org/pdf/2206.06318v3 | cs.SI |
35,735 | th | We relax the strong rationality assumption for the agents in the paradigmatic
Kyle model of price formation, thereby reconciling the framework of
asymmetrically informed traders with the Adaptive Market Hypothesis, where
agents use inductive rather than deductive reasoning. Building on these ideas,
we propose a stylised model able to account parsimoniously for a rich
phenomenology, ranging from excess volatility to volatility clustering. While
characterising the excess-volatility dynamics, we provide a microfoundation for
GARCH models. Volatility clustering is shown to be related to the self-excited
dynamics induced by traders' behaviour, and does not rely on clustered
fundamental innovations. Finally, we propose an extension to account for the
fragile dynamics exhibited by real markets during flash crashes. | Microfounding GARCH Models and Beyond: A Kyle-inspired Model with Adaptive Agents | 2022-06-14 14:26:26 | Michele Vodret, Iacopo Mastromatteo, Bence Toth, Michael Benzaquen | http://arxiv.org/abs/2206.06764v1, http://arxiv.org/pdf/2206.06764v1 | q-fin.TR |
35,736 | th | In tug-of-war, two players compete by moving a counter along edges of a
graph, each winning the right to move at a given turn according to the flip of
a possibly biased coin. The game ends when the counter reaches the boundary, a
fixed subset of the vertices, at which point one player pays the other an
amount determined by the boundary vertex. Economists and mathematicians have
independently studied tug-of-war for many years, focussing respectively on
resource-allocation forms of the game, in which players iteratively spend
precious budgets in an effort to influence the bias of the coins that determine
the turn victors; and on PDE arising in fine mesh limits of the constant-bias
game in a Euclidean setting.
In this article, we offer a mathematical treatment of a class of tug-of-war
games with allocated budgets: each player is initially given a fixed budget
which she draws on throughout the game to offer a stake at the start of each
turn, and her probability of winning the turn is the ratio of her stake and the
sum of the two stakes. We consider the game played on a tree, with boundary
being the set of leaves, and the payment function being the indicator of a
single distinguished leaf. We find the game value and the essentially unique
Nash equilibrium of a leisurely version of the game, in which the move at any
given turn is cancelled with constant probability after stakes have been
placed. We show that the ratio of the players' remaining budgets is maintained
at its initial value $\lambda$; game value is a biased infinity harmonic
function; and the proportion of remaining budget that players stake at a given
turn is given in terms of the spatial gradient and the $\lambda$-derivative of
game value. We also indicate examples in which the solution takes a different
form in the non-leisurely game. | Stake-governed tug-of-war and the biased infinity Laplacian | 2022-06-16 20:00:49 | Alan Hammond, Gábor Pete | http://arxiv.org/abs/2206.08300v2, http://arxiv.org/pdf/2206.08300v2 | math.PR |
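The turn rule in this abstract is concrete enough to simulate: each turn the players post stakes drawn down from their budgets, and the turn goes to a player with probability proportional to her stake. In the sketch below, both players stake the same fixed fraction of their remaining budgets each turn (an illustrative policy of ours, not the equilibrium strategy derived in the paper), which makes the budget ratio $\lambda$ invariant, as the abstract states for equilibrium play.

```python
import random

def tug_of_war(b1=1.0, b2=2.0, frac=0.2, start=3, right_leaf=6, seed=0):
    """Counter moves on the path 0..right_leaf. Player 1 pushes right,
    player 2 left; each turn both stake `frac` of their remaining budget,
    the turn winner is drawn with probability proportional to the stakes,
    and both stakes are drawn down from the budgets."""
    rng = random.Random(seed)
    x = start
    while 0 < x < right_leaf:
        s1, s2 = frac * b1, frac * b2
        x += 1 if rng.random() < s1 / (s1 + s2) else -1
        b1, b2 = b1 - s1, b2 - s2
    print(f"absorbed at {x}; budget ratio b1/b2 = {b1 / b2:.3f} "
          f"(initial lambda = {1.0 / 2.0:.3f})")

tug_of_war()
```

Under this common-fraction policy each budget shrinks by the same factor $(1-\mathrm{frac})$ per turn, so the per-turn win probability stays at $\lambda/(1+\lambda)$ throughout the game.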
35,737 | th | To take advantage of strategy commitment, a useful tactic of playing games, a
leader must learn enough information about the follower's payoff function.
However, this leaves the follower a chance to provide fake information and
influence the final game outcome. Through a carefully contrived payoff function
misreported to the learning leader, the follower may induce an outcome that
benefits him more, compared to the ones when he truthfully behaves.
We study the follower's optimal manipulation via such strategic behaviors in
extensive-form games. Followers' different attitudes are taken into account. An
optimistic follower maximizes his true utility among all game outcomes that can
be induced by some payoff function. A pessimistic follower only considers
misreporting payoff functions that induce a unique game outcome. For all the
settings considered in this paper, we characterize all the possible game
outcomes that can be induced successfully. We show that it is polynomial-time
tractable for the follower to find the optimal way of misreporting his private
payoff information. Our work completely resolves this follower's optimal
manipulation problem on an extensive-form game tree. | Optimal Private Payoff Manipulation against Commitment in Extensive-form Games | 2022-06-27 11:50:28 | Yurong Chen, Xiaotie Deng, Yuhao Li | http://arxiv.org/abs/2206.13119v2, http://arxiv.org/pdf/2206.13119v2 | cs.GT |
35,738 | th | We study the diffusion of a true and a false message when agents are (i)
biased towards one of the messages and (ii) able to inspect messages for
veracity. Inspection of messages implies that a higher rumor prevalence may
increase the prevalence of the truth. We employ this result to discuss how a
planner may optimally choose information inspection rates of the population. We
find that a planner who aims to maximize the prevalence of the truth may find
it optimal to allow rumors to circulate. | Optimal Inspection of Rumors in Networks | 2022-07-05 09:29:46 | Luca Paolo Merlino, Nicole Tabasso | http://arxiv.org/abs/2207.01830v1, http://arxiv.org/pdf/2207.01830v1 | econ.TH |
35,739 | th | In today's online advertising markets, a crucial requirement for an
advertiser is to control her total expenditure within a time horizon under some
budget. Among various budget control methods, throttling has emerged as a
popular choice, managing an advertiser's total expenditure by selecting only a
subset of auctions to participate in. This paper provides a theoretical
panorama of a single advertiser's dynamic budget throttling process in repeated
second-price auctions. We first establish a lower bound on the regret and an
upper bound on the asymptotic competitive ratio for any throttling algorithm,
respectively, when the advertiser's values are stochastic and adversarial.
Regarding the algorithmic side, we propose the OGD-CB algorithm, which
guarantees a near-optimal expected regret with stochastic values. On the other
hand, when values are adversarial, we prove that this algorithm also reaches
the upper bound on the asymptotic competitive ratio. We further compare
throttling with pacing, another widely adopted budget control method, in
repeated second-price auctions. In the stochastic case, we demonstrate that
pacing is generally superior to throttling for the advertiser, supporting the
well-known result that pacing is asymptotically optimal in this scenario.
However, in the adversarial case, we give an exciting result indicating that
throttling is also an asymptotically optimal dynamic bidding strategy. Our
results bridge the gaps in the theoretical study of throttling in repeated
auctions and comprehensively characterize the power of this popular
budget-smoothing strategy. | Dynamic Budget Throttling in Repeated Second-Price Auctions | 2022-07-11 11:12:02 | Zhaohua Chen, Chang Wang, Qian Wang, Yuqi Pan, Zhuming Shi, Zheng Cai, Yukun Ren, Zhihua Zhu, Xiaotie Deng | http://arxiv.org/abs/2207.04690v7, http://arxiv.org/pdf/2207.04690v7 | cs.GT |
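The following sketch illustrates the throttling mechanics the abstract describes: bidding truthfully but participating in only a subset of second-price auctions so as to respect a budget. The adaptive participation probability is a simple pacing-style feedback heuristic, not the paper's OGD-CB algorithm, and values and competing bids are assumed i.i.d. uniform.

```python
import random

def throttled_auctions(T=10000, budget=500.0, seed=0):
    """Naive throttling in repeated second-price auctions: bid truthfully, but
    participate only with an adaptive probability p so that spend tracks the
    budget pace. A feedback heuristic, not the paper's OGD-CB algorithm."""
    rng = random.Random(seed)
    spend = utility = 0.0
    p = 1.0  # participation probability
    for t in range(1, T + 1):
        v = rng.random()  # advertiser's value (assumed i.i.d. uniform)
        d = rng.random()  # highest competing bid (assumed i.i.d. uniform)
        if rng.random() < p and v > d and spend + d <= budget:
            spend += d         # second-price payment
            utility += v - d
        pace = budget * t / T  # budget that "should" be spent by round t
        p = min(1.0, max(0.05, p + 0.01 * (pace - spend)))
    return spend, utility

print("spend = %.1f, utility = %.1f" % throttled_auctions())
```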
35,740 | th | This paper aims to formulate and study the inverse problem of non-cooperative
linear quadratic games: Given a profile of control strategies, find cost
parameters for which this profile of control strategies is Nash. We formulate
the problem as a leader-followers problem, where a leader aims to implant a
desired profile of control strategies among selfish players. In this paper, we
leverage frequency-domain techniques to develop a necessary and sufficient
condition on the existence of cost parameters for a given profile of
stabilizing control strategies to be Nash under a given linear system. The
necessary and sufficient condition includes the circle criterion for each
player and a rank condition related to the transfer function of each player.
The condition provides an analytical method to check the existence of such cost
parameters, while previous studies need to solve a convex feasibility problem
numerically to answer the same question. We develop an identity in
frequency-domain representation to characterize the cost parameters, which we
refer to as the Kalman equation. The Kalman equation reduces redundancy in the
time-domain analysis that involves solving a convex feasibility problem. Using
the Kalman equation, we also show the leader can enforce the same Nash profile
by applying penalties on the shared state instead of penalizing the player for
other players' actions to avoid the impression of unfairness. | The Inverse Problem of Linear-Quadratic Differential Games: When is a Control Strategies Profile Nash? | 2022-07-12 07:26:55 | Yunhan Huang, Tao Zhang, Quanyan Zhu | http://arxiv.org/abs/2207.05303v2, http://arxiv.org/pdf/2207.05303v2 | math.OC |
35,741 | th | The emerging technology of Vehicle-to-Vehicle (V2V) communication over
vehicular ad hoc networks promises to improve road safety by allowing vehicles
to autonomously warn each other of road hazards. However, research on other
transportation information systems has shown that informing only a subset of
drivers of road conditions may have a perverse effect of increasing congestion.
In the context of a simple (yet novel) model of V2V hazard information sharing,
we ask whether partial adoption of this technology can similarly lead to
undesirable outcomes. In our model, drivers individually choose how recklessly
to behave as a function of information received from other V2V-enabled cars,
and the resulting aggregate behavior influences the likelihood of accidents
(and thus the information propagated by the vehicular network). We fully
characterize the game-theoretic equilibria of this model using our new
equilibrium concept. Our model indicates that for a wide range of the parameter
space, V2V information sharing surprisingly increases the equilibrium frequency
of accidents relative to no V2V information sharing, and that it may increase
equilibrium social cost as well. | Information Design for Vehicle-to-Vehicle Communication | 2022-07-13 00:33:06 | Brendan T. Gould, Philip N. Brown | http://arxiv.org/abs/2207.06411v1, http://arxiv.org/pdf/2207.06411v1 | cs.GT |
35,766 | th | Balancing pandemic control and economics is challenging, as numerical
analyses that assume specific economic conditions make it difficult to obtain
general, predictable findings. In this study, we analytically demonstrate how
adopting timely moderate measures helps reconcile medical effectiveness and
economic impact, and explain it as a consequence of the general finding of
``economic irreversibility" by comparing it with thermodynamics. A general
inequality provides the guiding principles on how such measures should be
implemented. The methodology leading to the exact solution is a novel
theoretical contribution to the econophysics literature. | Timely pandemic countermeasures reduce both health damage and economic loss: Generality of the exact solution | 2022-09-16 10:49:56 | Tsuyoshi Hondou | http://dx.doi.org/10.7566/JPSJ.92.043801, http://arxiv.org/abs/2209.12805v2, http://arxiv.org/pdf/2209.12805v2 | physics.soc-ph |
35,742 | th | We study fairness in social choice settings under single-peaked preferences.
Construction and characterization of social choice rules in the single-peaked
domain has been extensively studied in prior works. In fact, in the
single-peaked domain, it is known that unanimous and strategy-proof
deterministic rules have to be min-max rules and those that also satisfy
anonymity have to be median rules. Further, random social choice rules
satisfying these properties have been shown to be convex combinations of
respective deterministic rules. We non-trivially add to this body of results by
including fairness considerations in social choice. Our study directly
addresses fairness for groups of agents. To study group-fairness, we consider
an existing partition of the agents into logical groups, based on natural
attributes such as gender, race, and location. To capture fairness within each
group, we introduce the notion of group-wise anonymity. To capture fairness
across the groups, we propose a weak notion as well as a strong notion of
fairness. The proposed fairness notions turn out to be natural generalizations
of existing individual-fairness notions and moreover provide non-trivial
outcomes for strict ordinal preferences, unlike the existing group-fairness
notions. We provide two separate characterizations of random social choice
rules that satisfy group-fairness: (i) direct characterization (ii) extreme
point characterization (as convex combinations of fair deterministic social
choice rules). We also explore the special case where there are no groups and
provide sharper characterizations of rules that achieve individual-fairness. | Characterization of Group-Fair Social Choice Rules under Single-Peaked Preferences | 2022-07-16 20:12:54 | Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy, Yadati Narahari | http://arxiv.org/abs/2207.07984v1, http://arxiv.org/pdf/2207.07984v1 | cs.GT |
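The median rules mentioned in the abstract admit a compact implementation. A minimal sketch of Moulin's classical construction on the single-peaked domain (the phantom positions below are illustrative choices):

```python
def median_rule(peaks, phantoms):
    """Moulin-style median rule on the single-peaked domain: return the median
    of the n reported peaks together with n-1 fixed phantom peaks. Such rules
    are anonymous, unanimous, and strategy-proof on this domain."""
    assert len(phantoms) == len(peaks) - 1
    pool = sorted(list(peaks) + list(phantoms))
    return pool[len(pool) // 2]  # median of the 2n-1 points

# Three agents with peaks in [0, 1]; phantom positions are illustrative.
print(median_rule([0.1, 0.4, 0.9], [0.25, 0.75]))  # -> 0.4
```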
35,743 | th | Cryptographic Self-Selection is a subroutine used to select a leader for
modern proof-of-stake consensus protocols, such as Algorand. In cryptographic
self-selection, each round $r$ has a seed $Q_r$. In round $r$, each account
owner is asked to digitally sign $Q_r$, hash their digital signature to produce
a credential, and then broadcast this credential to the entire network. A
publicly-known function scores each credential in a manner so that the
distribution of the lowest scoring credential is identical to the distribution
of stake owned by each account. The user who broadcasts the lowest-scoring
credential is the leader for round $r$, and their credential becomes the seed
$Q_{r+1}$. Such protocols leave open the possibility of a selfish-mining style
attack: a user who owns multiple accounts that each produce low-scoring
credentials in round $r$ can selectively choose which ones to broadcast in
order to influence the seed for round $r+1$. Indeed, the user can pre-compute
their credentials for round $r+1$ for each potential seed, and broadcast only
the credential (among those with a low enough score to be the leader) that
produces the most favorable seed.
We consider an adversary who wishes to maximize the expected fraction of
rounds in which an account they own is the leader. We show such an adversary
always benefits from deviating from the intended protocol, regardless of the
fraction of the stake controlled. We characterize the optimal strategy; first
by proving the existence of optimal positive recurrent strategies whenever the
adversary owns less than $38\%$ of the stake. Then, we provide a Markov
Decision Process formulation to compute the optimal strategy. | Optimal Strategic Mining Against Cryptographic Self-Selection in Proof-of-Stake | 2022-07-16 21:28:07 | Matheus V. X. Ferreira, Ye Lin Sally Hahn, S. Matthew Weinberg, Catherine Yu | http://dx.doi.org/10.1145/3490486.3538337, http://arxiv.org/abs/2207.07996v1, http://arxiv.org/pdf/2207.07996v1 | cs.CR |
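A toy sketch of the credential mechanics described above, with a plain hash standing in for the digital signatures (verifiable random functions) that real protocols require, and with stake-weighted scoring omitted. It shows how the winning credential seeds the next round, which is exactly the lever the described manipulation exploits.

```python
import hashlib

def credential(seed: bytes, key: bytes) -> float:
    """Toy credential score in [0, 1). Real protocols hash a *digital
    signature* of the seed; a plain hash is used here only as a sketch."""
    h = hashlib.sha256(seed + key).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def elect_leader(seed: bytes, accounts: dict):
    """Lowest-scoring credential wins the round; the winning credential
    becomes the next round's seed -- the lever the manipulation exploits."""
    scores = {name: credential(seed, key) for name, key in accounts.items()}
    leader = min(scores, key=scores.get)
    next_seed = hashlib.sha256(seed + accounts[leader]).digest()
    return leader, next_seed

accounts = {"alice": b"ka", "bob": b"kb", "carol": b"kc"}  # equal stakes assumed
seed = b"genesis"
for rnd in range(3):
    leader, seed = elect_leader(seed, accounts)
    print("round", rnd, "leader:", leader)
```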
35,744 | th | We study a general scenario of simultaneous contests that allocate prizes
based on equal sharing: each contest awards its prize to all players who
satisfy some contest-specific criterion, and the value of this prize to a
winner decreases as the number of winners increases. The players produce
outputs for a set of activities, and the winning criteria of the contests are
based on these outputs. We consider two variations of the model: (i) players
have costs for producing outputs; (ii) players do not have costs but have
generalized budget constraints. We observe that these games are exact potential
games and hence always have a pure-strategy Nash equilibrium. The price of
anarchy is $2$ for the budget model, but can be unbounded for the cost model.
Our main results are for the computational complexity of these games. We prove
that for general versions of the model exactly or approximately computing a
best response is NP-hard. For natural restricted versions where best response
is easy to compute, we show that finding a pure-strategy Nash equilibrium is
PLS-complete, and finding a mixed-strategy Nash equilibrium is
(PPAD$\cap$PLS)-complete. On the other hand, an approximate pure-strategy Nash
equilibrium can be found in pseudo-polynomial time. These games are a strict
but natural subclass of explicit congestion games, but they still have the same
equilibrium hardness results. | Simultaneous Contests with Equal Sharing Allocation of Prizes: Computational Complexity and Price of Anarchy | 2022-07-17 15:18:11 | Edith Elkind, Abheek Ghosh, Paul W. Goldberg | http://arxiv.org/abs/2207.08151v1, http://arxiv.org/pdf/2207.08151v1 | cs.GT |
35,745 | th | This paper examines whether one can learn to play an optimal action while
only knowing part of the true specification of the environment. We choose the
optimal pricing problem as our laboratory, where the monopolist is endowed with
an underspecified model of the market demand, but can observe market outcomes.
In contrast to conventional learning models where the model specification is
complete and exogenously fixed, the monopolist has to learn the specification
and the parameters of the demand curve from the data. We formulate the learning
dynamics as an algorithm that forecasts the optimal price based on the data,
following the machine learning literature (Shalev-Shwartz and Ben-David
(2014)). Inspired by PAC learnability, we develop a new notion of learnability
by requiring that the algorithm must produce an accurate forecast with a
reasonable amount of data uniformly over the class of models consistent with
the part of the true specification. In addition, we assume that the monopolist
has a lexicographic preference over the payoff and the complexity cost of the
algorithm, seeking an algorithm with a minimum number of parameters subject to
PAC-guaranteeing the optimal solution (Rubinstein (1986)). We show that for the
set of demand curves with strictly decreasing uniformly Lipschitz continuous
marginal revenue curve, the optimal algorithm recursively estimates the slope
and the intercept of the linear demand curve, even if the actual demand curve
is not linear. The monopolist chooses a misspecified model to save
computational cost, while learning the true optimal decision uniformly over the
set of underspecified demand curves. | Learning Underspecified Models | 2022-07-20 21:42:29 | In-Koo Cho, Jonathan Libgober | http://arxiv.org/abs/2207.10140v1, http://arxiv.org/pdf/2207.10140v1 | econ.TH |
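A minimal sketch of the misspecified-learning idea: fit a linear demand curve to past price-quantity data by least squares and price at the implied monopoly optimum, even though the true demand is nonlinear. The exploration phase, noise level, clipping, and demand curve are illustrative assumptions; this naive recursion is not claimed to reproduce the paper's optimal algorithm.

```python
import numpy as np

def learn_price(true_demand, T=200, seed=0):
    """Repeatedly fit a (possibly misspecified) linear demand q = a - b*p to
    past (price, quantity) data by least squares, then post the implied
    monopoly price p = a / (2b)."""
    rng = np.random.default_rng(seed)
    prices = list(rng.uniform(0.1, 1.0, 5))  # small initial exploration phase
    quantities = [true_demand(p) + 0.01 * rng.standard_normal() for p in prices]
    for _ in range(T):
        X = np.column_stack([np.ones(len(prices)), prices])
        coef, *_ = np.linalg.lstsq(X, np.array(quantities), rcond=None)
        a, b = coef[0], -coef[1]                   # fitted q = a - b*p
        p = a / (2 * b) if b > 0 else rng.uniform(0.1, 1.0)
        p = float(np.clip(p, 0.01, 2.0))
        prices.append(p)
        quantities.append(true_demand(p) + 0.01 * rng.standard_normal())
    return prices[-1]

# A nonlinear demand curve; its revenue-maximizing price is 1/sqrt(3) ~ 0.577.
print("last posted price:", round(learn_price(lambda p: max(0.0, 1 - p**2)), 3))
```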
35,746 | th | In this study, I present a theoretical social learning model to investigate
how confirmation bias affects opinions when agents exchange information over a
social network. Hence, besides exchanging opinions with friends, agents observe
a public sequence of potentially ambiguous signals and interpret it according
to a rule that includes confirmation bias. First, this study shows that
regardless of the level of ambiguity, both for individuals and for the networked society, only two
types of opinions can be formed, and both are biased. However, one opinion type
is less biased than the other depending on the state of the world. The size of
both biases depends on the ambiguity level and relative magnitude of the state
and confirmation biases. Hence, long-run learning is not attained even when
people impartially interpret ambiguity. Finally, it is difficult to analytically
characterize the probability that the less-biased consensus emerges when people
are connected and have different priors. Hence, I used simulations to analyze
its determinants and found three main results: i) some network topologies are
more conducive to consensus efficiency, ii) some degree of partisanship
enhances consensus efficiency even under confirmation bias and iii)
open-mindedness (i.e. when partisans agree to exchange opinions with opposing
partisans) might inhibit efficiency in some cases. | Confirmation Bias in Social Networks | 2022-07-26 04:11:01 | Marcos R. Fernandes | http://arxiv.org/abs/2207.12594v3, http://arxiv.org/pdf/2207.12594v3 | econ.TH |
35,747 | th | We consider a Bayesian forecast aggregation model where $n$ experts, after
observing private signals about an unknown binary event, report their posterior
beliefs about the event to a principal, who then aggregates the reports into a
single prediction for the event. The signals of the experts and the outcome of
the event follow a joint distribution that is unknown to the principal, but the
principal has access to i.i.d. "samples" from the distribution, where each
sample is a tuple of the experts' reports (not signals) and the realization of
the event. Using these samples, the principal aims to find an
$\varepsilon$-approximately optimal aggregator, where optimality is measured in
terms of the expected squared distance between the aggregated prediction and
the realization of the event. We show that the sample complexity of this
problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary
discrete distributions, where $m$ is the size of each expert's signal space.
This sample complexity grows exponentially in the number of experts $n$. But,
if the experts' signals are independent conditioned on the realization of the
event, then the sample complexity is significantly reduced, to $\tilde O(1 /
\varepsilon^2)$, which does not depend on $n$. Our results can be generalized
to non-binary events. The proof of our results uses a reduction from the
distribution learning problem and reveals the fact that forecast aggregation is
almost as difficult as distribution learning. | Sample Complexity of Forecast Aggregation | 2022-07-26 21:12:53 | Tao Lin, Yiling Chen | http://arxiv.org/abs/2207.13126v4, http://arxiv.org/pdf/2207.13126v4 | cs.LG |
35,748 | th | We consider transferable utility cooperative games with infinitely many
players and the core understood in the space of bounded additive set functions.
We show that, if a game is bounded below, then its core is non-empty if and
only if the game is balanced.
This finding is a generalization of Schmeidler's (1967) original result ``On
Balanced Games with Infinitely Many Players'', where the game is assumed to be
non-negative. We furthermore demonstrate that, if a game is not bounded below,
then its core might be empty even though the game is balanced; that is, our
result is tight.
We also generalize Schmeidler's (1967) result to the case of restricted
cooperation too. | On Balanced Games with Infinitely Many Players: Revisiting Schmeidler's Result | 2022-07-29 16:36:47 | David Bartl, Miklós Pintér | http://arxiv.org/abs/2207.14672v1, http://arxiv.org/pdf/2207.14672v1 | math.OC |
35,749 | th | We devise a theoretical model for the optimal dynamical control of an
infectious disease whose diffusion is described by the SVIR compartmental
model. The control is realized through implementing social rules to reduce the
disease's spread, which often implies substantial economic and social costs. We
model this trade-off by introducing a functional depending on three terms: a
social cost function, the cost supported by the healthcare system for the
infected population, and the cost of the vaccination campaign. Using
Pontryagin's Maximum Principle, we give conditions for the existence of the
optimal policy, which we characterize explicitly in three instances of the
social cost function, the linear, quadratic, and exponential models,
respectively. Finally, we present a set of results on the numerical solution of
the optimally controlled system by using Italian data from the recent Covid-19
pandemic for the model calibration. | The economic cost of social distancing during a pandemic: an optimal control approach in the SVIR model | 2022-08-09 20:10:17 | Alessandro Ramponi, Maria Elisabetta Tessitore | http://arxiv.org/abs/2208.04908v1, http://arxiv.org/pdf/2208.04908v1 | math.OC |
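A generic SVIR sketch (forward Euler) showing how a transmission-reducing control $u$ enters the compartmental dynamics. The equations, parameters, and crude feedback rule are illustrative assumptions, not the paper's calibrated system or its Pontryagin-based optimal policy.

```python
def svir_step(s, v, i, r, beta, beta_v, gamma, phi, u, dt=0.1):
    """One forward-Euler step of a basic SVIR model: vaccination at rate phi,
    vaccinated individuals partially protected (beta_v < beta), and a control
    u in [0, 1] scaling down transmission (the 'social rules')."""
    inf_s = (1 - u) * beta * s * i     # new infections among susceptibles
    inf_v = (1 - u) * beta_v * v * i   # breakthrough infections among vaccinated
    ds = -inf_s - phi * s
    dv = phi * s - inf_v
    di = inf_s + inf_v - gamma * i
    dr = gamma * i
    return s + dt * ds, v + dt * dv, i + dt * di, r + dt * dr

state = (0.99, 0.0, 0.01, 0.0)           # (S, V, I, R) shares of the population
for _ in range(1000):
    u = 0.5 if state[2] > 0.02 else 0.0  # crude feedback rule, not the optimal control
    state = svir_step(*state, beta=0.4, beta_v=0.05, gamma=0.1, phi=0.01, u=u)
print("final (S, V, I, R): %.3f %.3f %.3f %.3f" % state)
```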
35,750 | th | We study a game between $N$ job applicants who incur a cost $c$ (relative to
the job value) to reveal their type during interviews and an administrator who
seeks to maximize the probability of hiring the best. We define a full learning
equilibrium and prove its existence, uniqueness, and optimality. In
equilibrium, the administrator accepts the current best applicant $n$ with
probability $c$ if $n<n^*$ and with probability 1 if $n\ge n^*$ for a threshold
$n^*$ independent of $c$. In contrast to the case without cost, where the
success probability converges to $1/\mathrm{e}\approx 0.37$ as $N$ tends to
infinity, with cost the success probability decays like $N^{-c}$. | Incentivizing Hidden Types in Secretary Problem | 2022-08-11 18:56:08 | Longjian Li, Alexis Akira Toda | http://arxiv.org/abs/2208.05897v1, http://arxiv.org/pdf/2208.05897v1 | econ.TH |
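A Monte Carlo sketch of the acceptance rule stated above: accept the current best applicant $n$ with probability $c$ when $n < n^*$ and with probability $1$ afterwards. The threshold value used here is illustrative; the paper derives the equilibrium $n^*$.

```python
import random

def secretary_with_cost(N, c, n_star, seed):
    """One run of the acceptance rule: accept the current best applicant n
    with probability c if n < n_star, with probability 1 if n >= n_star.
    Success = hiring the overall best applicant."""
    rng = random.Random(seed)
    ranks = list(range(1, N + 1))  # rank 1 = best applicant
    rng.shuffle(ranks)
    best_so_far = N + 1
    for n, r in enumerate(ranks, start=1):
        if r < best_so_far:        # applicant n is the best seen so far
            best_so_far = r
            if n >= n_star or rng.random() < c:
                return r == 1      # hire; success iff truly the best
    return False                   # nobody was hired

N, c, n_star = 50, 0.1, 20         # n_star here is illustrative, not the derived one
runs = 20000
succ = sum(secretary_with_cost(N, c, n_star, s) for s in range(runs)) / runs
print("empirical success probability:", succ)
```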
35,751 | th | The productivity of a common pool of resources may degrade when overly
exploited by a number of selfish investors, a situation known as the tragedy of
the commons (TOC). Without regulations, agents optimize the size of their
individual investments into the commons by balancing incurring costs with the
returns received. The resulting Nash equilibrium involves a self-consistency
loop between individual investment decisions and the state of the commons. As a
consequence, several non-trivial properties emerge. For $N$ investing actors we
prove rigorously that typical payoffs do not scale as $1/N$, the expected
result for cooperating agents, but as $(1/N)^2$. Payoffs are hence reduced with
regard to the functional dependence on $N$, a situation denoted catastrophic
poverty. We show that catastrophic poverty results from a fine-tuned balance
between returns and costs. Additionally, a finite number of oligarchs may be
present. Oligarchs are characterized by payoffs that are finite and not
decreasing when $N$ increases. Our results hold for generic classes of models,
including convex and moderately concave cost functions. For strongly concave
cost functions the Nash equilibrium undergoes a collective reorganization,
being characterized instead by entry barriers and sudden death forced market
exits. | Generic catastrophic poverty when selfish investors exploit a degradable common resource | 2022-08-17 12:19:14 | Claudius Gros | http://arxiv.org/abs/2208.08171v2, http://arxiv.org/pdf/2208.08171v2 | econ.TH |
35,752 | th | We show the perhaps surprising inequality that the weighted average of
negatively dependent super-Pareto random variables, possibly caused by
triggering events, is larger than one such random variable in the sense of
first-order stochastic dominance. The class of super-Pareto distributions is
extremely heavy-tailed and it includes the class of infinite-mean Pareto
distributions. We discuss several implications of this result via an
equilibrium analysis in a risk exchange market. First, diversification of
super-Pareto losses increases portfolio risk, and thus a diversification
penalty exists. Second, agents with super-Pareto losses will not share risks in
a market equilibrium. Third, transferring losses from agents bearing
super-Pareto losses to external parties without any losses may arrive at an
equilibrium which benefits every party involved. Empirical studies show that
our new inequality can be observed for real datasets that fit
well with extremely heavy tails. | An unexpected stochastic dominance: Pareto distributions, catastrophes, and risk exchange | 2022-08-17 21:17:01 | Yuyu Chen, Paul Embrechts, Ruodu Wang | http://arxiv.org/abs/2208.08471v3, http://arxiv.org/pdf/2208.08471v3 | q-fin.RM |
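A quick Monte Carlo illustration of the dominance statement for i.i.d. infinite-mean Pareto losses: the tail probability of the diversified average exceeds that of a single loss at every threshold, which is the first-order stochastic dominance described above. The tail index $0.9$ is an illustrative choice.

```python
import numpy as np

def pareto(alpha, size, rng):
    """Pareto with tail P(X > x) = x**(-alpha) for x >= 1 (infinite mean when alpha <= 1)."""
    return rng.random(size) ** (-1.0 / alpha)

rng = np.random.default_rng(0)
alpha, n = 0.9, 10**6                     # tail index 0.9: an infinite-mean Pareto
x = pareto(alpha, n, rng)                 # a single loss
avg = 0.5 * (pareto(alpha, n, rng) + pareto(alpha, n, rng))  # diversified average

# First-order stochastic dominance of the average over a single loss means
# P(avg > t) >= P(X > t) at every threshold t.
for t in [2, 5, 10, 50, 100]:
    print(f"t={t:4d}  P(avg>t)={np.mean(avg > t):.4f}  P(X>t)={np.mean(x > t):.4f}")
```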
35,753 | th | We investigate Gately's solution concept for cooperative games with
transferable utilities. Gately's conception introduced a bargaining solution
that minimises the maximal quantified ``propensity to disrupt'' the negotiation
process of the players over the allocation of the generated collective payoffs.
Gately's solution concept is well-defined for a broad class of games. We also
consider a generalisation based on a parameter-based quantification of the
propensity to disrupt. Furthermore, we investigate the relationship of these
generalised Gately values with the Core and the Nucleolus and show that
Gately's solution is in the Core for all regular 3-player games. We identify
exact conditions under which generally these Gately values are Core imputations
for arbitrary regular cooperative games. Finally, we investigate the
relationship of the Gately value with the Shapley value. | Gately Values of Cooperative Games | 2022-08-22 13:19:40 | Robert P. Gilles, Lina Mallozzi | http://arxiv.org/abs/2208.10189v2, http://arxiv.org/pdf/2208.10189v2 | econ.TH |
35,754 | th | Multi-agent reinforcement learning (MARL) is a powerful tool for training
automated systems acting independently in a common environment. However, it can
lead to sub-optimal behavior when individual incentives and group incentives
diverge. Humans are remarkably capable of solving these social dilemmas. It is
an open problem in MARL to replicate such cooperative behaviors in selfish
agents. In this work, we draw upon the idea of formal contracting from
economics to overcome diverging incentives between agents in MARL. We propose
an augmentation to a Markov game where agents voluntarily agree to binding
state-dependent transfers of reward, under pre-specified conditions. Our
contributions are theoretical and empirical. First, we show that this
augmentation makes all subgame-perfect equilibria of all fully observed Markov
games exhibit socially optimal behavior, given a sufficiently rich space of
contracts. Next, we complement our game-theoretic analysis by showing that
state-of-the-art RL algorithms learn socially optimal policies given our
augmentation. Our experiments include classic static dilemmas like Stag Hunt,
Prisoner's Dilemma and a public goods game, as well as dynamic interactions
that simulate traffic, pollution management and common pool resource
management. | Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL | 2022-08-22 20:42:03 | Phillip J. K. Christoffersen, Andreas A. Haupt, Dylan Hadfield-Menell | http://dx.doi.org/10.5555/3545946.3598670, http://arxiv.org/abs/2208.10469v3, http://arxiv.org/pdf/2208.10469v3 | cs.AI |
35,755 | th | Spot electricity markets are considered under a Game-Theoretic framework,
where risk averse players submit orders to the market clearing mechanism to
maximise their own utility. Consistent with the current practice in Europe, the
market clearing mechanism is modelled as a Social Welfare Maximisation problem,
with zonal pricing, and we consider inflexible demand, physical constraints of
the electricity grid, and capacity-constrained producers. A novel type of
non-parametric risk aversion based on a defined worst case scenario is
introduced, and this reduces the dimensionality of the strategy variables and
ensures boundedness of prices. By leveraging these properties we devise Jacobi
and Gauss-Seidel iterative schemes for computation of approximate global Nash
Equilibria, which are in contrast to derivative based local equilibria. Our
methodology is applied to the real world data of Central Western European (CWE)
Spot Market during the 2019-2020 period, and offers a good representation of
the historical time series of prices. By also solving for the assumption of
truthful bidding, we devise a simple method based on hypothesis testing to
infer if and when producers are bidding strategically (instead of truthfully),
and we find evidence suggesting that strategic bidding may be fairly pronounced
in the CWE region. | A fundamental Game Theoretic model and approximate global Nash Equilibria computation for European Spot Power Markets | 2022-08-30 14:32:34 | Ioan Alexandru Puiu, Raphael Andreas Hauser | http://arxiv.org/abs/2208.14164v1, http://arxiv.org/pdf/2208.14164v1 | cs.GT |
35,756 | th | Competition between traditional platforms is known to improve user utility by
aligning the platform's actions with user preferences. But to what extent is
alignment exhibited in data-driven marketplaces? To study this question from a
theoretical perspective, we introduce a duopoly market where platform actions
are bandit algorithms and the two platforms compete for user participation. A
salient feature of this market is that the quality of recommendations depends
on both the bandit algorithm and the amount of data provided by interactions
from users. This interdependency between the algorithm performance and the
actions of users complicates the structure of market equilibria and their
quality in terms of user utility. Our main finding is that competition in this
market does not perfectly align market outcomes with user utility.
Interestingly, market outcomes exhibit misalignment not only when the platforms
have separate data repositories, but also when the platforms have a shared data
repository. Nonetheless, the data sharing assumptions impact what mechanism
drives misalignment and also affect the specific form of misalignment (e.g. the
quality of the best-case and worst-case market outcomes). More broadly, our
work illustrates that competition in digital marketplaces has subtle
consequences for user utility that merit further investigation. | Competition, Alignment, and Equilibria in Digital Marketplaces | 2022-08-30 20:43:58 | Meena Jagadeesan, Michael I. Jordan, Nika Haghtalab | http://arxiv.org/abs/2208.14423v2, http://arxiv.org/pdf/2208.14423v2 | cs.GT |
35,767 | th | We consider finite two-player normal form games with random payoffs. Player
A's payoffs are i.i.d. from a uniform distribution. Given p in [0, 1], for any
action profile, player B's payoff coincides with player A's payoff with
probability p and is i.i.d. from the same uniform distribution with probability
1-p. This model interpolates the model of i.i.d. random payoff used in most of
the literature and the model of random potential games. First we study the
number of pure Nash equilibria in the above class of games. Then we show that,
for any positive p, asymptotically in the number of available actions, best
response dynamics reaches a pure Nash equilibrium with high probability. | Best-Response dynamics in two-person random games with correlated payoffs | 2022-09-26 22:04:06 | Hlafo Alfie Mimun, Matteo Quattropani, Marco Scarsini | http://arxiv.org/abs/2209.12967v2, http://arxiv.org/pdf/2209.12967v2 | cs.GT |
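A simulation sketch of the model above: player A's payoffs are i.i.d. uniform, and B's payoff copies A's with probability $p$ at each action profile. Alternating best-response dynamics either reaches a pure Nash equilibrium or cycles; the step cap and game sizes are illustrative.

```python
import numpy as np

def random_game(K, p, rng):
    """Payoffs per the abstract: A's entries are i.i.d. U(0,1); at each action
    profile, B's payoff copies A's with probability p, else is fresh i.i.d."""
    A = rng.random((K, K))
    B = np.where(rng.random((K, K)) < p, A, rng.random((K, K)))
    return A, B

def brd_reaches_pne(A, B, max_steps=500):
    """Alternating best-response dynamics from (0, 0); True iff a pure NE is reached."""
    i = j = 0
    for _ in range(max_steps):
        i_new = int(np.argmax(A[:, j]))
        j_new = int(np.argmax(B[i_new, :]))
        if (i_new, j_new) == (i, j):  # mutual best responses: a pure NE
            return True
        i, j = i_new, j_new
    return False  # cycled (or ran out of steps) without reaching a pure NE

rng = np.random.default_rng(0)
for p in [0.0, 0.2, 0.5]:
    hits = sum(brd_reaches_pne(*random_game(50, p, rng)) for _ in range(200))
    print(f"p={p}: BRD reached a pure NE in {hits}/200 games")
```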
35,759 | th | LP-duality theory has played a central role in the study of cores of games,
right from the early days of this notion to the present time. The classic paper
of Shapley and Shubik \cite{Shapley1971assignment} introduced the "right" way
of exploiting the power of this theory, namely picking problems whose
LP-relaxations support polyhedra having integral vertices. So far, the latter
fact was established by showing that the constraint matrix of the underlying
linear system is {\em totally unimodular}.
We attempt to take this methodology to its logical next step -- {\em using
total dual integrality} -- thereby addressing new classes of games which have
their origins in two major theories within combinatorial optimization, namely
perfect graphs and polymatroids. In the former, we address the stable set and
clique games and in the latter, we address the matroid independent set game.
For each of these games, we prove that the set of core imputations is precisely
the set of optimal solutions to the dual LPs.
Another novelty is the way the worth of the game is allocated among
sub-coalitions. Previous works follow the {\em bottom-up process} of allocating
to individual agents; the allocation to a sub-coalition is simply the sum of
the allocations to its agents. The {\em natural process for our games is
top-down}. The optimal dual allocates to "objects" in the grand coalition; a
sub-coalition inherits the allocation of each object with which it has
non-empty intersection. | Cores of Games via Total Dual Integrality, with Applications to Perfect Graphs and Polymatroids | 2022-09-11 19:49:35 | Vijay V. Vazirani | http://arxiv.org/abs/2209.04903v2, http://arxiv.org/pdf/2209.04903v2 | cs.GT |
35,760 | th | A formal write-up of the simple proof (1995) of the existence of calibrated
forecasts by the minimax theorem, which moreover shows that $N^3$ periods
suffice to guarantee a calibration error of at most $1/N$. | Calibrated Forecasts: The Minimax Proof | 2022-09-13 13:24:54 | Sergiu Hart | http://arxiv.org/abs/2209.05863v2, http://arxiv.org/pdf/2209.05863v2 | econ.TH |
35,761 | th | In adversarial interactions, one is often required to make strategic
decisions over multiple periods of time, wherein decisions made earlier impact
a player's competitive standing as well as how choices are made in later
stages. In this paper, we study such scenarios in the context of General Lotto
games, which models the competitive allocation of resources over multiple
battlefields between two players. We propose a two-stage formulation where one
of the players has reserved resources that can be strategically pre-allocated
across the battlefields in the first stage. The pre-allocation then becomes
binding and is revealed to the other player. In the second stage, the players
engage by simultaneously allocating their real-time resources against each
other. The main contribution in this paper provides complete characterizations
of equilibrium payoffs in the two-stage game, revealing the interplay between
performance and the amount of resources expended in each stage of the game. We
find that real-time resources are at least twice as effective as pre-allocated
resources. We then determine the player's optimal investment when there are
linear costs associated with purchasing each type of resource before play
begins, and there is a limited monetary budget. | Strategic investments in multi-stage General Lotto games | 2022-09-13 18:39:46 | Rahul Chandan, Keith Paarporn, Mahnoosh Alizadeh, Jason R. Marden | http://arxiv.org/abs/2209.06090v1, http://arxiv.org/pdf/2209.06090v1 | cs.GT |
35,762 | th | Let $C$ be a cone in a locally convex Hausdorff topological vector space $X$
containing $0$. We show that there exists an (essentially unique) nonempty
family $\mathscr{K}$ of nonempty subsets of the topological dual $X^\prime$
such that $$ C=\{x \in X: \forall K \in \mathscr{K}, \exists f \in K, \,\, f(x)
\ge 0\}. $$ Then, we identify the additional properties of the family
$\mathscr{K}$ which characterize, among others, closed convex cones, open
convex cones, closed cones, and convex cones. For instance, if $X$ is a Banach
space, then $C$ is a closed cone if and only if the family $\mathscr{K}$ can be
chosen with nonempty convex compact sets.
These representations provide abstract versions of several recent results in
decision theory and give us the proper framework to obtain new ones. This
allows us to characterize preorders which satisfy the independence axiom over
certain probability measures, answering an open question in
[Econometrica~\textbf{87} (2019), no. 3, 933--980]. | Representations of cones and applications to decision theory | 2022-09-14 00:36:19 | Paolo Leonetti, Giulio Principi | http://arxiv.org/abs/2209.06310v2, http://arxiv.org/pdf/2209.06310v2 | math.CA |
35,763 | th | We study random-turn resource-allocation games. In the Trail of Lost Pennies,
a counter moves on $\mathbb{Z}$. At each turn, Maxine stakes $a \in [0,\infty)$
and Mina $b \in [0,\infty)$. The counter $X$ then moves adjacently, to the
right with probability $\tfrac{a}{a+b}$. If $X_i \to -\infty$ in this
infinite-turn game, Mina receives one unit, and Maxine zero; if $X_i \to
\infty$, then these receipts are zero and $x$. Thus the net receipt to a given
player is $-A+B$, where $A$ is the sum of her stakes, and $B$ is her terminal
receipt. The game was inspired by unbiased tug-of-war in~[PSSW] from 2009 but
in fact closely resembles the original version of tug-of-war, introduced in
[HarrisVickers87] in the economics literature in 1987. We show that the game
has surprising features. For a natural class of strategies, Nash equilibria
exist precisely when $x$ lies in $[\lambda,\lambda^{-1}]$, for a certain
$\lambda \in (0,1)$. We indicate that $\lambda$ is remarkably close to one,
proving that $\lambda \leq 0.999904$ and presenting clear numerical evidence
that $\lambda \geq 1 - 10^{-4}$. For each $x \in [\lambda,\lambda^{-1}]$, we
find countably many Nash equilibria. Each is roughly characterized by an
integral {\em battlefield} index: when the counter is nearby, both players
stake intensely, with rapid but asymmetric decay in stakes as it moves away.
Our results advance premises [HarrisVickers87,Konrad12] for fund management and
the incentive-outcome relation that plausibly hold for many player-funded
stake-governed games. Alongside a companion treatment [HP22] of games with
allocated budgets, we thus offer a detailed mathematical treatment of an
illustrative class of tug-of-war games. We also review the separate
developments of tug-of-war in economics and mathematics in the hope that
mathematicians direct further attention to tug-of-war in its original
resource-allocation guise. | On the Trail of Lost Pennies: player-funded tug-of-war on the integers | 2022-09-15 19:54:31 | Alan Hammond | http://arxiv.org/abs/2209.07451v3, http://arxiv.org/pdf/2209.07451v3 | math.PR |
35,764 | th | The Bayesian posterior probability of the true state is stochastically
dominated by that same posterior under the probability law of the true state.
This generalizes to notions of "optimism" about posterior probabilities. | Posterior Probabilities: Dominance and Optimism | 2022-09-23 17:11:20 | Sergiu Hart, Yosef Rinott | http://dx.doi.org/10.1016/j.econlet.2020.109352, http://arxiv.org/abs/2209.11601v1, http://arxiv.org/pdf/2209.11601v1 | econ.TH |
35,765 | th | In the standard Bayesian framework data are assumed to be generated by a
distribution parametrized by $\theta$ in a parameter space $\Theta$, over which
a prior distribution $\pi$ is given. A Bayesian statistician quantifies the
belief that the true parameter is $\theta_{0}$ in $\Theta$ by its posterior
probability given the observed data. We investigate the behavior of the
posterior belief in $\theta_{0}$ when the data are generated under some
parameter $\theta_{1},$ which may or may not be the same as $\theta_{0}.$
Starting from stochastic orders, specifically, likelihood ratio dominance, that
obtain for resulting distributions of posteriors, we consider monotonicity
properties of the posterior probabilities as a function of the sample size when
data arrive sequentially. While the $\theta_{0}$-posterior is monotonically
increasing (i.e., it is a submartingale) when the data are generated under that
same $\theta_{0}$, it need not be monotonically decreasing in general, not even
in terms of its overall expectation, when the data are generated under a
different $\theta_{1}.$ In fact, it may keep going up and down many times, even
in simple cases such as iid coin tosses. We obtain precise asymptotic rates
when the data come from the wide class of exponential families of
distributions; these rates imply in particular that the expectation of the
$\theta_{0}$-posterior under $\theta_{1}\neq\theta_{0}$ is eventually strictly
decreasing. Finally, we show that in a number of interesting cases this
expectation is a log-concave function of the sample size, and thus unimodal. In
the Bernoulli case we obtain this by developing an inequality that is related
to Tur\'{a}n's inequality for Legendre polynomials. | Posterior Probabilities: Nonmonotonicity, Asymptotic Rates, Log-Concavity, and Turán's Inequality | 2022-09-23 20:12:35 | Sergiu Hart, Yosef Rinott | http://dx.doi.org/10.3150/21-BEJ1398, http://arxiv.org/abs/2209.11728v1, http://arxiv.org/pdf/2209.11728v1 | math.ST |
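The two-point-prior coin-tossing case discussed above can be computed exactly via the binomial distribution. The sketch below evaluates the expected $\theta_0$-posterior when the data are generated under $\theta_0$ (a submartingale, so the expectation increases in $n$) and under $\theta_1$ (where, per the abstract, the expectation need not be monotone and is only eventually strictly decreasing). Parameter values are illustrative.

```python
from math import comb

def expected_posterior(theta0, theta1, gen, n, prior0=0.5):
    """E_gen[ P(theta0 | data) ] after n i.i.d. coin tosses with bias `gen`,
    for a two-point prior on {theta0, theta1}; exact, via the binomial law."""
    total = 0.0
    for k in range(n + 1):
        l0 = theta0**k * (1 - theta0)**(n - k)      # likelihood under theta0
        l1 = theta1**k * (1 - theta1)**(n - k)      # likelihood under theta1
        post0 = prior0 * l0 / (prior0 * l0 + (1 - prior0) * l1)
        total += comb(n, k) * gen**k * (1 - gen)**(n - k) * post0
    return total

theta0, theta1 = 0.5, 0.7
print("data generated under theta0 (a submartingale: expectation increases in n):")
print([round(expected_posterior(theta0, theta1, theta0, n), 4) for n in range(9)])
print("data generated under theta1 (need not be monotone; eventually decreasing):")
print([round(expected_posterior(theta0, theta1, theta1, n), 4) for n in range(9)])
```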
35,769 | th | Trading on decentralized exchanges has been one of the primary use cases for
permissionless blockchains with daily trading volume exceeding billions of
U.S.~dollars. In the status quo, users broadcast transactions and miners are
responsible for composing a block of transactions and picking an execution
ordering -- the order in which transactions execute in the exchange. Due to the
lack of a regulatory framework, it is common to observe miners exploiting their
privileged position by front-running transactions and obtaining risk-free
profits. In this work, we propose to modify the interaction between miners and
users and initiate the study of {\em verifiable sequencing rules}. As in the
status quo, miners can determine the content of a block; however, they commit
to respecting a sequencing rule that constrains the execution ordering and is
verifiable (there is a polynomial time algorithm that can verify if the
execution ordering satisfies such constraints). Thus in the event a miner
deviates from the sequencing rule, anyone can generate a proof of
non-compliance. We ask if there are sequencing rules that limit price
manipulation from miners in a two-token liquidity pool exchange. Our first
result is an impossibility theorem: for any sequencing rule, there is an
instance of user transactions where the miner can obtain non-zero risk-free
profits. In light of this impossibility result, our main result is a verifiable
sequencing rule that provides execution price guarantees for users. In
particular, for any user transaction A, it ensures that either (1) the
execution price of A is at least as good as if A was the only transaction in
the block, or (2) the execution price of A is worse than this ``standalone''
price and the miner does not gain (or lose) when including A in the block. | Credible Decentralized Exchange Design via Verifiable Sequencing Rules | 2022-09-30 19:28:32 | Matheus V. X. Ferreira, David C. Parkes | http://dx.doi.org/10.1145/3564246.3585233, http://arxiv.org/abs/2209.15569v2, http://arxiv.org/pdf/2209.15569v2 | cs.GT |
35,770 | th | In electronic commerce (e-commerce) markets, a decision-maker faces a
sequential choice problem. Third-party intervention plays an important role in
making purchase decisions in this choice process. For instance, while
purchasing products/services online, a buyer's choice or behavior is often
affected by the overall reviewers' ratings, feedback, etc. Moreover, the
reviewer is also a decision-maker. After purchase, the decision-maker would
post their reviews of the product online. Such reviews would affect the
purchase decision of another potential buyer, who would read the reviews before
confirming his/her final purchase. The question that arises is \textit{how
trustworthy are these review reports and ratings?} The trustworthiness of these
review reports and ratings is based on whether the reviewer is a rational or an
irrational person. Indexing could be a way to quantify the reviewer's
rationality, but it does not communicate the history of
his/her behavior. In this article, the researcher aims at formally deriving a
rationality pattern function and thereby the degree of rationality of the
decision-maker or the reviewer in the sequential choice problem in the
e-commerce markets. Applying such a rationality pattern function could make it
easier to quantify the rational behavior of an agent who participates in the
digital markets. This, in turn, is expected to minimize the information
asymmetry within the decision-making process and identify the paid reviewers or
manipulative reviews. | Measurement of Trustworthiness of the Online Reviews | 2022-10-03 13:55:47 | Dipankar Das | http://arxiv.org/abs/2210.00815v2, http://arxiv.org/pdf/2210.00815v2 | econ.TH |
35,771 | th | Let $\succsim$ be a binary relation on the set of simple lotteries over a
countable outcome set $Z$. We provide necessary and sufficient conditions on
$\succsim$ to guarantee the existence of a set $U$ of von Neumann--Morgenstern
utility functions $u: Z\to \mathbf{R}$ such that $$ p\succsim q
\,\,\,\Longleftrightarrow\,\,\, \mathbf{E}_p[u] \ge \mathbf{E}_q[u] \,\text{
for all }u \in U $$ for all simple lotteries $p,q$. In such case, the set $U$
is essentially unique. Then, we show that the analogue characterization does
not hold if $Z$ is uncountable. This provides an answer to an open question
posed by Dubra, Maccheroni, and Ok in [J. Econom. Theory~\textbf{115} (2004),
no.~1, 118--133]. Lastly, we show that different continuity requirements on
$\succsim$ allow for certain restrictions on the possible choices of the set
$U$ of utility functions (e.g., all utility functions are bounded), providing a
wide family of expected multi-utility representations. | Expected multi-utility representations of preferences over lotteries | 2022-10-10 17:49:59 | Paolo Leonetti | http://arxiv.org/abs/2210.04739v1, http://arxiv.org/pdf/2210.04739v1 | math.CA |
35,772 | th | A number of rules for resolving majority cycles in elections have been
proposed in the literature. Recently, Holliday and Pacuit (Journal of
Theoretical Politics 33 (2021) 475-524) axiomatically characterized the class
of rules refined by one such cycle-resolving rule, dubbed Split Cycle: in each
majority cycle, discard the majority preferences with the smallest majority
margin. They showed that any rule satisfying five standard axioms, plus a
weakening of Arrow's Independence of Irrelevant Alternatives (IIA) called
Coherent IIA, is refined by Split Cycle. In this paper, we go further and show
that Split Cycle is the only rule satisfying the axioms of Holliday and Pacuit
together with two additional axioms: Coherent Defeat and Positive Involvement
in Defeat. Coherent Defeat states that any majority preference not occurring in
a cycle is retained, while Positive Involvement in Defeat is closely related to
the well-known axiom of Positive Involvement (as in J. P\'{e}rez, Social Choice
and Welfare 18 (2001) 601-616). We characterize Split Cycle not only as a
collective choice rule but also as a social choice correspondence, over both
profiles of linear ballots and profiles of ballots allowing ties. | An Axiomatic Characterization of Split Cycle | 2022-10-22 20:21:15 | Yifeng Ding, Wesley H. Holliday, Eric Pacuit | http://arxiv.org/abs/2210.12503v2, http://arxiv.org/pdf/2210.12503v2 | econ.TH |
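A sketch of Split Cycle as a collective choice rule, using the max-min path characterization of Holliday and Pacuit: $a$ defeats $b$ iff the margin of $a$ over $b$ is positive and exceeds the strength (minimum margin) of every majority path from $b$ back to $a$, so that in every cycle the weakest edge is discarded. A Floyd-Warshall pass over the $(\max,\min)$ semiring computes the path strengths.

```python
def split_cycle_winners(margin):
    """Winners under Split Cycle via the max-min path characterization:
    a defeats b iff margin(a,b) > 0 and margin(a,b) exceeds the strength
    (minimum margin) of every majority path from b back to a."""
    n = len(margin)
    # s[i][j] = best achievable minimum margin over majority paths i -> j
    s = [[margin[i][j] if margin[i][j] > 0 else 0 for j in range(n)] for i in range(n)]
    for k in range(n):  # Floyd-Warshall over the (max, min) semiring
        for i in range(n):
            for j in range(n):
                s[i][j] = max(s[i][j], min(s[i][k], s[k][j]))
    defeated = set()
    for a in range(n):
        for b in range(n):
            if margin[a][b] > 0 and margin[a][b] > s[b][a]:
                defeated.add(b)
    return [c for c in range(n) if c not in defeated]

# A majority 3-cycle: 0 beats 1 by 6, 1 beats 2 by 4, 2 beats 0 by 2.
M = [[0, 6, -2],
     [-6, 0, 4],
     [2, -4, 0]]
print(split_cycle_winners(M))  # the weakest edge (2 -> 0) is discarded -> [0]
```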
35,773 | th | While Nash equilibrium has emerged as the central game-theoretic solution
concept, many important games contain several Nash equilibria and we must
determine how to select between them in order to create real strategic agents.
Several Nash equilibrium refinement concepts have been proposed and studied for
sequential imperfect-information games, the most prominent being trembling-hand
perfect equilibrium, quasi-perfect equilibrium, and recently one-sided
quasi-perfect equilibrium. These concepts are robust to certain arbitrarily
small mistakes, and are guaranteed to always exist; however, we argue that
none of these is the correct concept for developing strong agents in
sequential games of imperfect information. We define a new equilibrium
refinement concept for extensive-form games called observable perfect
equilibrium in which the solution is robust over trembles in
publicly-observable action probabilities (not necessarily over all action
probabilities that may not be observable by opposing players). Observable
perfect equilibrium correctly captures the assumption that the opponent is
playing as rationally as possible given mistakes that have been observed (while
previous solution concepts do not). We prove that observable perfect
equilibrium is always guaranteed to exist, and demonstrate that it leads to a
different solution than the prior extensive-form refinements in no-limit poker.
We expect observable perfect equilibrium to be a useful equilibrium refinement
concept for modeling many important imperfect-information games of interest in
artificial intelligence. | Observable Perfect Equilibrium | 2022-10-29 09:07:29 | Sam Ganzfried | http://arxiv.org/abs/2210.16506v8, http://arxiv.org/pdf/2210.16506v8 | cs.GT |
35,774 | th | We study the hidden-action principal-agent problem in an online setting. In
each round, the principal posts a contract that specifies the payment to the
agent based on each outcome. The agent then makes a strategic choice of action
that maximizes her own utility, but the action is not directly observable by
the principal. The principal observes the outcome and receives utility from the
agent's choice of action. Based on past observations, the principal dynamically
adjusts the contracts with the goal of maximizing her utility.
We introduce an online learning algorithm and provide an upper bound on its
Stackelberg regret. We show that when the contract space is $[0,1]^m$, the
Stackelberg regret is upper bounded by $\widetilde O(\sqrt{m} \cdot
T^{1-1/(2m+1)})$, and lower bounded by $\Omega(T^{1-1/(m+2)})$, where
$\widetilde O$ omits logarithmic factors. This result shows that
exponential-in-$m$ samples are sufficient and necessary to learn a near-optimal
contract, resolving an open problem on the hardness of online contract design.
Moreover, when contracts are restricted to some subset $\mathcal{F} \subset
[0,1]^m$, we define an intrinsic dimension of $\mathcal{F}$ that depends on the
covering number of the spherical code in the space and bound the regret in
terms of this intrinsic dimension. When $\mathcal{F}$ is the family of linear
contracts, we show that the Stackelberg regret grows exactly as
$\Theta(T^{2/3})$.
The contract design problem is challenging because the utility function is
discontinuous. Bounding the discretization error in this setting has been an
open problem. In this paper, we identify a limited set of directions in which
the utility function is continuous, allowing us to design a new discretization
method and bound its error. This approach enables the first upper bound with no
restrictions on the contract and action space. | The Sample Complexity of Online Contract Design | 2022-11-10 20:59:42 | Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan | http://arxiv.org/abs/2211.05732v3, http://arxiv.org/pdf/2211.05732v3 | cs.GT |
35,775 | th | We consider transferable utility cooperative games with infinitely many
players. In particular, we generalize the notions of core and balancedness, and
also the Bondareva-Shapley Theorem for infinite TU-games with and without
restricted cooperation, to the cases where the core consists of
$\kappa$-additive set functions.
Our generalized Bondareva-Shapley Theorem extends previous results by
Bondareva (1963), Shapley (1967), Schmeidler (1967), Faigle (1989), Kannai
(1969), Kannai (1992), Pint\'er (2011) and Bartl and Pint\'er (2022). | The $κ$-core and the $κ$-balancedness of TU games | 2022-11-10 22:58:52 | David Bartl, Miklós Pintér | http://arxiv.org/abs/2211.05843v1, http://arxiv.org/pdf/2211.05843v1 | math.OC |
35,776 | th | We propose a dynamical model of price formation on a spatial market where
sellers and buyers are placed on the nodes of a graph, and the distribution of
the buyers depends on the positions and prices of the sellers. We find that,
depending on the positions of the sellers and on the level of information
available, the price dynamics of our model can either converge to fixed prices,
or produce cycles of different amplitudes and periods. We show how to measure
the strength of competition in a spatial network by extracting the exponent of
the scaling of the prices with the size of the system. As an application, we
characterize the different level of competition in street networks of real
cities across the globe. Finally, using the model dynamics we can define a
novel measure of node centrality, which quantifies the relevance of a node in a
competitive market. | Model of spatial competition on discrete markets | 2022-11-14 17:40:15 | Andrea Civilini, Vito Latora | http://arxiv.org/abs/2211.07412v1, http://arxiv.org/pdf/2211.07412v1 | physics.soc-ph |
35,777 | th | We propose a social welfare maximizing mechanism for an energy community that
aggregates individual and shared community resources under a general net energy
metering (NEM) policy. Referred to as Dynamic NEM, the proposed mechanism
adopts the standard NEM tariff model and sets NEM prices dynamically based on
the total shared renewables within the community. We show that Dynamic NEM
guarantees a higher benefit to each community member than possible outside the
community. We further show that Dynamic NEM aligns the individual member's
incentive with that of the overall community; each member optimizing individual
surplus under Dynamic NEM results in maximum community's social welfare.
Dynamic NEM is also shown to satisfy the cost-causation principle. Empirical
studies using real data on a hypothetical energy community demonstrate the
benefits to community members and grid operators. | Achieving Social Optimality for Energy Communities via Dynamic NEM Pricing | 2022-11-17 09:20:52 | Ahmed S. Alahmed, Lang Tong | http://arxiv.org/abs/2211.09360v1, http://arxiv.org/pdf/2211.09360v1 | eess.SY |
35,778 | th | Game theory largely rests on the availability of cardinal utility functions.
In contrast, only ordinal preferences are elicited in fields such as matching
under preferences. The literature focuses on mechanisms with simple dominant
strategies. However, many real-world applications do not have dominant
strategies, so intensities between preferences matter when participants
determine their strategies. Even though precise information about cardinal
utilities is unavailable, some data about the likelihood of utility functions
is typically accessible. We propose to use Bayesian games to formalize
uncertainty about decision-makers' utilities by viewing them as a collection of
normal-form games where uncertainty about types persists in all game stages.
Instead of searching for the Bayes-Nash equilibrium, we consider the question
of how uncertainty in utilities is reflected in uncertainty of strategic play.
We introduce $\alpha$-Rank-collections as a solution concept that extends
$\alpha$-Rank, a new solution concept for normal-form games, to Bayesian games.
This allows us to analyze the strategic play in, for example,
(non-strategyproof) matching markets, for which we do not have appropriate
solution concepts so far. $\alpha$-Rank-collections characterize a range of
strategy profiles emerging from the replicator dynamics of the game rather than
a single equilibrium point. We prove that $\alpha$-Rank-collections are invariant to
positive affine transformations, and that they are efficient to approximate. An
instance of the Boston mechanism is used to illustrate the new solution
concept. | $α$-Rank-Collections: Analyzing Expected Strategic Behavior with Uncertain Utilities | 2022-11-18 19:17:27 | Fabian R. Pieroth, Martin Bichler | http://arxiv.org/abs/2211.10317v1, http://arxiv.org/pdf/2211.10317v1 | cs.GT |
35,779 | th | Using simulations between pairs of $\epsilon$-greedy q-learners with
one-period memory, this article demonstrates that the potential function of the
stochastic replicator dynamics (Foster and Young, 1990) allows it to predict
the emergence of error-proof cooperative strategies from the underlying
parameters of the repeated prisoner's dilemma. The observed cooperation rates
between q-learners are related to the ratio between the kinetic energy exerted
by the polar attractors of the replicator dynamics under the grim trigger
strategy. The frontier separating the parameter space conducive to cooperation
from the parameter space dominated by defection can be found by setting the
kinetic energy ratio equal to a critical value, which is a function of the
discount factor, $f(\delta) = \delta/(1-\delta)$, multiplied by a correction
term to account for the effect of the algorithms' exploration probability. The
gradient at the frontier increases with the distance between the game
parameters and the hyperplane that characterizes the incentive compatibility
constraint for cooperation under grim trigger.
Building on literature from the neurosciences, which suggests that
reinforcement learning is useful to understanding human behavior in risky
environments, the article further explores the extent to which the frontier
derived for q-learners also explains the emergence of cooperation between
humans. Using metadata from laboratory experiments that analyze human choices
in the infinitely repeated prisoner's dilemma, the cooperation rates between
humans are compared to those observed between q-learners under similar
conditions. The correlation coefficients between the cooperation rates observed
for humans and those observed for q-learners are consistently above $0.8$. The
frontier derived from the simulations between q-learners is also found to
predict the emergence of cooperation between humans. | On the Emergence of Cooperation in the Repeated Prisoner's Dilemma | 2022-11-24 20:27:29 | Maximilian Schaefer | http://arxiv.org/abs/2211.15331v2, http://arxiv.org/pdf/2211.15331v2 | econ.TH |
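A minimal sketch of the simulation setup described above: two $\epsilon$-greedy Q-learners with one-period memory playing the repeated prisoner's dilemma, where the state is the previous round's joint action. Payoff values and hyperparameters are illustrative; the article's point is precisely that cooperation rates depend on where these parameters fall relative to the derived frontier.

```python
import random

# Prisoner's dilemma payoffs (row player): T > R > P > S, illustrative values.
R, S, T, P = 3.0, 0.0, 4.0, 1.0
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def run(episodes=200000, eps=0.05, lr=0.1, delta=0.95, seed=0):
    """Two epsilon-greedy Q-learners with one-period memory: the state is the
    previous round's joint action. Hyperparameters are illustrative."""
    rng = random.Random(seed)
    states = [(a, b) for a in "CD" for b in "CD"]
    Q1 = {(st, a): 0.0 for st in states for a in "CD"}
    Q2 = {(st, a): 0.0 for st in states for a in "CD"}
    s, coop = ("C", "C"), 0
    for _ in range(episodes):
        a1 = rng.choice("CD") if rng.random() < eps else max("CD", key=lambda a: Q1[(s, a)])
        a2 = rng.choice("CD") if rng.random() < eps else max("CD", key=lambda a: Q2[(s, a)])
        r1, r2 = PAYOFF[(a1, a2)]
        s_next = (a1, a2)
        Q1[(s, a1)] += lr * (r1 + delta * max(Q1[(s_next, a)] for a in "CD") - Q1[(s, a1)])
        Q2[(s, a2)] += lr * (r2 + delta * max(Q2[(s_next, a)] for a in "CD") - Q2[(s, a2)])
        s = s_next
        coop += (a1 == "C") + (a2 == "C")
    return coop / (2 * episodes)

print("cooperation rate:", round(run(), 3))
```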
35,780 | th | In many real-world settings agents engage in strategic interactions with
multiple opposing agents who can employ a wide variety of strategies. The
standard approach for designing agents for such settings is to compute or
approximate a relevant game-theoretic solution concept such as Nash equilibrium
and then follow the prescribed strategy. However, such a strategy ignores any
observations of opponents' play, which may indicate shortcomings that can be
exploited. We present an approach for opponent modeling in multiplayer
imperfect-information games where we collect observations of opponents' play
through repeated interactions. We run experiments against a wide variety of
real opponents and exact Nash equilibrium strategies in three-player Kuhn poker
and show that our algorithm significantly outperforms all of the agents,
including the exact Nash equilibrium strategies. | Bayesian Opponent Modeling in Multiplayer Imperfect-Information Games | 2022-12-12 19:48:53 | Sam Ganzfried, Kevin A. Wang, Max Chiswick | http://arxiv.org/abs/2212.06027v3, http://arxiv.org/pdf/2212.06027v3 | cs.GT |
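A minimal sketch of the Bayesian opponent-modeling idea: maintain a Dirichlet posterior over an opponent's action frequencies from observed play and best respond to the posterior mean. The 2x2 payoff matrix and observation stream are assumptions; the paper's setting (three-player Kuhn poker) is far richer.

    # Dirichlet posterior over an opponent's mixed strategy, then best respond
    # to the posterior mean (toy 2x2 game; all numbers are assumptions).
    import numpy as np

    payoff = np.array([[1.0, -1.0],    # our row payoffs against opponent columns
                       [-1.0, 1.0]])   # (matching pennies, for illustration)
    prior = np.ones(2)                 # Dirichlet(1,1) prior over opponent actions
    counts = np.zeros(2)

    def observe(opponent_action):
        counts[opponent_action] += 1

    def best_response():
        p = (prior + counts) / (prior + counts).sum()  # posterior mean strategy
        return int(np.argmax(payoff @ p))              # maximize expected payoff

    for a in [0, 0, 1, 0, 0]:          # a short, biased observation stream
        observe(a)
    print("best response to modeled opponent:", best_response())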
35,781 | th | The quality of consequences in a decision making problem under (severe)
uncertainty must often be compared among different targets (goals, objectives)
simultaneously. In addition, the evaluations of a consequence's performance
under the various targets often differ in their scale of measurement,
classically being either purely ordinal or perfectly cardinal. In this paper,
we transfer recent developments from abstract decision theory with incomplete
preferential and probabilistic information to this multi-target setting and
show how -- by exploiting the (potentially) partial cardinal and partial
probabilistic information -- more informative orders for comparing decisions
can be given than the Pareto order. We discuss some interesting properties of
the proposed orders between decision options and show how they can be
concretely computed by linear optimization. We conclude the paper by
demonstrating our framework in an artificial (but quite real-world) example in
the context of comparing algorithms under different performance measures. | Multi-Target Decision Making under Conditions of Severe Uncertainty | 2022-12-13 14:47:02 | Christoph Jansen, Georg Schollmeyer, Thomas Augustin | http://arxiv.org/abs/2212.06832v1, http://arxiv.org/pdf/2212.06832v1 | cs.AI |
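A minimal sketch of the linear-optimization step mentioned above: checking whether one act robustly dominates another over a credal set of probabilities by minimizing the expected-utility difference. The utilities and the credal-set constraint are illustrative assumptions.

    # Is act X preferred to act Y under every probability in a credal set
    # described by linear constraints? Minimize E[u(X) - u(Y)] over the set.
    import numpy as np
    from scipy.optimize import linprog

    u_diff = np.array([2.0, -1.0, 0.5])   # u(X,s) - u(Y,s) per state (assumed)
    # credal set: p >= 0, sum p = 1, and p[0] >= 0.2 (an example constraint)
    A_eq, b_eq = np.ones((1, 3)), np.array([1.0])
    A_ub, b_ub = np.array([[-1.0, 0.0, 0.0]]), np.array([-0.2])

    res = linprog(c=u_diff, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * 3, method="highs")
    print("worst-case E[u(X)-u(Y)]:", res.fun)
    print("X robustly preferred to Y:", res.fun >= 0)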
35,782 | th | This paper investigates the dynamics of heterogeneous Cournot
duopoly games, where the first players adopt identical gradient adjustment
mechanisms but the second players are endowed with distinct rationality levels.
Based on tools of symbolic computations, we introduce a new approach and use it
to establish rigorous conditions of the local stability for these models. We
analytically investigate the bifurcations and prove that the period-doubling
bifurcation is the only possible bifurcation that may occur for all the
considered models. The most important finding of our study is regarding the
influence of players' rationality levels on the stability of heterogeneous
duopolistic competition. It is derived that the stability region of the model
where the second firm is rational is the smallest, while that of the one where
the second firm is boundedly rational is the largest. This fact is
counterintuitive and contrasts with related conclusions in the existing
literature. Furthermore, we also provide numerical simulations to demonstrate
the emergence of complex dynamics such as periodic solutions with different
orders and strange attractors. | Influence of rationality levels on dynamics of heterogeneous Cournot duopolists with quadratic costs | 2022-12-14 12:27:06 | Xiaoliang Li, Yihuo Jiang | http://arxiv.org/abs/2212.07128v1, http://arxiv.org/pdf/2212.07128v1 | econ.TH |
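A minimal sketch of a heterogeneous duopoly of the flavor discussed above: firm 1 follows a gradient adjustment mechanism while firm 2 naively best replies to last-period output. The linear demand, quadratic costs, and adjustment speed are illustrative assumptions, not the paper's exact specification.

    # Heterogeneous Cournot duopoly: gradient adjuster vs. naive best replier.
    # Inverse demand p = a - b*(q1 + q2); costs C_i(q) = c_i * q**2 (assumed).
    a, b = 10.0, 1.0
    c1, c2 = 0.5, 0.5
    k = 0.1                   # firm 1's adjustment speed (kept small for stability)

    def marginal_profit_1(q1, q2):
        # d/dq1 of (a - b*(q1+q2))*q1 - c1*q1**2
        return a - 2*b*q1 - b*q2 - 2*c1*q1

    def best_reply_2(q1):
        # argmax_q2 of (a - b*(q1+q2))*q2 - c2*q2**2
        return max(0.0, (a - b*q1) / (2*(b + c2)))

    q1, q2 = 1.0, 1.0
    for t in range(200):
        q1_new = q1 + k * q1 * marginal_profit_1(q1, q2)  # gradient adjustment
        q2_new = best_reply_2(q1)                          # naive best reply
        q1, q2 = max(0.0, q1_new), q2_new
    print("long-run outputs:", round(q1, 4), round(q2, 4))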
35,783 | th | We study various novel complexity measures for two-sided matching mechanisms,
applied to the two canonical strategyproof matching mechanisms, Deferred
Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to
capture the complexity of various structural (rather than computational)
concerns, in particular ones of recent interest from economics. We consider a
canonical, flexible approach to formalizing our questions: define a protocol or
data structure performing some task, and bound the number of bits that it
requires. Our results apply this approach to four questions of general
interest; for matching applicants to institutions, we ask:
(1) How can one applicant affect the outcome matching?
(2) How can one applicant affect another applicant's set of options?
(3) How can the outcome matching be represented / communicated?
(4) How can the outcome matching be verified?
We prove that DA and TTC are comparable in complexity under questions (1) and
(4), giving new tight lower-bound constructions and new verification protocols.
Under questions (2) and (3), we prove that TTC is more complex than DA. For
question (2), we prove this by giving a new characterization of which
institutions are removed from each applicant's set of options when a new
applicant is added in DA; this characterization may be of independent interest.
For question (3), our result gives lower bounds proving the tightness of
existing constructions for TTC. This shows that the relationship between the
matching and the priorities is more complex in TTC than in DA, formalizing
previous intuitions from the economics literature. Together, our results
complement recent work that models the complexity of observing
strategyproofness and shows that DA is more complex than TTC. This emphasizes
that diverse considerations must factor into gauging the complexity of matching
mechanisms. | Structural Complexities of Matching Mechanisms | 2022-12-16 23:53:30 | Yannai A. Gonczarowski, Clayton Thomas | http://arxiv.org/abs/2212.08709v2, http://arxiv.org/pdf/2212.08709v2 | cs.GT |
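For concreteness, here is a standard applicant-proposing Deferred Acceptance implementation (the algorithm itself is canonical; the toy preference lists and capacities are assumptions for illustration).

    # Applicant-proposing Deferred Acceptance with institution capacities.
    def deferred_acceptance(app_prefs, inst_prefs, capacities):
        rank = {i: {a: r for r, a in enumerate(p)} for i, p in inst_prefs.items()}
        nxt = {a: 0 for a in app_prefs}             # next institution to propose to
        held = {i: [] for i in inst_prefs}          # tentatively held applicants
        free = list(app_prefs)
        while free:
            a = free.pop()
            if nxt[a] >= len(app_prefs[a]):
                continue                            # a has exhausted its list
            i = app_prefs[a][nxt[a]]
            nxt[a] += 1
            held[i].append(a)
            held[i].sort(key=lambda x: rank[i][x])  # best applicants first
            while len(held[i]) > capacities[i]:
                free.append(held[i].pop())          # reject the worst held
        return {i: sorted(s) for i, s in held.items()}

    app_prefs = {'a1': ['i1', 'i2'], 'a2': ['i1', 'i2'], 'a3': ['i2', 'i1']}
    inst_prefs = {'i1': ['a2', 'a1', 'a3'], 'i2': ['a1', 'a3', 'a2']}
    print(deferred_acceptance(app_prefs, inst_prefs, {'i1': 1, 'i2': 1}))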
35,784 | th | Given the wealth inequality worldwide, there is an urgent need to identify
the mode of wealth exchange through which it arises. To address the research
gap regarding models that combine equivalent exchange and redistribution, this
study compares an equivalent market exchange with redistribution based on power
centers and a nonequivalent exchange with mutual aid using the Polanyi,
Graeber, and Karatani modes of exchange. Two new exchange models based on
multi-agent interactions are reconstructed following an econophysics approach
for evaluating the Gini index (inequality) and total exchange (economic flow).
Exchange simulations indicate that the evaluation parameter of total exchange
divided by the Gini index can be expressed by the same saturated curvilinear
approximate equation, using the wealth transfer rate and time period of
redistribution, and the surplus contribution rate of the wealthy and the
saving rate, respectively. However, considering the coerciveness of taxation
and its associated costs, as well as the independence afforded by the morality
of mutual aid, a nonequivalent exchange without return obligation is preferred. This is oriented toward
Graeber's baseline communism and Karatani's mode of exchange D, with
implications for alternatives to the capitalist economy. | Wealth Redistribution and Mutual Aid: Comparison using Equivalent/Nonequivalent Exchange Models of Econophysics | 2022-12-31 04:37:26 | Takeshi Kato | http://dx.doi.org/10.3390/e25020224, http://arxiv.org/abs/2301.00091v1, http://arxiv.org/pdf/2301.00091v1 | econ.TH |
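A minimal econophysics-style sketch in the spirit of the models compared above: random pairwise transfers with periodic flat redistribution, tracking the Gini index. All parameters are illustrative assumptions rather than the paper's calibrations.

    # Random pairwise exchange with periodic flat redistribution; track Gini.
    import numpy as np

    rng = np.random.default_rng(0)

    def gini(w):
        w = np.sort(w)
        n = len(w)
        return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

    n, steps, transfer, tax, period = 500, 20000, 0.1, 0.05, 100
    wealth = np.ones(n)
    for t in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        amount = transfer * min(wealth[i], wealth[j])   # exchange size
        wealth[i] -= amount
        wealth[j] += amount
        if t % period == 0:                              # flat redistribution
            levy = tax * wealth
            wealth += levy.mean() - levy                 # total wealth conserved
    print("Gini after simulation:", round(gini(wealth), 3))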
35,830 | th | Cryptocurrencies come with a variety of tokenomic policies as well as
aspirations of desirable monetary characteristics that have been described by
proponents as 'sound money' or even 'ultra sound money.' These propositions are
typically devoid of economic analysis, so it is a pertinent question how such
aspirations fit in the wider context of monetary economic theory. In this work,
we develop a framework that determines the optimal token supply policy of a
cryptocurrency, as well as investigate how such policy may be algorithmically
implemented. Our findings suggest that the optimal policy complies with the
Friedman rule and depends on the risk-free rate, as well as the growth
of the cryptocurrency platform. Furthermore, we demonstrate a wide set of
conditions under which such policy can be implemented via contractions and
expansions of token supply that can be realized algorithmically with block
rewards, taxation of consumption and burning the proceeds, and blockchain
oracles. | Would Friedman Burn your Tokens? | 2023-06-29 18:19:13 | Aggelos Kiayias, Philip Lazos, Jan Christoph Schlegel | http://arxiv.org/abs/2306.17025v1, http://arxiv.org/pdf/2306.17025v1 | econ.TH |
35,785 | th | We initiate the study of statistical inference and A/B testing for
first-price pacing equilibria (FPPE). The FPPE model captures the dynamics
resulting from large-scale first-price auction markets where buyers use
pacing-based budget management. Such markets arise in the context of internet
advertising, where budgets are prevalent.
We propose a statistical framework for the FPPE model, in which a limit FPPE
with a continuum of items models the long-run steady-state behavior of the
auction platform, and an observable FPPE consisting of a finite number of items
provides the data to estimate primitives of the limit FPPE, such as revenue,
Nash social welfare (a fair metric of efficiency), and other parameters of
interest. We develop central limit theorems and asymptotically valid confidence
intervals. Furthermore, we establish the asymptotic local minimax optimality of
our estimators. We then show that the theory can be used for conducting
statistically valid A/B testing on auction platforms. Numerical simulations
verify our central limit theorems, and empirical coverage rates for our
confidence intervals agree with our theory. | Statistical Inference and A/B Testing for First-Price Pacing Equilibria | 2023-01-05 22:37:49 | Luofeng Liao, Christian Kroer | http://arxiv.org/abs/2301.02276v3, http://arxiv.org/pdf/2301.02276v3 | math.ST |
35,786 | th | We study a sufficiently general regret criterion for choosing between two
probabilistic lotteries. For independent lotteries, the criterion is consistent
with stochastic dominance and can be made transitive by a unique choice of the
regret function. Together with an additional (and intuitively meaningful)
superadditivity property, the regret criterion resolves Allais' paradox,
including the cases where the paradox disappears and the choices agree with the
expected utility. This superadditivity property is also employed for
establishing consistency between regret and stochastic dominance for dependent
lotteries. Furthermore, we demonstrate how the regret criterion can be used in
Savage's omelet, a classical decision problem in which the lottery outcomes are
not fully resolved. The expected utility cannot be used in such situations, as
it discards important aspects of lotteries. | Regret theory, Allais' Paradox, and Savage's omelet | 2023-01-06 13:10:14 | Vardan G. Bardakhchyan, Armen E. Allahverdyan | http://dx.doi.org/10.1016/j.jmp.2023.102807, http://arxiv.org/abs/2301.02447v1, http://arxiv.org/pdf/2301.02447v1 | econ.TH |
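A minimal sketch of a regret comparison between two independent lotteries; the specific regret function and the lotteries are illustrative assumptions, not the paper's axiomatized criterion.

    # Expected regret of choosing one lottery over another (independent draws).
    from itertools import product

    def expected_regret(A, B, regret=lambda d: d if d > 0 else 0.0):
        # E[regret(B - A)] over independent draws: the regret felt after
        # choosing A when B's realization turns out higher.
        return sum(pa * pb * regret(b - a) for (a, pa), (b, pb) in product(A, B))

    A = [(3.0, 1.0)]                          # sure gain of 3
    B = [(0.0, 0.5), (8.0, 0.5)]              # risky lottery
    rAB = expected_regret(A, B)               # regret from choosing A over B
    rBA = expected_regret(B, A)               # regret from choosing B over A
    print("choose:", "A" if rAB < rBA else "B", (rAB, rBA))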
35,787 | th | The main ambition of this thesis is to contribute to the development of
cooperative game theory towards combinatorics, algorithmics and discrete
geometry. Therefore, the first chapter of this manuscript is devoted to
highlighting the geometric nature of the coalition functions of transferable
utility games and to spotlighting the existing connections with the theory of
submodular set functions and polyhedral geometry.
To deepen the links with polyhedral geometry, we define a new family of
polyhedra, called the basic polyhedra, on which we can apply a generalized
version of the Bondareva-Shapley Theorem to check their nonemptiness. To allow
practical use of these computational tools, we present an algorithmic
procedure generating the minimal balanced collections, based on Peleg's method.
Subsequently, we apply the generalization of the Bondareva-Shapley Theorem to
design a collection of algorithmic procedures able to check properties or
generate specific sets of coalitions.
In the next chapter, the connections with combinatorics are investigated.
First, we prove that the balanced collections form a combinatorial species,
and we construct the species of k-uniform hypergraphs of size p as an
intermediate step toward constructing the species of balanced collections. Afterwards, a few
results concerning resonance arrangements distorted by games are introduced,
which gives new information about the space of preimputations and the facial
configuration of the core.
Finally, we address the question of core stability using the results from the
previous chapters. Firstly, we present an algorithm based on Grabisch and
Sudh\"olter's nested balancedness characterization of games with a stable core,
which extensively uses the generalization of the Bondareva-Shapley Theorem
introduced in the second chapter. Secondly, a new necessary condition for core
stability is described, based on the application ... | Geometry of Set Functions in Game Theory: Combinatorial and Computational Aspects | 2023-01-08 03:19:51 | Dylan Laplace Mermoud | http://arxiv.org/abs/2301.02950v2, http://arxiv.org/pdf/2301.02950v2 | cs.GT |
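The Bondareva-Shapley check used throughout the thesis has a standard LP form that is easy to sketch: the core of a game (N, v) is nonempty iff min { x(N) : x(S) >= v(S) for all coalitions S } equals v(N). The 3-player game below is an illustrative assumption (its core is empty).

    # LP form of the Bondareva-Shapley core-nonemptiness check.
    from itertools import chain, combinations
    from scipy.optimize import linprog

    N = (0, 1, 2)
    v = {(0,): 0, (1,): 0, (2,): 0,
         (0, 1): 0.8, (0, 2): 0.8, (1, 2): 0.8, (0, 1, 2): 1.0}

    coalitions = list(chain.from_iterable(
        combinations(N, r) for r in range(1, len(N) + 1)))
    # x(S) >= v(S)  <=>  -x(S) <= -v(S)
    A_ub = [[-1.0 if i in S else 0.0 for i in N] for S in coalitions]
    b_ub = [-v[S] for S in coalitions]

    res = linprog(c=[1.0] * len(N), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * len(N), method="highs")
    print("min x(N):", res.fun, "-> core nonempty:", abs(res.fun - v[N]) < 1e-9)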
35,788 | th | We consider the obnoxious facility location problem (in which agents prefer
the facility location to be far from them) and propose a hierarchy of
distance-based proportional fairness concepts for the problem. These fairness
axioms ensure that groups of agents at the same location are guaranteed to be a
distance from the facility proportional to their group size. We consider
deterministic and randomized mechanisms, and compute tight bounds on the price
of proportional fairness. In the deterministic setting, not only are our
proportional fairness axioms incompatible with strategyproofness, but the Nash
equilibria may also fail to guarantee welfare within a constant factor of the optimal
welfare. On the other hand, in the randomized setting, we identify
proportionally fair and strategyproof mechanisms that give an expected welfare
within a constant factor of the optimal welfare. | Proportional Fairness in Obnoxious Facility Location | 2023-01-11 10:30:35 | Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh | http://arxiv.org/abs/2301.04340v1, http://arxiv.org/pdf/2301.04340v1 | cs.GT |
35,789 | th | Cake cutting is a classic fair division problem, with the cake serving as a
metaphor for a heterogeneous divisible resource. Recently, it was shown that
for any number of players with arbitrary preferences over a cake, it is
possible to partition the players into groups of any desired size and divide
the cake among the groups so that each group receives a single contiguous piece
and every player is envy-free. For two groups, we characterize the group sizes
for which such an assignment can be computed by a finite algorithm, showing
that the task is possible exactly when one of the groups is a singleton. We
also establish an analogous existence result for chore division, and show that
the result does not hold for a mixed cake. | Cutting a Cake Fairly for Groups Revisited | 2023-01-22 08:29:42 | Erel Segal-Halevi, Warut Suksompong | http://dx.doi.org/10.1080/00029890.2022.2153566, http://arxiv.org/abs/2301.09061v1, http://arxiv.org/pdf/2301.09061v1 | econ.TH |
35,790 | th | Pen testing is the problem of selecting high-capacity resources when the only
way to measure the capacity of a resource expends its capacity. We have a set
of $n$ pens with unknown amounts of ink and our goal is to select a feasible
subset of pens maximizing the total ink in them. We are allowed to gather more
information by writing with them, but this uses up ink that was previously in
the pens. Algorithms are evaluated against the standard benchmark, i.e., the
optimal pen testing algorithm, and the omniscient benchmark, i.e., the optimal
selection if the quantities of ink in the pens are known.
We identify optimal and near-optimal pen testing algorithms by drawing
analogies to the auction-theoretic frameworks of deferred-acceptance auctions and
virtual values. Our framework allows the conversion of any near optimal
deferred-acceptance mechanism into a near optimal pen testing algorithm.
Moreover, these algorithms incur an additional overhead of at most
$(1+o(1)) \ln n$ in the approximation factor of the omniscient benchmark. We
use this framework to give pen testing algorithms for various combinatorial
constraints like matroid, knapsack, and general downward-closed constraints and
also for online environments. | Combinatorial Pen Testing (or Consumer Surplus of Deferred-Acceptance Auctions) | 2023-01-29 18:19:37 | Aadityan Ganesh, Jason Hartline | http://arxiv.org/abs/2301.12462v2, http://arxiv.org/pdf/2301.12462v2 | cs.GT |
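A toy sketch of the pen-testing tension (illustrative only, not the paper's deferred-acceptance-based algorithms): probing a pen reveals whether its remaining ink exceeds the probe amount, but the probe itself consumes that much ink.

    # Threshold-probe pen testing vs. the omniscient benchmark (toy model).
    import random

    random.seed(1)

    def pen_test(inks, probe):
        # Probe each pen in turn; select the first whose ink survives the probe.
        remaining = list(inks)
        for i, ink in enumerate(remaining):
            spent = min(ink, probe)
            remaining[i] = ink - spent          # testing uses up ink
            if spent == probe:                  # pen passed the test: select it
                return remaining[i]
        return remaining[-1]                    # fallback: settle for the last pen

    inks = [random.random() for _ in range(100)]
    print("omniscient benchmark:", round(max(inks), 3))
    print("pen-testing payoff:  ", round(pen_test(inks, probe=0.5), 3))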
35,791 | th | The core is a dominant solution concept in economics and cooperative game
theory; it is predominantly used for profit, equivalently cost or utility,
sharing. This paper demonstrates the versatility of this notion by proposing a
completely different use: in a so-called investment management game, which is a
game against nature rather than a cooperative game. This game has only one
agent whose strategy set is all possible ways of distributing her money among
investment firms. The agent wants to pick a strategy such that in each of
exponentially many future scenarios, sufficient money is available in the right
firms so she can buy an optimal investment for that scenario. Such a strategy
constitutes a core imputation under a broad interpretation, though within the
traditional formal framework, of the core. Our game is defined on perfect graphs, since the
maximum stable set problem can be solved in polynomial time for such graphs. We
completely characterize the core of this game, analogous to Shapley and
Shubik's characterization of the core of the assignment game. A key difference is the
following technical novelty: whereas their characterization follows from total
unimodularity, ours follows from total dual integrality. | The Investment Management Game: Extending the Scope of the Notion of Core | 2023-02-01 20:17:16 | Vijay V. Vazirani | http://arxiv.org/abs/2302.00608v5, http://arxiv.org/pdf/2302.00608v5 | econ.TH
35,792 | th | The classic Bayesian persuasion model assumes a Bayesian and best-responding
receiver. We study a relaxation of the Bayesian persuasion model where the
receiver can approximately best respond to the sender's signaling scheme. We
show that, under natural assumptions, (1) the sender can find a signaling
scheme that guarantees itself an expected utility almost as good as its optimal
utility in the classic model, no matter what approximately best-responding
strategy the receiver uses; (2) on the other hand, there is no signaling scheme
that gives the sender much more utility than its optimal utility in the classic
model, even if the receiver uses the approximately best-responding strategy
that is best for the sender. Together, (1) and (2) imply that the approximately
best-responding behavior of the receiver does not affect the sender's maximal
achievable utility a lot in the Bayesian persuasion problem. The proofs of both
results rely on the idea of robustification of a Bayesian persuasion scheme:
given a pair of the sender's signaling scheme and the receiver's strategy, we
can construct another signaling scheme such that the receiver prefers to use
that strategy in the new scheme more than in the original scheme, and the two
schemes give the sender similar utilities. As an application of our main result
(1), we show that, in a repeated Bayesian persuasion model where the receiver
learns to respond to the sender by some algorithms, the sender can do almost as
well as in the classic model. Interestingly, unlike (2), with a learning
receiver the sender can sometimes do much better than in the classic model. | Persuading a Behavioral Agent: Approximately Best Responding and Learning | 2023-02-07 22:12:46 | Yiling Chen, Tao Lin | http://arxiv.org/abs/2302.03719v1, http://arxiv.org/pdf/2302.03719v1 | cs.GT |
35,793 | th | This paper uses value functions to characterize the pure-strategy
subgame-perfect equilibria of an arbitrary, possibly infinite-horizon game. It
specifies the game's extensive form as a pentaform (Streufert 2023p,
arXiv:2107.10801v4), which is a set of quintuples formalizing the abstract
relationships between nodes, actions, players, and situations (situations
generalize information sets). Because a pentaform is a set, this paper can
explicitly partition the game form into piece forms, each of which starts at a
(Selten) subroot and contains all subsequent nodes except those that follow a
subsequent subroot. Then the set of subroots becomes the domain of a value
function, and the piece-form partition becomes the framework for a value
recursion which generalizes the Bellman equation from dynamic programming. The
main results connect the value recursion with the subgame-perfect equilibria of
the original game, under the assumptions of upper- and lower-convergence.
Finally, a corollary characterizes subgame perfection as the absence of an
improving one-piece deviation. | Dynamic Programming for Pure-Strategy Subgame Perfection in an Arbitrary Game | 2023-02-08 06:16:24 | Peter A. Streufert | http://arxiv.org/abs/2302.03855v3, http://arxiv.org/pdf/2302.03855v3 | econ.TH |
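A toy sketch of the piece-by-piece value recursion on a finite single-player tree; the paper's pentaform machinery, multiple players, and infinite horizons are all abstracted away, and the node names and utilities are assumptions.

    # Backward induction piece by piece: values at subroots are computed first,
    # then treated as terminal payoffs by earlier pieces.
    def piece_value(node, children, utility, values):
        if node in values:                       # a later subroot: use its value
            return values[node]
        if node not in children:                 # leaf: payoff
            return utility[node]
        kids = [piece_value(c, children, utility, values)
                for c in children[node]]
        return max(kids)                         # the active player picks the best

    children = {'r': ['s1', 's2'], 's1': ['a', 'b'], 's2': ['c', 'd']}
    utility = {'a': 1.0, 'b': 3.0, 'c': 2.0, 'd': 0.0}
    values = {}
    for subroot in ['s1', 's2', 'r']:            # solve later pieces first
        values[subroot] = piece_value(subroot, children, utility, dict(values))
    print(values)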
35,794 | th | A powerful feature in mechanism design is the ability to irrevocably commit
to the rules of a mechanism. Commitment is achieved by public declaration,
which enables players to verify incentive properties in advance and the outcome
in retrospect. However, public declaration can reveal superfluous information
that the mechanism designer might prefer not to disclose, such as her target
function or private costs. Avoiding this may be possible via a trusted
mediator; however, the availability of a trusted mediator, especially if
mechanism secrecy must be maintained for years, might be unrealistic. We
propose a new approach to commitment, and show how to commit to, and run, any
given mechanism without disclosing it, while enabling the verification of
incentive properties and the outcome -- all without the need for any mediators.
Our framework is based on zero-knowledge proofs -- a cornerstone of modern
cryptographic theory. Applications include non-mediated bargaining with hidden
yet binding offers. | Zero-Knowledge Mechanisms | 2023-02-11 06:43:43 | Ran Canetti, Amos Fiat, Yannai A. Gonczarowski | http://arxiv.org/abs/2302.05590v1, http://arxiv.org/pdf/2302.05590v1 | econ.TH |
35,795 | th | Task allocation is a crucial process in modern systems, but it is often
challenged by incomplete information about the utilities of participating
agents. In this paper, we propose a new profit maximization mechanism for the
task allocation problem, where the task publisher seeks an optimal incentive
function to maximize its own profit and simultaneously ensure the truthful
announcing of the agent's private information (type) and its participation in
the task, while an autonomous agent aims at maximizing its own utility function
by deciding on its participation level and announced type. Our mechanism stands
out from the classical contract theory-based truthful mechanisms as it empowers
agents to make their own decisions about their level of involvement, making it
more practical for many real-world task allocation scenarios. It has been
proven that by considering a linear form of incentive function, consisting of
two decision functions for the task publisher, the mechanism's goals are met.
The proposed truthful mechanism is initially modeled as a non-convex functional
optimization with a double continuum of constraints; nevertheless, we
demonstrate that by deriving an equivalent form of the incentive constraints,
it can be reformulated as a tractable convex optimal control problem. Further,
we propose a numerical algorithm to obtain the solution. | A Tractable Truthful Profit Maximization Mechanism Design with Autonomous Agents | 2023-02-11 15:22:57 | Mina Montazeri, Hamed Kebriaei, Babak N. Araabi | http://arxiv.org/abs/2302.05677v1, http://arxiv.org/pdf/2302.05677v1 | econ.TH |
35,796 | th | In domains where agents interact strategically, game theory is applied widely
to predict how agents would behave. However, game-theoretic predictions are
based on the assumptions that agents are fully rational and believe in
equilibrium plays, which unfortunately are mostly not true when human decision
makers are involved. To address this limitation, a number of behavioral
game-theoretic models are defined to account for the limited rationality of
human decision makers. The "quantal cognitive hierarchy" (QCH) model, which is
one of the more recent models, is demonstrated to be the state-of-the-art model for
predicting human behaviors in normal-form games. The QCH model assumes that
agents in games can be both non-strategic (level-0) and strategic (level-$k$).
For level-0 agents, they choose their strategies irrespective of other agents.
For level-$k$ agents, they assume that other agents would be behaving at levels
less than $k$ and best respond against them. However, an important assumption
of the QCH model is that the distribution of agents' levels follows a Poisson
distribution. In this paper, we relax this assumption and design a
learning-based method at the population level to iteratively estimate the
empirical distribution of agents' reasoning levels. By using a real-world
dataset from the Swedish lowest unique positive integer game, we demonstrate
how our refined QCH model and the iterative solution-seeking process can be
used in providing a more accurate behavioral model for agents. This leads to
better performance in fitting the real data and allows us to track an agent's
progress in learning to play strategically over multiple rounds. | Improving Quantal Cognitive Hierarchy Model Through Iterative Population Learning | 2023-02-13 03:23:26 | Yuhong Xu, Shih-Fen Cheng, Xinyu Chen | http://arxiv.org/abs/2302.06033v2, http://arxiv.org/pdf/2302.06033v2 | cs.GT |
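A minimal sketch of quantal cognitive hierarchy predictions in a symmetric 2x2 game with a free (non-Poisson) level distribution, in the spirit of the refinement above. The payoffs, precision, and level weights are illustrative assumptions, not learned estimates.

    # QCH prediction: level 0 is uniform; level k quantally best responds to a
    # normalized belief over lower levels; aggregate with the level weights.
    import numpy as np

    payoff = np.array([[3.0, 0.0],           # row player's payoffs (assumed)
                       [5.0, 1.0]])
    lam = 1.5                                # quantal response precision
    level_dist = np.array([0.3, 0.5, 0.2])   # weights for levels 0..2 (assumed)

    def softmax(u):
        e = np.exp(lam * (u - u.max()))
        return e / e.sum()

    strategies = [np.array([0.5, 0.5])]      # level 0: uniform play
    for k in range(1, len(level_dist)):
        w = level_dist[:k] / level_dist[:k].sum()    # beliefs over lower levels
        opp = sum(wi * s for wi, s in zip(w, strategies))
        strategies.append(softmax(payoff @ opp))     # quantal best response

    prediction = sum(p * s for p, s in zip(level_dist, strategies))
    print("predicted row-action frequencies:", prediction)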
35,797 | th | Recommendation systems are pervasive in the digital economy. An important
assumption in many deployed systems is that user consumption reflects user
preferences in a static sense: users consume the content they like with no
other considerations in mind. However, as we document in a large-scale online
survey, users do choose content strategically to influence the types of content
they get recommended in the future.
We model this user behavior as a two-stage noisy signalling game between the
recommendation system and users: the recommendation system initially commits to
a recommendation policy and presents content to the users during a cold start
phase, which the users choose to strategically consume in order to affect the
types of content they will be recommended in the subsequent recommendation phase. We show
that in equilibrium, users engage in behaviors that accentuate their
differences from users with different preference profiles. In addition,
(statistical) minorities, out of fear of losing exposure to their minority
content, may not consume content that is liked by mainstream users. We next
propose three interventions that may improve recommendation quality (both on
average and for minorities) when taking into account strategic consumption: (1)
Adopting a recommendation system policy that uses preferences from a prior, (2)
Communicating to users that universally liked ("mainstream") content will not
be used as basis of recommendation, and (3) Serving content that is
personalized-enough yet expected to be liked in the beginning. Finally, we
describe a methodology to inform applied theory modeling with survey results. | Recommending to Strategic Users | 2023-02-13 20:57:30 | Andreas Haupt, Dylan Hadfield-Menell, Chara Podimata | http://arxiv.org/abs/2302.06559v1, http://arxiv.org/pdf/2302.06559v1 | cs.CY |
35,798 | th | LP-duality theory has played a central role in the study of the core, right
from its early days to the present time. However, despite the extensive nature
of this work, basic gaps still remain. We address these gaps using the
following building blocks from LP-duality theory: 1. Total unimodularity (TUM).
2. Complementary slackness conditions and strict complementarity. Our
exploration of TUM leads to defining new games, characterizing their cores and
giving novel ways of using core imputations to enforce constraints that arise
naturally in applications of these games. The latter include: 1. Efficient
algorithms for finding min-max fair, max-min fair and equitable core
imputations. 2. Encouraging diversity and avoiding over-representation in a
generalization of the assignment game. Complementarity enables us to prove new
properties of core imputations of the assignment game and its generalizations. | LP-Duality Theory and the Cores of Games | 2023-02-15 15:46:50 | Vijay V. Vazirani | http://arxiv.org/abs/2302.07627v5, http://arxiv.org/pdf/2302.07627v5 | cs.GT |
35,799 | th | Utilities and transmission system operators (TSOs) around the world implement
demand response programs for reducing electricity consumption by sending
information on the state of balance between supply and demand to end-use consumers.
We construct a Bayesian persuasion model to analyse such demand response
programs. Using a simple model consisting of two time steps for contract
signing and invocation, we analyse the relation between the pricing of
electricity and the incentives of the TSO to garble information about the true
state of the generation. We show that if the electricity is priced at its
marginal cost of production, the TSO has no incentive to lie and always tells
the truth. On the other hand, we provide conditions where overpricing of
electricity leads the TSO to provide no information to the consumer. | Signalling for Electricity Demand Response: When is Truth Telling Optimal? | 2023-02-24 20:36:42 | Rene Aid, Anupama Kowli, Ankur A. Kulkarni | http://arxiv.org/abs/2302.12770v3, http://arxiv.org/pdf/2302.12770v3 | eess.SY |
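A minimal sketch of the persuasion logic in a binary-state, binary-action model, posed as the standard LP over conditional signal probabilities. The prior and payoffs are illustrative assumptions, loosely in the spirit of a TSO recommending "reduce" versus "consume".

    # Optimal signaling in a binary-state, binary-action persuasion problem.
    from scipy.optimize import linprog

    prior = [0.3, 0.7]                 # P(scarce), P(abundant) -- assumed
    u = [[1.0, -2.0],                  # receiver's u[state][action],
         [0.0, 2.0]]                   # actions: 0 = reduce, 1 = consume

    # Variables: q[s] = P(signal "reduce" | state s). The sender (TSO) maximizes
    # the total probability of "reduce", subject to obedience constraints.
    c = [-prior[0], -prior[1]]
    A_ub = [
        # obedient after "reduce":  sum_s prior[s]*q[s]*(u[s][0]-u[s][1]) >= 0
        [-prior[0] * (u[0][0] - u[0][1]), -prior[1] * (u[1][0] - u[1][1])],
        # obedient after "consume": sum_s prior[s]*(1-q[s])*(u[s][1]-u[s][0]) >= 0
        [prior[0] * (u[0][1] - u[0][0]), prior[1] * (u[1][1] - u[1][0])],
    ]
    b_ub = [0.0,
            prior[0] * (u[0][1] - u[0][0]) + prior[1] * (u[1][1] - u[1][0])]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)], method="highs")
    print("P(reduce | scarce), P(reduce | abundant):", res.x)
    print("total P(reduce):", -res.fun)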