id | category | text | title | published | author | link | primary_category |
---|---|---|---|---|---|---|---|
35,599 | th | In this paper monetary risk measures that are positively superhomogeneous,
called star-shaped risk measures, are characterized and their properties
studied. The measures in this class, which arise when the controversial
subadditivity property of coherent risk measures is dispensed with and positive
homogeneity is weakened, include all practically used risk measures, in
particular, both convex risk measures and Value-at-Risk. From a financial
viewpoint, our relaxation of convexity is necessary to quantify the capital
requirements for risk exposure in the presence of liquidity risk, competitive
delegation, or robust aggregation mechanisms. From a decision theoretical
perspective, star-shaped risk measures emerge from variational preferences when
risk mitigation strategies can be adopted by a rational decision maker. | Star-shaped Risk Measures | 2021-03-29 20:33:10 | Erio Castagnoli, Giacomo Cattelan, Fabio Maccheroni, Claudio Tebaldi, Ruodu Wang | http://arxiv.org/abs/2103.15790v3, http://arxiv.org/pdf/2103.15790v3 | econ.TH |
35,600 | th | In this study, we consider the real-world problem of assigning students to
classes, where each student has a preference list, ranking a subset of classes
in order of preference. Though we use existing approaches to handle the daily
class assignment of Gunma University, new concepts and adjustments are required
to obtain improved results on the real instances arising in the field. Thus, we
propose minimax-rank constrained maximum-utility matchings and a compromise
between maximum-utility matchings and fair matchings, where a matching is said
to be fair if it lexicographically minimizes the number of students assigned to
classes not included in their choices, the number of students assigned to their
last choices, and so on. In addition, we observe the potential inefficiency of
the student-proposing deferred acceptance mechanism with single tie-breaking,
which is a hot topic in the literature on the school choice problem. | Optimal class assignment problem: a case study at Gunma University | 2021-03-31 11:02:38 | Akifumi Kira, Kiyohito Nagano, Manabu Sugiyama, Naoyuki Kamiyama | http://arxiv.org/abs/2103.16879v1, http://arxiv.org/pdf/2103.16879v1 | cs.GT |
35,601 | th | We study the experimentation dynamics of a decision maker (DM) in a two-armed
bandit setup (Bolton and Harris (1999)), where the agent holds ambiguous
beliefs regarding the distribution of the return process of one arm and is
certain about the other one. The DM entertains multiplier preferences à la
Hansen and Sargent (2001); thus, we frame the decision-making environment as a
two-player differential game against nature in continuous time. We characterize
the DM's value function and her optimal experimentation strategy, which turns out
to follow a cut-off rule with respect to her belief process. The belief
threshold for exploring the ambiguous arm is found in closed form and is shown
to be increasing with respect to the ambiguity aversion index. We then study
the effect of provision of an unambiguous information source about the
ambiguous arm. Interestingly, we show that the exploration threshold rises
unambiguously as a result of this new information source, thereby leading to
more conservatism. This analysis also sheds light on the efficient time to
reach for an expert opinion. | Robust Experimentation in the Continuous Time Bandit Problem | 2021-03-31 23:42:39 | Farzad Pourbabaee | http://dx.doi.org/10.1007/s00199-020-01328-3, http://arxiv.org/abs/2104.00102v1, http://arxiv.org/pdf/2104.00102v1 | econ.TH |
35,602 | th | We consider a general tractable model for default contagion and systemic risk
in a heterogeneous financial network, subject to an exogenous macroeconomic
shock. We show that, under some regularity assumptions, the default cascade
model could be transferred to a death process problem represented by a
balls-and-bins model. We also reduce the dimension of the problem by
classifying banks according to different types, in an appropriate type space.
These types may be calibrated to real-world data by using machine learning
techniques. We then state various limit theorems regarding the final size of
default cascade over different types. In particular, under suitable assumptions
on the degree and threshold distributions, we show that the final size of
default cascade has asymptotically Gaussian fluctuations. We next state limit
theorems for different system-wide wealth aggregation functions and show how
the systemic risk measure, in a given stress test scenario, could be related to
the structure and heterogeneity of financial networks. We finally show how
these results could be used by a social planner to optimally target
interventions during a financial crisis, with a budget constraint and under
partial information of the financial network. | Limit Theorems for Default Contagion and Systemic Risk | 2021-04-01 07:18:44 | Hamed Amini, Zhongyuan Cao, Agnes Sulem | http://arxiv.org/abs/2104.00248v1, http://arxiv.org/pdf/2104.00248v1 | q-fin.RM |
35,603 | th | In real life auctions, a widely observed phenomenon is the winner's curse --
the winner's high bid implies that the winner often over-estimates the value of
the good for sale, resulting in an incurred negative utility. The seminal work
of Eyster and Rabin [Econometrica'05] introduced a behavioral model aimed at
explaining this observed anomaly. We term agents who display this bias "cursed
agents". We adopt their model in the interdependent value setting, and aim to
devise mechanisms that prevent the cursed agents from obtaining negative
utility. We design mechanisms that are cursed ex-post IC, that is, incentivize
agents to bid their true signal even though they are cursed, while ensuring
that the outcome is individually rational -- the price the agents pay is no
more than the agents' true value.
Since the agents might over-estimate the good's value, such mechanisms might
require the seller to make positive transfers to the agents to prevent agents
from over-paying. For revenue maximization, we give the optimal deterministic
and anonymous mechanism. For welfare maximization, we require ex-post budget
balance (EPBB), as positive transfers might lead to negative revenue. We
propose a masking operation that takes any deterministic mechanism, and imposes
that the seller would not make positive transfers, enforcing EPBB. We show that
in typical settings, EPBB implies that the mechanism cannot make any positive
transfers, implying that applying the masking operation on the fully efficient
mechanism results in a socially optimal EPBB mechanism. This further implies
that if the valuation function is the maximum of agents' signals, the optimal
EPBB mechanism obtains zero welfare. In contrast, we show that for sum-concave
valuations, which include weighted-sum valuations and l_p-norms, the welfare
optimal EPBB mechanism obtains half of the optimal welfare as the number of
agents grows large. | Cursed yet Satisfied Agents | 2021-04-02 04:15:53 | Yiling Chen, Alon Eden, Juntao Wang | http://arxiv.org/abs/2104.00835v4, http://arxiv.org/pdf/2104.00835v4 | cs.GT |
35,604 | th | We use the topology of simplicial complexes to model political structures
following [1]. Simplicial complexes are a natural tool to encode interactions
in the structures since a simplex can be used to represent a subset of
compatible agents. We translate the wedge, cone, and suspension operations into
the language of political structures and show how these constructions
correspond to merging structures and introducing mediators. We introduce the
notions of the viability of an agent and the stability of a political system
and examine their interplay with the simplicial complex topology, casting their
interactions in category-theoretic language whenever possible. We introduce a
refinement of the model by assigning weights to simplices corresponding to the
number of issues the agents agree on. In addition, homology of simplicial
complexes is used to detect non-viabilities, certain cycles of incompatible
agents, and the (non)presence of mediators. Finally, we extend some results
from [1], bringing viability and stability into the language of friendly
delegations and using homology to examine the existence of R-compromises and
D-compromises. | Political structures and the topology of simplicial complexes | 2021-04-05 23:07:35 | Andrea Mock, Ismar Volic | http://arxiv.org/abs/2104.02131v3, http://arxiv.org/pdf/2104.02131v3 | physics.soc-ph |
35,605 | th | This is an introductory textbook of the history of economics of inequality
for undergraduates and genreral readers. It begins with Adam Smith's critique
of Rousseau. The first and second chapters focus on Smith and Karl Marx, in the
broad classical tradition of economics, where it is believed that there is an
inseparable relationship between production and distribution, economic growth
and inequality. Chapters 3 and 4 argue that despite the fact that the founders
of the neoclassical school had shown an active interest in social issues,
namely worker poverty, the issues of production and distribution became
discussed separately among neoclassicals. Toward the end of the 20th century,
however, there was a renewed awareness within economics of the problem of the
relationship between production and distribution. The young Piketty's
beginnings as an economist are set against this backdrop. Chapters 5 to 8
explain the circumstances of the restoration of classical concerns within the
neoclassical framework. Then, in chapters 9 and 10, I discuss the fact that
Thomas Piketty's seminal work is a new development in this "inequality
renaissance," and try to gain a perspective on future trends in the debate.
A mathematical appendix presents simple models of growth and distribution. | The Struggle with Inequality | 2021-04-15 14:18:21 | Shin-Ichiro Inaba | http://arxiv.org/abs/2104.07379v2, http://arxiv.org/pdf/2104.07379v2 | econ.GN |
35,606 | th | A common practice in many auctions is to offer bidders an opportunity to
improve their bids, known as a Best and Final Offer (BAFO) stage. This final
bid can depend on new information provided about either the asset or the
competitors. This paper examines the effects of new information regarding
competitors, seeking to determine what information the auctioneer should
provide assuming the set of allowable bids is discrete. The rational strategy
profile that maximizes the revenue of the auctioneer is the one where each
bidder makes the highest possible bid that is lower than his valuation of the
item. This strategy profile is an equilibrium for a large enough number of
bidders, regardless of the information released. We compare the number of
bidders needed for this profile to be an equilibrium under different
information settings. We find that it becomes an equilibrium with fewer bidders
when less additional information is made available to the bidders regarding the
competition. It follows that when the number of bidders is a priori unknown,
there are some advantages for the auctioneer in not revealing information. | I Want to Tell You? Maximizing Revenue in First-Price Two-Stage Auctions | 2021-04-20 16:06:59 | Galit Ashkenazi-Golan, Yevgeny Tsodikovich, Yannick Viossat | http://arxiv.org/abs/2104.09942v1, http://arxiv.org/pdf/2104.09942v1 | cs.GT |
35,818 | th | Barseghyan and Molinari (2023) give sufficient conditions for
semi-nonparametric point identification of parameters of interest in a mixture
model of decision-making under risk, allowing for unobserved heterogeneity in
utility functions and limited consideration. A key assumption in the model is
that the heterogeneity of risk preferences is unobservable but
context-independent. In this comment, we build on their insights and present
identification results in a setting where the risk preferences are allowed to
be context-dependent. | Context-Dependent Heterogeneous Preferences: A Comment on Barseghyan and Molinari (2023) | 2023-05-18 15:50:58 | Matias D. Cattaneo, Xinwei Ma, Yusufcan Masatlioglu | http://arxiv.org/abs/2305.10934v1, http://arxiv.org/pdf/2305.10934v1 | econ.TH |
35,608 | th | We develop a rational inattention theory of echo chamber, whereby players
gather information about an uncertain state by allocating limited attention
capacities across biased primary sources and other players. The resulting
Poisson attention network transmits information from the primary source to a
player either directly, or indirectly through the other players. Rational
inattention generates heterogeneous demands for information among players who
are biased toward different decisions. In an echo chamber equilibrium, each
player restricts attention to his own-biased source and like-minded friends, as
the latter attend to the same primary source as his, and so could serve as
secondary sources in case the information transmission from the primary source
to him is disrupted. We provide sufficient conditions that give rise to echo
chamber equilibria, characterize the attention networks inside echo chambers,
and use our results to inform the design and regulation of information
platforms. | A Rational Inattention Theory of Echo Chamber | 2021-04-21 20:32:04 | Lin Hu, Anqi Li, Xu Tan | http://arxiv.org/abs/2104.10657v7, http://arxiv.org/pdf/2104.10657v7 | econ.TH |
35,609 | th | Decades of research suggest that information exchange in groups and
organizations can reliably improve judgment accuracy in tasks such as financial
forecasting, market research, and medical decision-making. However, we show
that improving the accuracy of numeric estimates does not necessarily improve
the accuracy of decisions. For binary choice judgments, also known as
classification tasks--e.g. yes/no or build/buy decisions--social influence is
most likely to grow the majority vote share, regardless of the accuracy of that
opinion. As a result, initially inaccurate groups become increasingly
inaccurate after information exchange even as they signal stronger support. We
term this dynamic the "crowd classification problem." Using both a novel
dataset as well as a reanalysis of three previous datasets, we study this
process in two types of information exchange: (1) when people share votes only,
and (2) when people form and exchange numeric estimates prior to voting.
Surprisingly, when people exchange numeric estimates prior to voting, the
binary choice vote can become less accurate even as the average numeric
estimate becomes more accurate. Our findings recommend against voting as a form
of decision-making when groups are optimizing for accuracy. For those cases
where voting is required, we discuss strategies for managing communication to
avoid the crowd classification problem. We close with a discussion of how our
results contribute to a broader contingency theory of collective intelligence. | The Crowd Classification Problem: Social Dynamics of Binary Choice Accuracy | 2021-04-22 23:00:52 | Joshua Becker, Douglas Guilbeault, Ned Smith | http://arxiv.org/abs/2104.11300v1, http://arxiv.org/pdf/2104.11300v1 | econ.GN |
35,610 | th | We modify the standard model of price competition with horizontally
differentiated products, imperfect information, and search frictions by
allowing consumers to flexibly acquire information about a product's match
value during their visits. We characterize a consumer's optimal search and
information acquisition protocol and analyze the pricing game between firms.
Notably, we establish that in search markets there are fundamental differences
between search frictions and information frictions, which affect market prices,
profits, and consumer welfare in markedly different ways. Although higher
search costs beget higher prices (and profits for firms), higher information
acquisition costs lead to lower prices and may benefit consumers. We discuss
implications of our findings for policies concerning disclosure rules and
hidden fees. | Search and Competition with Flexible Investigations | 2021-04-27 16:07:51 | Vasudha Jain, Mark Whitmeyer | http://arxiv.org/abs/2104.13159v1, http://arxiv.org/pdf/2104.13159v1 | econ.TH |
35,611 | th | Many real-world games contain parameters which can affect payoffs, action
spaces, and information states. For fixed values of the parameters, the game
can be solved using standard algorithms. However, in many settings agents must
act without knowing the values of the parameters that will be encountered in
advance. Often the decisions must be made by a human under time and resource
constraints, and it is unrealistic to assume that a human can solve the game in
real time. We present a new framework that enables human decision makers to
make fast decisions without the aid of real-time solvers. We demonstrate
applicability to a variety of situations including settings with multiple
players and imperfect information. | Human strategic decision making in parametrized games | 2021-04-30 06:40:27 | Sam Ganzfried | http://arxiv.org/abs/2104.14744v3, http://arxiv.org/pdf/2104.14744v3 | cs.GT |
35,612 | th | We study two-sided reputational bargaining with opportunities to issue an
ultimatum -- threats to force dispute resolution. Each player is either a
justified type, who never concedes and issues an ultimatum whenever an
opportunity arrives, or an unjustified type, who can concede, wait, or bluff
with an ultimatum. In equilibrium, the presence of ultimatum opportunities can
harm or benefit a player by decelerating or accelerating reputation building.
When only one player can issue an ultimatum, equilibrium play is unique. The
hazard rate of dispute resolution is discontinuous and piecewise monotonic in
time. As the probabilities of being justified vanish, agreement is immediate
and efficient, and if the set of justifiable demands is rich, payoffs modify
Abreu and Gul (2000), with the discount rate replaced by the ultimatum
opportunity arrival rate if the former is smaller. When both players' ultimatum
opportunities arrive sufficiently fast, there may exist multiple equilibria in
which their reputations do not build up and negotiation lasts forever. | Reputational Bargaining with Ultimatum Opportunities | 2021-05-04 18:44:48 | Mehmet Ekmekci, Hanzhe Zhang | http://arxiv.org/abs/2105.01581v1, http://arxiv.org/pdf/2105.01581v1 | econ.TH |
35,613 | th | The present work generalizes the analytical results of Petrikaite (2016) to a
market where more than two firms interact. As a consequence, for a generic
number of firms in the oligopoly model described by Janssen et al (2005), the
relationship between the critical discount factor which sustains the monopoly
collusive allocation and the share of perfectly informed buyers is
non-monotonic, reaching a unique internal point of minimum. The first section
locates the work within the proper economic framework. The second section hosts
the analytical computations and the mathematical reasoning needed to derive the
desired generalization, which mainly relies on the Leibniz rule for the
differentiation under the integral sign and the Bounded Convergence Theorem. | Sustainability of Collusion and Market Transparency in a Sequential Search Market: a Generalization | 2021-05-05 17:49:10 | Jacopo De Tullio, Giuseppe Puleio | http://arxiv.org/abs/2105.02094v1, http://arxiv.org/pdf/2105.02094v1 | econ.TH |
35,614 | th | This paper aims to develop new mathematical and computational tools for
modeling the distribution of portfolio returns across portfolios. We establish
relevant mathematical formulas and propose efficient algorithms, drawing upon
powerful techniques in computational geometry and the literature on splines, to
compute the probability density function, the cumulative distribution function,
and the k-th moment of the probability function. Our algorithmic tools and
implementations efficiently handle portfolios with 10000 assets, and compute
moments of order k up to 40 in a few seconds, thus handling real-life
scenarios. We focus on the long-only strategy which is the most common type of
investment, i.e. on portfolios whose weights are non-negative and sum up to 1;
our approach is readily generalizable. Thus, we leverage a geometric
representation of the stock market, where the investment set defines a simplex
polytope. The cumulative distribution function corresponds to a portfolio score
capturing the percentage of portfolios yielding a return not exceeding a given
value. We introduce closed-form analytic formulas for the first 4 moments of
the cross-sectional returns distribution, as well as a novel algorithm to
compute all higher moments. We show that the first 4 moments are a direct
mapping of the asset returns' moments. All of our algorithms and solutions are
fully general and include the special case of equal asset returns, which was
sometimes excluded in previous works. Finally, we apply our portfolio score in
the design of new performance measures and asset management. We found our
score-based optimal portfolios less concentrated than the mean-variance
portfolio and much less risky in terms of ranking. | The cross-sectional distribution of portfolio returns and applications | 2021-05-14 01:29:12 | Ludovic Calès, Apostolos Chalkis, Ioannis Z. Emiris | http://arxiv.org/abs/2105.06573v1, http://arxiv.org/pdf/2105.06573v1 | cs.CE |
35,615 | th | Motivated by a variety of online matching platforms, we consider demand and
supply units which are located i.i.d. in [0,1]^d, and each demand unit needs to
be matched with a supply unit. The goal is to minimize the expected average
distance between matched pairs (the "cost"). We model dynamic arrivals of one
or both of demand and supply with uncertain locations of future arrivals, and
characterize the scaling behavior of the achievable cost in terms of system
size (number of supply units), as a function of the dimension d. Our
achievability results are backed by concrete matching algorithms. Across cases,
we find that the platform can achieve cost (nearly) as low as that achievable
if the locations of future arrivals had been known beforehand. Furthermore, in
all cases except one, cost nearly as low in terms of scaling as the expected
distance to the nearest neighboring supply unit is achievable, i.e., the
matching constraint does not cause an increase in cost either. The aberrant
case is where only demand arrivals are dynamic, and d=1; excess supply
significantly reduces cost in this case. | Dynamic Spatial Matching | 2021-05-16 04:49:58 | Yash Kanoria | http://arxiv.org/abs/2105.07329v3, http://arxiv.org/pdf/2105.07329v3 | math.PR |
35,616 | th | We consider a generalization of rational inattention problems by measuring
costs of information through the information radius (Sibson, 1969; Verdú,
2015) of statistical experiments. We introduce a notion of attention elasticity
measuring the sensitivity of attention strategies with respect to changes in
incentives. We show how the introduced class of cost functions controls
attention elasticities while the Shannon model restricts attention elasticity
to be unity. We explore further differences and similarities relative to the
Shannon model in relation to invariance, posterior separability, consideration
sets, and the ability to learn events with certainty. Lastly, we provide an
efficient alternating minimization method -- analogous to the Blahut-Arimoto
algorithm -- to obtain optimal attention strategies. | Attention elasticities and invariant information costs | 2021-05-17 04:24:32 | Dániel Csaba | http://arxiv.org/abs/2105.07565v1, http://arxiv.org/pdf/2105.07565v1 | econ.GN |
35,617 | th | The maximin share (MMS) guarantee is a desirable fairness notion for
allocating indivisible goods. While MMS allocations do not always exist,
several approximation techniques have been developed to ensure that all agents
receive a fraction of their maximin share. We focus on an alternative
approximation notion, based on the population of agents, that seeks to
guarantee MMS for a fraction of agents. We show that no optimal approximation
algorithm can satisfy more than a constant number of agents, and discuss the
existence and computation of MMS for all but one agent and its relation to
approximate MMS guarantees. We then prove the existence of allocations that
guarantee MMS for $\frac{2}{3}$ of agents, and devise a polynomial time
algorithm that achieves this bound for up to nine agents. A key implication of
our result is the existence of allocations that guarantee
$\text{MMS}^{\lceil{3n/2}\rceil}$, i.e., the value that agents receive by
partitioning the goods into $\lceil{\frac{3}{2}n}\rceil$ bundles, improving the
best known guarantee of $\text{MMS}^{2n-2}$. Finally, we provide empirical
experiments using synthetic data. | Guaranteeing Maximin Shares: Some Agents Left Behind | 2021-05-19 23:17:42 | Hadi Hosseini, Andrew Searns | http://arxiv.org/abs/2105.09383v1, http://arxiv.org/pdf/2105.09383v1 | cs.GT |
35,618 | th | This exercise proposes a learning mechanism to model economic agent's
decision-making process using an actor-critic structure in the literature of
artificial intelligence. It is motivated by the psychology literature of
learning through reinforcing good or bad decisions. In a model of an
environment, to learn to make decisions, this AI agent needs to interact with
its environment and make explorative actions. Each action in a given state
brings a reward signal to the agent. This interactive experience is saved in
the agent's memory, which is then used to update its subjective belief of the
world. The agent's decision-making strategy is formed and adjusted based on
this evolving subjective belief. The agent does not only take actions that it
knows would bring a high reward; it also explores other possibilities. This
process of taking explorative actions ensures that the agent notices changes
in its environment and adapts its subjective belief and
decisions accordingly. Through a model of stochastic optimal growth, I
illustrate that the economic agent under this proposed learning structure is
adaptive to changes in an underlying stochastic process of the economy. AI
agents can differ in their levels of exploration, which leads to different
experiences in the same environment and is reflected in their different
learning behaviours and the welfare they obtain. The chosen economic structure
captures the fundamental decision-making problem of macroeconomic models,
i.e., how to make consumption-saving decisions over a lifetime, and it can be
generalised to other decision-making processes and economic models. | Learning from zero: how to make consumption-saving decisions in a stochastic environment with an AI algorithm | 2021-05-21 05:39:12 | Rui, Shi | http://arxiv.org/abs/2105.10099v2, http://arxiv.org/pdf/2105.10099v2 | econ.TH |
35,619 | th | This paper introduces Gm, which is a category for extensive-form games. It
also provides some applications.
The category's objects are games, which are understood to be sets of nodes
which have been endowed with edges, information sets, actions, players, and
utility functions. Its arrows are functions from source nodes to target nodes
that preserve the additional structure. For instance, a game's information-set
collection is newly regarded as a topological basis for the game's
decision-node set, and thus a morphism's continuity serves to preserve
information sets. Given these definitions, a game monomorphism is characterized
by the property of not mapping two source runs (plays) to the same target run.
Further, a game isomorphism is characterized as a bijection whose restriction
to decision nodes is a homeomorphism, whose induced player transformation is
injective, and which strictly preserves the ordinal content of the utility
functions.
The category is then applied to some game-theoretic concepts beyond the
definition of a game. A Selten subgame is characterized as a special kind of
categorical subgame, and game isomorphisms are shown to preserve strategy sets,
Nash equilibria, Selten subgames, subgame-perfect equilibria,
perfect-information, and no-absentmindedness. Further, it is shown that the
full subcategory for distinguished-action sequence games is essentially wide in
the category of all games, and that the full subcategory of action-set games is
essentially wide in the full subcategory for games with no-absentmindedness. | A Category for Extensive-Form Games | 2021-05-24 19:40:42 | Peter A. Streufert | http://arxiv.org/abs/2105.11398v1, http://arxiv.org/pdf/2105.11398v1 | econ.TH |
35,620 | th | A monopolist seller of multiple goods screens a buyer whose type is initially
unknown to both but drawn from a commonly known distribution. The buyer
privately learns about his type via a signal. We derive the seller's optimal
mechanism in two different information environments. We begin by deriving the
buyer-optimal outcome. Here, an information designer first selects a signal,
and then the seller chooses an optimal mechanism in response; the designer's
objective is to maximize consumer surplus. Then, we derive the optimal
informationally robust mechanism. In this case, the seller first chooses the
mechanism, and then nature picks the signal that minimizes the seller's
profits. We derive the relation between both problems and show that the optimal
mechanism in both cases takes the form of pure bundling. | Multi-Dimensional Screening: Buyer-Optimal Learning and Informational Robustness | 2021-05-26 05:31:57 | Rahul Deb, Anne-Katrin Roesler | http://arxiv.org/abs/2105.12304v1, http://arxiv.org/pdf/2105.12304v1 | econ.TH |
35,621 | th | This paper studies how violations of structural assumptions like expected
utility and exponential discounting can be connected to basic rationality
violations caused by reference-dependent preferences, even if these assumptions
are typically regarded as independent building blocks of decision theory. A
reference-dependent generalization of behavioral postulates captures preference
changes across various choice domains. It gives rise to a linear order that
endogenously determines reference alternatives, which in turn determines the
preference parameters for a choice problem. With canonical models as
foundation, preference changes are captured using known technologies like the
concavity of utility functions and the levels of discount factors. The
framework allows us to study risk, time, and social preferences collectively,
where seemingly independent anomalies are interconnected through the lens of
reference-dependent choice. | Ordered Reference Dependent Choice | 2021-05-27 05:22:02 | Xi Zhi Lim | http://arxiv.org/abs/2105.12915v4, http://arxiv.org/pdf/2105.12915v4 | econ.TH |
35,622 | th | We propose and develop an algebraic approach to revealed preference. Our
approach dispenses with non-algebraic structure, such as topological
assumptions. We provide algebraic axioms of revealed preference that subsume
previous, classical revealed preference axioms, as well as generate new axioms
for behavioral theories, and show that a data set is rationalizable if and only
if it is consistent with an algebraic axiom. | An algebraic approach to revealed preferences | 2021-05-31 20:34:06 | Mikhail Freer, Cesar Martinelli | http://arxiv.org/abs/2105.15175v1, http://arxiv.org/pdf/2105.15175v1 | econ.TH |
35,623 | th | We consider the algorithmic question of choosing a subset of candidates of a
given size $k$ from a set of $m$ candidates, with knowledge of voters' ordinal
rankings over all candidates. We consider the well-known and classic scoring
rule for achieving diverse representation: the Chamberlin-Courant (CC) or
$1$-Borda rule, where the score of a committee is the average over the voters,
of the rank of the best candidate in the committee for that voter; and its
generalization to the average of the top $s$ best candidates, called the
$s$-Borda rule.
Our first result is an improved analysis of the natural and well-studied
greedy heuristic. We show that greedy achieves a $\left(1 -
\frac{2}{k+1}\right)$-approximation to the maximization (or satisfaction)
version of CC rule, and a $\left(1 - \frac{2s}{k+1}\right)$-approximation to
the $s$-Borda score. Our result improves on the best known approximation
algorithm for this problem. We show that these bounds are almost tight.
For the dissatisfaction (or minimization) version of the problem, we show
that the score of $\frac{m+1}{k+1}$ can be viewed as an optimal benchmark for
the CC rule, as it is essentially the best achievable score of any
polynomial-time algorithm even when the optimal score is a polynomial factor
smaller (under standard computational complexity assumptions). We show that
another well-studied algorithm for this problem, called the Banzhaf rule,
attains this benchmark.
We finally show that for the $s$-Borda rule, when the optimal value is small,
these algorithms can be improved by a factor of $\tilde \Omega(\sqrt{s})$ via
LP rounding. Our upper and lower bounds are a significant improvement over
previous results, and taken together, not only enable us to perform a finer
comparison of greedy algorithms for these problems, but also provide analytic
justification for using such algorithms in practice. | Optimal Algorithms for Multiwinner Elections and the Chamberlin-Courant Rule | 2021-05-31 23:28:59 | Kamesh Munagala, Zeyu Shen, Kangning Wang | http://dx.doi.org/10.1145/3465456.3467624, http://arxiv.org/abs/2106.00091v1, http://arxiv.org/pdf/2106.00091v1 | cs.GT |
35,624 | th | The analogies between economics and classical mechanics can be extended from
constrained optimization to constrained dynamics by formalizing economic
(constraint) forces and economic power in analogy to physical (constraint)
forces in Lagrangian mechanics. In the differential-algebraic equation
framework of General Constrained Dynamics (GCD), households, firms, banks, and
the government employ forces to change economic variables according to their
desire and their power to assert their interest. These ex-ante forces are
completed by constraint forces from unanticipated system constraints to yield
the ex-post dynamics. The flexible out-of-equilibrium model can combine
Keynesian concepts such as the balance sheet approach and slow adaptation of
prices and quantities with bounded rationality (gradient climbing) and
interacting agents discussed in behavioral economics and agent-based models.
The framework integrates some elements of different schools of thought and
overcomes some restrictions inherent to optimization approaches, such as the
assumption of markets operating in or close to equilibrium. Depending on the
parameter choice for power relations and adaptation speeds, the model
nevertheless can converge to a neoclassical equilibrium, and reacts to an
austerity shock in a neoclassical or post-Keynesian way. | Modeling the out-of-equilibrium dynamics of bounded rationality and economic constraints | 2021-06-01 16:39:37 | Oliver Richters | http://dx.doi.org/10.1016/j.jebo.2021.06.005, http://arxiv.org/abs/2106.00483v2, http://arxiv.org/pdf/2106.00483v2 | econ.TH |
35,625 | th | We initiate the work towards a comprehensive picture of the smoothed
satisfaction of voting axioms, to provide a finer and more realistic foundation
for comparing voting rules. We adopt the smoothed social choice framework,
where an adversary chooses arbitrarily correlated "ground truth" preferences
for the agents, on top of which random noises are added. We focus on
characterizing the smoothed satisfaction of two well-studied voting axioms:
Condorcet criterion and participation. We prove that for any fixed number of
alternatives, when the number of voters $n$ is sufficiently large, the smoothed
satisfaction of the Condorcet criterion under a wide range of voting rules is
$1$, $1-\exp(-\Theta(n))$, $\Theta(n^{-0.5})$, $ \exp(-\Theta(n))$, or being
$\Theta(1)$ and $1-\Theta(1)$ at the same time; and the smoothed satisfaction
of participation is $1-\Theta(n^{-0.5})$. Our results address open questions by
Berg and Lepelley in 1994 for these rules, and also confirm the following
high-level message: the Condorcet criterion is a bigger concern than
participation under realistic models. | The Smoothed Satisfaction of Voting Axioms | 2021-06-03 18:55:11 | Lirong Xia | http://arxiv.org/abs/2106.01947v1, http://arxiv.org/pdf/2106.01947v1 | econ.TH |
35,626 | th | A patient seller aims to sell a good to an impatient buyer (i.e., one who
discounts utility over time). The buyer will remain in the market for a period
of time $T$, and her private value is drawn from a publicly known distribution.
What is the revenue-optimal pricing-curve (sequence of (price, time) pairs) for
the seller? Is randomization of help here? Is the revenue-optimal pricing curve
computable in polynomial time? We answer these questions in this paper. We give
an efficient algorithm for computing the revenue-optimal pricing curve. We show
that pricing curves, that post a price at each point of time and let the buyer
pick her utility maximizing time to buy, are revenue-optimal among a much
broader class of sequential lottery mechanisms. That is, mechanisms that allow
the seller to post a menu of lotteries at each point in time cannot earn any higher
revenue than pricing curves. We also show that the even broader class of
mechanisms that allow the menu of lotteries to be adaptively set, can earn
strictly higher revenue than that of pricing curves, and the revenue gap can be
as big as the support size of the buyer's value distribution. | Optimal Pricing Schemes for an Impatient Buyer | 2021-06-04 00:53:37 | Yuan Deng, Jieming Mao, Balasubramanian Sivan, Kangning Wang | http://dx.doi.org/10.1137/1.9781611977554.ch16, http://arxiv.org/abs/2106.02149v2, http://arxiv.org/pdf/2106.02149v2 | cs.GT |
35,627 | th | All economies require physical resource consumption to grow and maintain
their structure. The modern economy is additionally characterized by private
debt. The Human and Resources with MONEY (HARMONEY) economic growth model links
these features using a stock and flow consistent framework in physical and
monetary units. Via an updated version, we explore the interdependence of
growth and three major structural metrics of an economy. First, we show that
relative decoupling of gross domestic product (GDP) from resource consumption
is an expected pattern that occurs because of physical limits to growth, not a
response to avoid physical limits. While an increase in resource efficiency of
operating capital does increase the level of relative decoupling, so does a
change in pricing from one based on full costs to one based only on marginal
costs that neglects depreciation and interest payments leading to higher debt
ratios. Second, if assuming full labor bargaining power for wages, when a
previously-growing economy reaches peak resource extraction and GDP, wages
remain high but profits and debt decline to zero. By removing bargaining power,
profits can remain positive at the expense of declining wages. Third, the
distribution of intermediate transactions within the input-output table of the
model follows the same temporal pattern as in the post-World War II U.S.
economy. These results indicate that the HARMONEY framework enables realistic
investigation of interdependent structural change and trade-offs between
economic distribution, size, and resources consumption. | Interdependence of Growth, Structure, Size and Resource Consumption During an Economic Growth Cycle | 2021-06-04 17:29:50 | Carey W. King | http://arxiv.org/abs/2106.02512v1, http://arxiv.org/pdf/2106.02512v1 | econ.TH |
35,628 | th | The optimal taxation of assets requires attention to two concerns: 1) the
elasticity of the supply of assets and 2) the impact of taxing assets on
distributional objectives. The most efficient way to attend to these two
concerns is to tax assets of different types separately, rather than having one
tax on all assets. When assets are created by specialized effort rather than by
saving, as with innovations, discoveries of mineral deposits and development of
unregulated natural monopolies, it is interesting to consider a regime in which
the government awards a prize for the creation of the asset and then collects
the remaining value of the asset in taxes. Analytically, the prize is like a
wage after taxes. In this perspective, prizes are awarded based on a variation
on optimal taxation theory, while assets of different types are taxed in
divergent ways, depending on their characteristics. Some categories of assets
are abolished. | Optimal Taxation of Assets | 2021-06-05 13:27:19 | Nicolaus Tideman, Thomas Mecherikunnel | http://arxiv.org/abs/2106.02861v1, http://arxiv.org/pdf/2106.02861v1 | econ.GN |
35,629 | th | This paper studies a multi-armed bandit problem where the decision-maker is
loss averse, in particular she is risk averse in the domain of gains and risk
loving in the domain of losses. The focus is on large horizons. Consequences of
loss aversion for asymptotic (large horizon) properties are derived in a number
of analytical results. The analysis is based on a new central limit theorem for
a set of measures under which conditional variances can vary in a largely
unstructured history-dependent way subject only to the restriction that they
lie in a fixed interval. | A Central Limit Theorem, Loss Aversion and Multi-Armed Bandits | 2021-06-10 06:15:11 | Zengjing Chen, Larry G. Epstein, Guodong Zhang | http://arxiv.org/abs/2106.05472v2, http://arxiv.org/pdf/2106.05472v2 | math.PR |
35,630 | th | The spread of disinformation on social platforms is harmful to society. This
harm may manifest as a gradual degradation of public discourse; but it can also
take the form of sudden dramatic events such as the 2021 insurrection on
Capitol Hill. The platforms themselves are in the best position to prevent the
spread of disinformation, as they have the best access to relevant data and the
expertise to use it. However, mitigating disinformation is costly, not only for
implementing detection algorithms or employing manual effort, but also because
limiting such highly viral content impacts user engagement and potential
advertising revenue. Since the costs of harmful content are borne by other
entities, the platform will therefore have no incentive to exercise the
socially-optimal level of effort. This problem is similar to that of
environmental regulation, in which the costs of adverse events are not directly
borne by a firm, the mitigation effort of a firm is not observable, and the
causal link between a harmful consequence and a specific failure is difficult
to prove. For environmental regulation, one solution is to perform costly
monitoring to ensure that the firm takes adequate precautions according to a
specified rule. However, a fixed rule for classifying disinformation becomes
less effective over time, as bad actors can learn to sequentially and
strategically bypass it. Encoding our domain as a Markov decision process, we
demonstrate that no penalty based on a static rule, no matter how large, can
incentivize optimal effort. Penalties based on an adaptive rule can incentivize
optimal effort, but counter-intuitively, only if the regulator sufficiently
overreacts to harmful events by requiring a greater-than-optimal level of
effort. We offer novel insights for the effective regulation of social
platforms, highlight inherent challenges, and discuss promising avenues for
future work. | Disinformation, Stochastic Harm, and Costly Effort: A Principal-Agent Analysis of Regulating Social Media Platforms | 2021-06-18 02:27:43 | Shehroze Khan, James R. Wright | http://arxiv.org/abs/2106.09847v5, http://arxiv.org/pdf/2106.09847v5 | cs.GT |
35,631 | th | In the context of computational social choice, we study voting methods that
assign a set of winners to each profile of voter preferences. A voting method
satisfies the property of positive involvement (PI) if for any election in
which a candidate x would be among the winners, adding another voter to the
election who ranks x first does not cause x to lose. Surprisingly, a number of
standard voting methods violate this natural property. In this paper, we
investigate different ways of measuring the extent to which a voting method
violates PI, using computer simulations. We consider the probability (under
different probability models for preferences) of PI violations in randomly
drawn profiles vs. profile-coalition pairs (involving coalitions of different
sizes). We argue that in order to choose between a voting method that satisfies
PI and one that does not, we should consider the probability of PI violation
conditional on the voting methods choosing different winners. We should also
relativize the probability of PI violation to what we call voter potency, the
probability that a voter causes a candidate to lose. Although absolute
frequencies of PI violations may be low, after this conditioning and
relativization, we see that under certain voting methods that violate PI, much
of a voter's potency is turned against them - in particular, against their
desire to see their favorite candidate elected. | Measuring Violations of Positive Involvement in Voting | 2021-06-22 05:46:37 | Wesley H. Holliday, Eric Pacuit | http://dx.doi.org/10.4204/EPTCS.335.17, http://arxiv.org/abs/2106.11502v1, http://arxiv.org/pdf/2106.11502v1 | cs.GT |
35,632 | th | When reasoning about strategic behavior in a machine learning context it is
tempting to combine standard microfoundations of rational agents with the
statistical decision theory underlying classification. In this work, we argue
that a direct combination of these standard ingredients leads to brittle
solution concepts of limited descriptive and prescriptive value. First, we show
that rational agents with perfect information produce discontinuities in the
aggregate response to a decision rule that we often do not observe empirically.
Second, when any positive fraction of agents is not perfectly strategic,
desirable stable points -- where the classifier is optimal for the data it
entails -- cease to exist. Third, optimal decision rules under standard
microfoundations maximize a measure of negative externality known as social
burden within a broad class of possible assumptions about agent behavior.
Recognizing these limitations we explore alternatives to standard
microfoundations for binary classification. We start by describing a set of
desiderata that help navigate the space of possible assumptions about how
agents respond to a decision rule. In particular, we analyze a natural
constraint on feature manipulations, and discuss properties that are sufficient
to guarantee the robust existence of stable points. Building on these insights,
we then propose the noisy response model. Inspired by smoothed analysis and
empirical observations, noisy response incorporates imperfection in the agent
responses, which we show mitigates the limitations of standard
microfoundations. Our model retains analytical tractability, leads to more
robust insights about stable points, and imposes a lower social burden at
optimality. | Alternative Microfoundations for Strategic Classification | 2021-06-24 03:30:58 | Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt | http://arxiv.org/abs/2106.12705v1, http://arxiv.org/pdf/2106.12705v1 | cs.LG |
35,633 | th | This paper unifies two key results from economic theory, namely, revealed
rational inattention and classical revealed preference. Revealed rational
inattention tests for rationality of information acquisition for Bayesian
decision makers. On the other hand, classical revealed preference tests for
utility maximization under known budget constraints. Our first result is an
equivalence result - we unify revealed rational inattention and revealed
preference through an equivalence map over decision parameters and partial
order for payoff monotonicity over the decision space in both setups. Second,
we exploit the unification result computationally to extend robustness measures
for goodness-of-fit of revealed preference tests in the literature to revealed
rational inattention. This extension facilitates quantifying how well a
Bayesian decision maker's actions satisfy rational inattention. Finally, we
illustrate the significance of the unification result on a real-world YouTube
dataset comprising thumbnail, title and user engagement metadata from
approximately 140,000 videos. We compute the Bayesian analog of robustness
measures from revealed preference literature on YouTube metadata features
extracted from a deep auto-encoder, i.e., a deep neural network that learns
low-dimensional features of the metadata. The computed robustness values show
that YouTube user engagement fits the rational inattention model remarkably
well. All our numerical experiments are completely reproducible. | Unifying Revealed Preference and Revealed Rational Inattention | 2021-06-28 12:01:09 | Kunal Pattanayak, Vikram Krishnamurthy | http://arxiv.org/abs/2106.14486v4, http://arxiv.org/pdf/2106.14486v4 | econ.TH |
35,634 | th | The intermittent nature of renewable energy resources creates extra
challenges in the operation and control of the electricity grid. Demand
flexibility markets can help in dealing with these challenges by introducing
incentives for customers to modify their demand. Market-based demand-side
management (DSM) have garnered serious attention lately due to its promising
capability of maintaining the balance between supply and demand, while also
keeping customer satisfaction at its highest levels. Many researchers have
proposed using concepts from mechanism design theory in their approaches to
market-based DSM. In this work, we provide a review of the advances in
market-based DSM using mechanism design. We provide a categorisation of the
reviewed literature and evaluate the strengths and weaknesses of each design
criteria. We also study the utility function formulations used in the reviewed
literature and provide a critique of the proposed indirect mechanisms. We show
that despite the extensiveness of the literature on this subject, there remain
concerns and challenges that should be addressed for the realistic
implementation of such DSM approaches. We draw conclusions from our review and
discuss possible future research directions. | Applications of Mechanism Design in Market-Based Demand-Side Management | 2021-06-24 14:24:41 | Khaled Abedrabboh, Luluwah Al-Fagih | http://arxiv.org/abs/2106.14659v1, http://arxiv.org/pdf/2106.14659v1 | cs.GT |
35,642 | th | This paper attempts to analyse policymaking in the field of Intellectual
Property (IP) as an instrument of economic growth across the Global North and
South. It begins by studying the links between economic growth and IP, followed
by an understanding of Intellectual Property Rights (IPR) development in the
US, a leading proponent of robust IPR protection internationally. The next
section compares the IPR in the Global North and South and undertakes an
analysis of the diverse factors that result in these differences. The paper
uses the case study of the Indian Pharmaceutical Industry to understand how IPR
may differentially affect economies and conclude that there may not yet be a
one-size-fits-all policy for the adoption of Intellectual Property Rights. | Comparing Intellectual property policy in the Global North and South -- A one-size-fits-all policy for economic prosperity? | 2021-07-14 20:20:43 | S Sidhartha Narayan, Malavika Ranjan, Madhumitha Raghuraman | http://arxiv.org/abs/2107.06855v2, http://arxiv.org/pdf/2107.06855v2 | econ.TH |
35,635 | th | Lloyd S. Shapley \cite{Shapley1953a, Shapley1953} introduced a set of axioms
in 1953, now called the {\em Shapley axioms}, and showed that the axioms
characterize a natural allocation among the players who are in grand coalition
of a {\em cooperative game}. Recently, \citet{StTe2019} showed that a
cooperative game can be decomposed into a sum of {\em component games}, one for
each player, whose value at the grand coalition coincides with the {\em Shapley
value}. The component games are defined by the solutions to the naturally
defined system of least squares linear equations via the framework of the {\em
Hodge decomposition} on the hypercube graph.
In this paper we propose a new set of axioms which characterizes the component
games. Furthermore, we realize them through an intriguing stochastic path
integral driven by a canonical Markov chain. The integrals are a natural
representation of the expected total contribution made by the players for each
coalition, and hence can be viewed as their fair share. This allows us to
interpret the component game values for each coalition also as a valid measure
of fair allocation among the players in the coalition. Our axioms may be viewed
as a completion of the Shapley axioms in view of this characterization of the
Hodge-theoretic component games, and moreover, the stochastic path integral
representation of the component games may be viewed as an extension of the
Shapley formula. | A Hodge theoretic extension of Shapley axioms | 2021-06-29 08:11:02 | Tongseok Lim | http://arxiv.org/abs/2106.15094v3, http://arxiv.org/pdf/2106.15094v3 | math.OC |
35,636 | th | The matching literature often recommends market centralization under the
assumption that agents know their own preferences and that their preferences
are fixed. We find counterevidence to this assumption in a quasi-experiment. In
Germany's university admissions, a clearinghouse implements the early stages of
the Gale-Shapley algorithm in real time. We show that early offers made in this
decentralized phase, although not more desirable, are accepted more often than
later ones. These results, together with survey evidence and a theoretical
model, are consistent with students' costly learning about universities. We
propose a hybrid mechanism to combine the advantages of decentralization and
centralization.
Published at The Journal of Political Economy under a new title, ``Preference
Discovery in University Admissions: The Case for Dynamic Multioffer
Mechanisms,'' available at https://doi.org/10.1086/718983 (Open Access). | Decentralizing Centralized Matching Markets: Implications from Early Offers in University Admissions | 2021-07-04 06:37:19 | Julien Grenet, YingHua He, Dorothea Kübler | http://dx.doi.org/10.1086/718983, http://arxiv.org/abs/2107.01532v2, http://arxiv.org/pdf/2107.01532v2 | econ.GN |
35,637 | th | We propose a model, which nests a susceptible-infected-recovered-deceased
(SIRD) epidemic model into a dynamic macroeconomic equilibrium framework with
agents' mobility. The latter affect both their income and their probability of
infecting and being infected. Strategic complementarities among individual
mobility choices drive the evolution of aggregate economic activity, while
infection externalities caused by individual mobility affect disease diffusion.
The continuum of rational forward-looking agents coordinates on the Nash
equilibrium of a discrete time, finite-state, infinite-horizon Mean Field Game.
We prove the existence of an equilibrium and provide a recursive construction
method for the search of equilibria, which also guides our numerical
investigations. We calibrate the model using the Italian experience of the
COVID-19 epidemic and discuss policy implications. | Mobility decisions, economic dynamics and epidemic | 2021-07-05 01:54:40 | Giorgio Fabbri, Salvatore Federico, Davide Fiaschi, Fausto Gozzi | http://dx.doi.org/10.1007/s00199-023-01485-1, http://arxiv.org/abs/2107.01746v2, http://arxiv.org/pdf/2107.01746v2 | econ.GN |
35,638 | th | Proof-of-Stake blockchains based on a longest-chain consensus protocol are an
attractive energy-friendly alternative to the Proof-of-Work paradigm. However,
formal barriers to "getting the incentives right" were recently discovered,
driven by the desire to use the blockchain itself as a source of
pseudorandomness (Brown-Cohen et al., 2019).
We consider instead a longest-chain Proof-of-Stake protocol with perfect,
trusted, external randomness (e.g. a randomness beacon). We produce two main
results.
First, we show that a strategic miner can strictly outperform an honest miner
with just $32.5\%$ of the total stake. Note that a miner of this size cannot
outperform an honest miner in the Proof-of-Work model. This establishes
that even with access to a perfect randomness beacon, incentives in
Proof-of-Work and Proof-of-Stake longest-chain protocols are fundamentally
different.
Second, we prove that a strategic miner cannot outperform an honest miner
with $30.8\%$ of the total stake. This means that, while not quite as secure as
the Proof-of-Work regime, desirable incentive properties of Proof-of-Work
longest-chain protocols can be approximately recovered via Proof-of-Stake with
a perfect randomness beacon.
The space of possible strategies in a Proof-of-Stake mining game is {\em
significantly} richer than in a Proof-of-Work game. Our main technical
contribution is a characterization of potentially optimal strategies for a
strategic miner, and in particular, a proof that the corresponding
infinite-state MDP admits an optimal strategy that is positive recurrent. | Proof-of-Stake Mining Games with Perfect Randomness | 2021-07-08 22:01:58 | Matheus V. X. Ferreira, S. Matthew Weinberg | http://dx.doi.org/10.1145/3465456.3467636, http://arxiv.org/abs/2107.04069v2, http://arxiv.org/pdf/2107.04069v2 | cs.GT |
35,639 | th | We study the impacts of incomplete information on centralized one-to-one
matching markets. We focus on the commonly used Deferred Acceptance mechanism
(Gale and Shapley, 1962). We show that many complete-information results are
fragile to a small infusion of uncertainty about others' preferences. | Centralized Matching with Incomplete Information | 2021-07-08 23:32:22 | Marcelo Ariel Fernandez, Kirill Rudov, Leeat Yariv | http://arxiv.org/abs/2107.04098v1, http://arxiv.org/pdf/2107.04098v1 | econ.TH |
35,640 | th | The Condorcet criterion (CC) is a classical and well-accepted criterion for
voting. Unfortunately, it is incompatible with many other desiderata including
participation (Par), half-way monotonicity (HM), Maskin monotonicity (MM), and
strategy-proofness (SP). Such incompatibilities are often known as
impossibility theorems, and are proved by worst-case analysis. Previous work
has investigated the likelihood for these impossibilities to occur under
certain models, which are often criticized for being unrealistic.
We strengthen previous work by proving the first set of semi-random
impossibilities for voting rules to satisfy CC and the more general, group
versions of the four desiderata: for any sufficiently large number of voters
$n$, any size of the group $1\le B\le \sqrt n$, any voting rule $r$, and under
a large class of {\em semi-random} models that include Impartial Culture, the
likelihood for $r$ to satisfy CC and Par, CC and HM, CC and MM, or CC and SP is
$1-\Omega(\frac{B}{\sqrt n})$. This matches existing lower bounds for CC and
Par ($B=1$) and CC and SP ($B\le \sqrt n$), showing that many commonly-studied
voting rules are already asymptotically optimal in such cases. | Semi-Random Impossibilities of Condorcet Criterion | 2021-07-14 03:39:33 | Lirong Xia | http://arxiv.org/abs/2107.06435v2, http://arxiv.org/pdf/2107.06435v2 | econ.TH |
35,643 | th | With the growing use of distributed machine learning techniques, there is a
growing need for data markets that allows agents to share data with each other.
Nevertheless data has unique features that separates it from other commodities
including replicability, cost of sharing, and ability to distort. We study a
setup where each agent can be both buyer and seller of data. For this setup, we
consider two cases: bilateral data exchange (trading data with data) and
unilateral data exchange (trading data with money). We model bilateral sharing
as a network formation game and show the existence of strongly stable outcome
under the top agents property by allowing limited complementarity. We propose an
ordered match algorithm that finds the stable outcome in O(N^2) time (where N is
the number of agents). For unilateral sharing, under the assumption of additive
cost structure, we construct competitive prices that can implement any social
welfare maximizing outcome. Finally, for this setup when agents have private
information, we propose the mixed-VCG mechanism, which uses zero-cost distortion
of the shared data, with its isolated impact, to achieve budget balance while
truthfully implementing socially optimal outcomes up to the exact level of
budget imbalance of standard VCG mechanisms. Mixed-VCG uses data distortions as
data money for this purpose. We further relax the zero-cost data distortion
assumption by proposing distorted-mixed-VCG. We also extend our model and
results to data sharing via incremental inquiries and differential privacy
costs. | Data Sharing Markets | 2021-07-19 09:00:34 | Mohammad Rasouli, Michael I. Jordan | http://arxiv.org/abs/2107.08630v2, http://arxiv.org/pdf/2107.08630v2 | econ.TH |
35,644 | th | In economy, viewed as a quantum system working as a circuit, each process at
the microscale is a quantum gate among agents. The global configuration of
economy is addressed by optimizing the sustainability of the whole circuit.
This is done in terms of geodesics, starting from some approximations. A
similar yet somehow different approach is applied for the closed system of the
whole and for economy as an open system. Computations may partly be explicit,
especially when the reality is represented in a simplified way. The circuit can
be also optimized by minimizing its complexity, with a partly similar
formalism, yet generally not along the same paths. | Sustainability of Global Economy as a Quantum Circuit | 2021-07-13 11:40:47 | Antonino Claudio Bonan | http://arxiv.org/abs/2107.09032v1, http://arxiv.org/pdf/2107.09032v1 | econ.TH |
35,645 | th | This paper generalizes L.S. Shapley's celebrated value allocation theory on
coalition games by discovering and applying a fundamental connection between
stochastic path integration driven by canonical time-reversible Markov chains
and Hodge-theoretic discrete Poisson's equations on general weighted graphs.
More precisely, we begin by defining cooperative games on general graphs and
generalize Shapley's value allocation formula for those games in terms of
stochastic path integral driven by the associated canonical Markov chain. We
then show the value allocation operator, one for each player defined by the
path integral, turns out to be the solution to Poisson's equation defined
via the combinatorial Hodge decomposition on general weighted graphs.
Several motivational examples and applications are presented, in particular,
a section is devoted to reinterpret and extend Nash's and Kohlberg and Neyman's
solution concept for cooperative games. This and other examples, e.g. on
revenue management, suggest that our general framework does not have to be
restricted to cooperative games setup, but may apply to broader range of
problems arising in economics, finance and other social and physical sciences. | Hodge theoretic reward allocation for generalized cooperative games on graphs | 2021-07-22 11:13:11 | Tongseok Lim | http://arxiv.org/abs/2107.10510v3, http://arxiv.org/pdf/2107.10510v3 | math.PR |
35,646 | th | This paper specifies an extensive form as a 5-ary relation (that is, as a set
of quintuples) which satisfies eight abstract axioms. Each quintuple is
understood to list a player, a situation (a concept which generalizes an
information set), a decision node, an action, and a successor node.
Accordingly, the axioms are understood to specify abstract relationships
between players, situations, nodes, and actions. Such an extensive form is
called a "pentaform". A "pentaform game" is then defined to be a pentaform
together with utility functions. The paper's main result is to construct an
intuitive bijection between pentaform games and $\mathbf{Gm}$ games (Streufert
2021, arXiv:2105.11398), which are centrally located in the literature, and
which encompass all finite-horizon or infinite-horizon discrete games. In this
sense, pentaform games equivalently formulate almost all extensive-form games.
Secondary results concern disaggregating pentaforms by subsets, constructing
pentaforms by unions, and initial applications to Selten subgames and
perfect-recall (an extensive application to dynamic programming is in Streufert
2023, arXiv:2302.03855). | Specifying a Game-Theoretic Extensive Form as an Abstract 5-ary Relation | 2021-07-22 19:56:07 | Peter A. Streufert | http://arxiv.org/abs/2107.10801v4, http://arxiv.org/pdf/2107.10801v4 | econ.TH |
35,647 | th | We study a model in which before a conflict between two parties escalates
into a war (in the form of an all-pay auction), a party can offer a
take-it-or-leave-it bribe to the other for a peaceful settlement. In contrast
to the received literature, we find that peace security is impossible in our
model. We characterize the necessary and sufficient conditions for peace
implementability. Furthermore, we find that separating equilibria do not exist
and the number of (on-path) bribes in any non-peaceful equilibrium is at most
two. We also consider a requesting model and characterize the necessary and
sufficient conditions for the existence of robust peaceful equilibria, all of
which are sustained by the same (on-path) request. Contrary to the bribing
model, peace security is possible in the requesting model. | Peace through bribing | 2021-07-24 13:12:10 | Jingfeng Lu, Zongwei Lu, Christian Riis | http://arxiv.org/abs/2107.11575v3, http://arxiv.org/pdf/2107.11575v3 | econ.TH |
35,648 | th | We propose a new single-winner voting system using ranked ballots: Stable
Voting. The motivating principle of Stable Voting is that if a candidate A
would win without another candidate B in the election, and A beats B in a
head-to-head majority comparison, then A should still win in the election with
B included (unless there is another candidate A' who has the same kind of claim
to winning, in which case a tiebreaker may choose between such candidates). We
call this principle Stability for Winners (with Tiebreaking). Stable Voting
satisfies this principle while also having a remarkable ability to avoid tied
outcomes in elections even with small numbers of voters. | Stable Voting | 2021-08-02 00:06:56 | Wesley H. Holliday, Eric Pacuit | http://arxiv.org/abs/2108.00542v9, http://arxiv.org/pdf/2108.00542v9 | econ.TH |
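The stability principle quoted above suggests a recursive procedure. The sketch below is a rough reading of a simplified ("Simple Stable Voting"-style) version of that idea: sort head-to-head margins and pick the first candidate A that both beats some B and wins the election with B removed. The paper's full rule, including its tiebreaker, may differ; treat this as illustrative only.

```python
# Rough recursive sketch of the stability idea behind Stable Voting (simplified;
# the paper's full rule adds a specific tiebreaking step, omitted here).
from itertools import permutations

def margin(ballots, a, b):
    """Net number of ballots ranking a above b (ballots are lists, best first)."""
    return sum(1 if r.index(a) < r.index(b) else -1 for r in ballots)

def simple_stable_winners(ballots, candidates):
    if len(candidates) == 1:
        return set(candidates)
    pairs = sorted(permutations(candidates, 2),
                   key=lambda ab: margin(ballots, *ab), reverse=True)
    best, winners = None, set()
    for a, b in pairs:
        m = margin(ballots, a, b)
        if best is not None and m < best:
            break
        # a should win if a wins the election with b removed
        reduced = [[c for c in r if c != b] for r in ballots]
        if a in simple_stable_winners(reduced, [c for c in candidates if c != b]):
            winners.add(a)
            best = m
    return winners

ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2 + [["C", "A", "B"]] * 2
print(simple_stable_winners(ballots, ["A", "B", "C"]))  # resolves the A>B>C>A cycle to {'A'}
```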
35,649 | th | Unlike tic-tac-toe or checkers, in which optimal play leads to a draw, it is
not known whether optimal play in chess ends in a win for White, a win for
Black, or a draw. But if, after White's first move, Black makes a double move
followed by a double move by White and then play alternates, the game is more
balanced because White does not always tie or lead in moves. Symbolically,
Balanced Alternation gives the following move sequence: After White's (W)
initial move, first Black (B) and then White each have two moves in a row
(BBWW), followed by the alternating sequence, beginning with W, which
altogether can be written as WB/BW/WB/WB/WB... (the slashes separate
alternating pairs of moves). Except for reversal of the 3rd and 4th moves from
WB to BW, this is the standard chess sequence. Because Balanced Alternation
lies between the standard sequence, which favors White, and a comparable
sequence that favors Black, it is highly likely to produce a draw with optimal
play, rendering chess fairer. This conclusion is supported by a computer
analysis of chess openings and how they would play out under Balanced
Alternation. | Fairer Chess: A Reversal of Two Opening Moves in Chess Creates Balance Between White and Black | 2021-08-05 15:14:36 | Steven J. Brams, Mehmet S. Ismail | http://arxiv.org/abs/2108.02547v2, http://arxiv.org/pdf/2108.02547v2 | econ.TH |
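For concreteness, the snippet below generates the standard move sequence and the Balanced Alternation sequence obtained, as the abstract notes, by swapping the 3rd and 4th moves; a trivial sketch.

```python
# First n movers under standard alternation and under Balanced Alternation,
# which swaps the 3rd and 4th moves of the standard chess sequence.
def standard(n):
    return ["W" if i % 2 == 0 else "B" for i in range(n)]

def balanced_alternation(n):
    seq = standard(n)
    if n >= 4:
        seq[2], seq[3] = seq[3], seq[2]   # swap the 3rd and 4th moves
    return seq

print("".join(standard(10)))              # WBWBWBWBWB
print("".join(balanced_alternation(10)))  # WBBWWBWBWB
```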
35,650 | th | To protect his teaching evaluations, an economics professor uses the
following exam curve: if the class average falls below a known target, $m$,
then all students will receive an equal number of free points so as to bring
the mean up to $m$. If the average is above $m$ then there is no curve; curved
grades above $100\%$ will never be truncated to $100\%$ in the gradebook. The
$n$ students in the course all have Cobb-Douglas preferences over the
grade-leisure plane; effort corresponds exactly to earned (uncurved) grades in
a $1:1$ fashion. The elasticity of each student's utility with respect to his
grade is his ability parameter, or relative preference for a high score. I
find, classify, and give complete formulas for all the pure Nash equilibria of
my own game, which my students have been playing for some eight semesters. The
game is supermodular, featuring strategic complementarities, negative
spillovers, and nonsmooth payoffs that generate non-convexities in the reaction
correspondence. The $n+2$ types of equilibria are totally ordered with respect
to effort and Pareto preference, and the lowest $n+1$ of these types are
totally ordered in grade-leisure space. In addition to the no-curve
("try-hard") and curved interior equilibria, we have the "$k$-don't care"
equilibria, whereby the $k$ lowest-ability students are no-shows. As the class
size becomes infinite in the curved interior equilibrium, all students increase
their leisure time by a fixed percentage, i.e., $14\%$, in response to the
disincentive, which amplifies any pre-existing ability differences. All
students' grades inflate by this same (endogenous) factor, say, $1.14$ times
what they would have been under the correct standard. | Grade Inflation and Stunted Effort in a Curved Economics Course | 2021-08-08 21:48:31 | Alex Garivaltis | http://arxiv.org/abs/2108.03709v3, http://arxiv.org/pdf/2108.03709v3 | econ.TH |
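The curve itself is easy to state in code. The sketch below implements the rule as described (uniform free points whenever the class average falls below the target m, with no truncation above 100%); the sample grades are hypothetical.

```python
# The exam curve described above: if the class average is below the target m,
# every student receives the same number of free points to bring the mean to m;
# curved grades above 100% are not truncated.
def curve(grades, m):
    avg = sum(grades) / len(grades)
    bonus = max(0.0, m - avg)
    return [g + bonus for g in grades]

raw = [40.0, 55.0, 90.0]           # hypothetical earned (uncurved) grades
print(curve(raw, m=70.0))          # mean ~61.67 -> everyone gets ~8.33 free points
```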
35,651 | th | We propose and investigate a model for mate searching and marriage in large
societies based on a stochastic matching process and simple decision rules.
Agents have preferences among themselves given by some probability
distribution. They randomly search for better mates, forming new couples and
breaking apart in the process. Marriage is implemented in the model by adding
the decision of stopping searching for a better mate when the affinity between
a couple is higher than a certain fixed amount. We show that the average
utility in the system with marriage can be higher than in the system without
it. Part of our results can be summarized in what sounds like a piece of
advice: don't marry the first person you like and don't search for the love of
your life, but get married if you like your partner more than a sigma above
average. We also find that the average utility attained in our stochastic model
is smaller than the one associated with a stable matching achieved using the
Gale-Shapley algorithm. This can be taken as a formal argument in favor of a
central planner (perhaps an app) with the information to coordinate the
marriage market in order to set a stable matching. To roughly test the adequacy
of our model to describe existing societies, we compare the evolution of the
fraction of married couples in our model with real-world data and obtain good
agreement. In the last section, we formulate the model in the limit of an
infinite number of agents and find an analytical expression for the evolution
of the system. | Benefits of marriage as a search strategy | 2021-08-10 22:24:38 | Davi B. Costa | http://arxiv.org/abs/2108.04885v2, http://arxiv.org/pdf/2108.04885v2 | econ.TH |
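The "one sigma above average" stopping rule invites a quick Monte Carlo check. The toy below draws i.i.d. standard normal affinities, which is a simplifying assumption rather than the paper's matching process, and reports how the marriage threshold trades off waiting time against spouse affinity.

```python
# Toy check of the stopping rule "marry if you like your partner more than a
# sigma above average", with i.i.d. standard normal affinities (an assumption).
import random

def simulate(threshold, trials=10_000, rng=random):
    waits, affinities = [], []
    for _ in range(trials):
        t = 0
        while True:
            t += 1
            a = rng.gauss(0.0, 1.0)       # affinity with a randomly met partner
            if a > threshold:             # marry and stop searching
                waits.append(t)
                affinities.append(a)
                break
    return sum(waits) / trials, sum(affinities) / trials

random.seed(0)
for thr in (0.0, 1.0, 2.0):
    w, a = simulate(thr)
    print(f"threshold {thr:.0f} sigma: avg wait {w:5.1f} meetings, avg spouse affinity {a:.2f}")
```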
35,652 | th | A network game assigns a level of collectively generated wealth to every
network that can form on a given set of players. A variable network game
combines a network game with a network formation probability distribution,
describing certain restrictions on network formation. Expected levels of
collectively generated wealth and expected individual payoffs can be formulated
in this setting.
We investigate properties of the resulting expected wealth levels as well as
the expected variants of well-established network game values, viewed as
allocation rules that assign a payoff to every player in a variable network
game. We establish two axiomatizations of the Expected Myerson
Value, originally formulated and proven on the class of communication
situations, based on the well-established component balance, equal bargaining
power and balanced contributions properties. Furthermore, we extend an
established axiomatization of the Position Value based on the balanced link
contribution property to the Expected Position Value. | Expected Values for Variable Network Games | 2021-08-16 15:35:40 | Subhadip Chakrabarti, Loyimee Gogoi, Robert P Gilles, Surajit Borkotokey, Rajnish Kumar | http://arxiv.org/abs/2108.07047v2, http://arxiv.org/pdf/2108.07047v2 | cs.GT |
35,653 | th | This paper studies dynamic monopoly pricing for a class of settings that
includes multiple durable, multiple rental, or a mix of varieties. We show that
the driving force behind pricing dynamics is the seller's incentive to switch
consumers - buyers and non-buyers - to higher-valued consumption options by
lowering prices ("trading up"). If consumers cannot be traded up from the
static optimal allocation, pricing dynamics do not emerge in equilibrium. If
consumers can be traded up, pricing dynamics arise until all trading-up
opportunities are exhausted. We study the conditions under which pricing
dynamics end in finite time and characterize the final prices at which dynamics
end. | Dynamic Monopoly Pricing With Multiple Varieties: Trading Up | 2021-08-16 18:22:25 | Stefan Buehler, Nicolas Eschenbaum | http://arxiv.org/abs/2108.07146v2, http://arxiv.org/pdf/2108.07146v2 | econ.GN |
35,654 | th | An important but understudied question in economics is how people choose when
facing uncertainty in the timing of events. Here we study preferences over time
lotteries, in which the payment amount is certain but the payment time is
uncertain. Expected discounted utility theory (EDUT) predicts decision makers
to be risk-seeking over time lotteries. We explore a normative model of
growth-optimality, in which decision makers maximise the long-term growth rate
of their wealth. Revisiting experimental evidence on time lotteries, we find
that growth-optimality accords better with the evidence than EDUT. We outline
future experiments to scrutinise further the plausibility of growth-optimality. | Risk Preferences in Time Lotteries | 2021-08-18 22:46:55 | Yonatan Berman, Mark Kirstein | http://arxiv.org/abs/2108.08366v1, http://arxiv.org/pdf/2108.08366v1 | econ.TH |
35,655 | th | It is believed that interventions that change the media's costs of
misreporting can increase the information provided by media outlets. This paper
analyzes the validity of this claim and the welfare implications of those types
of interventions that affect misreporting costs. I study a model of
communication between an uninformed voter and a media outlet that knows the
quality of two competing candidates. The alternatives available to the voter
are endogenously championed by the two candidates. I show that higher costs may
lead to more misreporting and persuasion, whereas low costs result in full
revelation; interventions that increase misreporting costs never harm the
voter, but those that do so slightly may be wasteful of public resources. I
conclude that intuitions derived from the interaction between the media and
voters, without incorporating the candidates' strategic responses to the media
environment, do not capture properly the effects of these types of
interventions. | Influential News and Policy-making | 2021-08-25 14:00:38 | Federico Vaccari | http://arxiv.org/abs/2108.11177v1, http://arxiv.org/pdf/2108.11177v1 | econ.GN |
35,656 | th | This paper presents six theorems and ten propositions that can be read as
deconstructing and integrating the continuity postulate under the rubric of
pioneering work of Eilenberg, Wold, von Neumann-Morgenstern, Herstein-Milnor
and Debreu. Its point of departure is the fact that the adjective continuous
applied to a function or a binary relation does not acknowledge the many
meanings that can be given to the concept it names, and that under a variety of
technical mathematical structures, its many meanings can be whittled down to
novel and unexpected equivalences that have been missed in the theory of
choice. Specifically, it provides a systematic investigation of the two-way
relation between restricted and full continuity of a function and a binary
relation that, under convex, monotonic and differentiable structures, draws out
the behavioral implications of the postulate. | The Continuity Postulate in Economic Theory: A Deconstruction and an Integration | 2021-08-26 15:29:46 | Metin Uyanik, M. Ali Khan | http://arxiv.org/abs/2108.11736v2, http://arxiv.org/pdf/2108.11736v2 | econ.TH |
35,657 | th | This paper studies the design of mechanisms that are robust to
misspecification. We introduce a novel notion of robustness that connects a
variety of disparate approaches and study its implications in a wide class of
mechanism design problems. This notion is quantifiable, allowing us to
formalize and answer comparative statics questions relating the nature and
degree of misspecification to sharp predictions regarding features of feasible
mechanisms. This notion also has a behavioral foundation which reflects the
perception of ambiguity, thus allowing the degree of misspecification to emerge
endogenously. In a number of standard settings, robustness to arbitrarily small
amounts of misspecification generates a discontinuity in the set of feasible
mechanisms and uniquely selects simple, ex post incentive compatible mechanisms
such as second-price auctions. Robustness also sheds light on the value of
private information and the prevalence of full or virtual surplus extraction. | Uncertainty in Mechanism Design | 2021-08-28 14:58:45 | Giuseppe Lopomo, Luca Rigotti, Chris Shannon | http://arxiv.org/abs/2108.12633v1, http://arxiv.org/pdf/2108.12633v1 | econ.TH |
35,658 | th | This paper investigates stochastic continuous time contests with a twist: the
designer requires that contest participants incur some cost to submit their
entries. When the designer wishes to maximize the (expected) performance of the
top performer, a strictly positive submission fee is optimal. When the designer
wishes to maximize total (expected) performance, either the highest submission
fee or the lowest submission fee is optimal. | Submission Fees in Risk-Taking Contests | 2021-08-30 23:07:59 | Mark Whitmeyer | http://arxiv.org/abs/2108.13506v1, http://arxiv.org/pdf/2108.13506v1 | econ.TH |
35,659 | th | Information policies such as scores, ratings, and recommendations are
increasingly shaping society's choices in high-stakes domains. We provide a
framework to study the welfare implications of information policies on a
population of heterogeneous individuals. We define and characterize the Bayes
welfare set, consisting of the population's utility profiles that are feasible
under some information policy. The Pareto frontier of this set can be recovered
by a series of standard Bayesian persuasion problems, in which a utilitarian
planner takes the role of the information designer. We provide necessary and
sufficient conditions under which an information policy exists that Pareto
dominates the no-information policy. We illustrate our results with
applications to data leakage, price discrimination, and credit ratings. | Persuasion and Welfare | 2021-09-07 15:51:58 | Laura Doval, Alex Smolin | http://arxiv.org/abs/2109.03061v4, http://arxiv.org/pdf/2109.03061v4 | econ.TH |
35,660 | th | The inventories carried in a supply chain as a strategic tool to influence
the competing firms are considered to be strategic inventories (SI). We present
a two-period game-theoretic supply chain model, in which a singular
manufacturer supplies products to a pair of identical Cournot duopolistic
retailers. We show that the SI carried by the retailers under dynamic contract
is Pareto-dominating for the manufacturer, retailers, consumers, the channel,
and society as a whole. We also find, however, that the retailers' SI can be
eliminated when the manufacturer commits to a wholesale contract or the
inventory holding cost is too high. Comparing the cases with and without
downstream competition, we also show that the downstream Cournot duopoly undermines the
retailers in profits, but benefits all others. | Strategic Inventories in a Supply Chain with Downstream Cournot Duopoly | 2021-09-15 01:21:29 | Xiaowei Hu, Jaejin Jang, Nabeel Hamoud, Amirsaman Bajgiran | http://dx.doi.org/10.1504/IJOR.2021.119934, http://arxiv.org/abs/2109.06995v2, http://arxiv.org/pdf/2109.06995v2 | econ.GN |
35,661 | th | This paper proposes a new approach to training recommender systems called
deviation-based learning. The recommender and rational users have different
knowledge. The recommender learns user knowledge by observing what action users
take upon receiving recommendations. Learning eventually stalls if the
recommender always suggests a choice: Before the recommender completes
learning, users start following the recommendations blindly, and their choices
do not reflect their knowledge. The learning rate and social welfare improve
substantially if the recommender abstains from recommending a particular choice
when she predicts that multiple alternatives will produce a similar payoff. | Deviation-Based Learning: Training Recommender Systems Using Informed User Choice | 2021-09-20 22:51:37 | Junpei Komiyama, Shunya Noda | http://arxiv.org/abs/2109.09816v2, http://arxiv.org/pdf/2109.09816v2 | econ.TH |
35,703 | th | We consider the problem of revenue-maximizing Bayesian auction design with
several bidders having independent private values over several items. We show
that it can be reduced to the problem of continuous optimal transportation
introduced by Beckmann (1952) where the optimal transportation flow generalizes
the concept of ironed virtual valuations to the multi-item setting. We
establish the strong duality between the two problems and the existence of
solutions. The results rely on insights from majorization and optimal
transportation theories and on the characterization of feasible interim
mechanisms by Hart and Reny (2015). | Beckmann's approach to multi-item multi-bidder auctions | 2022-03-14 06:30:58 | Alexander V. Kolesnikov, Fedor Sandomirskiy, Aleh Tsyvinski, Alexander P. Zimin | http://arxiv.org/abs/2203.06837v2, http://arxiv.org/pdf/2203.06837v2 | econ.TH |
35,662 | th | We introduce NEM X, an inclusive retail tariff model that captures features
of existing net energy metering (NEM) policies. It is shown that the optimal
prosumer decision has three modes: (a) the net-consuming mode where the
prosumer consumes more than its behind-the-meter distributed energy resource
(DER) production when the DER production is below a predetermined lower
threshold, (b) the net-producing mode where the prosumer consumes less than its
DER production when the DER production is above a predetermined upper
threshold, and (c) the net-zero energy mode where the prosumer's consumption
matches its DER generation when its DER production is between the lower and
upper thresholds. Both thresholds are obtained in closed-form. Next, we analyze
the regulator's rate-setting process that determines NEM X parameters such as
retail/sell rates, fixed charges, and price differentials in time-of-use
tariffs' on and off-peak periods. A stochastic Ramsey pricing program that
maximizes social welfare subject to the revenue break-even constraint for the
regulated utility is formulated. Performance of several NEM X policies is
evaluated using real and synthetic data to illuminate impacts of NEM policy
designs on social welfare, cross-subsidies of prosumers by consumers, and
payback time of DER investments that affect long-run DER adoptions. | On Net Energy Metering X: Optimal Prosumer Decisions, Social Welfare, and Cross-Subsidies | 2021-09-21 08:58:59 | Ahmed S. Alahmed, Lang Tong | http://arxiv.org/abs/2109.09977v4, http://arxiv.org/pdf/2109.09977v4 | eess.SY |
35,663 | th | An individual can only experience regret if she learns about an unchosen
alternative. In many situations, learning about an unchosen alternative is
possible only if someone else chose it. We develop a model where the ex-post
information available to each regret averse individual depends both on their
own choice and on the choices of others, as others can reveal ex-post
information about what might have been. This implies that what appears to be a
series of isolated single-person decision problems is in fact a rich
multi-player behavioural game, the regret game, where the psychological payoffs
that depend on ex-post information are interconnected. For an open set of
parameters, the regret game is a coordination game with multiple equilibria,
despite the fact that all individuals possess a uniquely optimal choice in
isolation. We experimentally test this prediction and find support for it. | Ignorance is Bliss: A Game of Regret | 2021-09-22 21:34:55 | Claudia Cerrone, Francesco Feri, Philip R. Neary | http://arxiv.org/abs/2109.10968v3, http://arxiv.org/pdf/2109.10968v3 | econ.TH |
35,664 | th | I extend the concept of absorptive capacity, used in the analysis of firms,
to a framework applicable to the national level. First, employing confirmatory
factor analyses on 47 variables, I build 13 composite factors crucial to
measuring six national level capacities: technological capacity, financial
capacity, human capacity, infrastructural capacity, public policy capacity, and
social capacity. My data cover most low- and middle-income economies (LMICs),
eligible for the World Bank's International Development Association (IDA)
support between 2005 and 2019. Second, I analyze the relationship between the
estimated capacity factors and economic growth while controlling for some of
the incoming flows from abroad and other confounders that might influence the
relationship. Lastly, I conduct K-means cluster analysis and then analyze the
results alongside regression estimates to glean patterns and classifications
within the LMICs. Results indicate that enhancing infrastructure (ICT, energy,
trade, and transport), financial (apparatus and environment), and public policy
capacities is a prerequisite for attaining economic growth. Similarly, I find
improving human capital with specialized skills positively impacts economic
growth. Finally, by providing a ranking of which capacity is empirically more
important for economic growth, I offer suggestions to governments with limited
budgets to make wise investments. Likewise, my findings inform international
policy and monetary bodies on how they could better channel their funding in
LMICs to achieve sustainable development goals and boost shared prosperity. | Absorptive capacities and economic growth in low and middle income economies | 2021-09-23 20:49:51 | Muhammad Salar Khan | http://arxiv.org/abs/2109.11550v1, http://arxiv.org/pdf/2109.11550v1 | econ.GN |
35,665 | th | Fair division is a significant, long-standing problem and is closely related
to social and economic justice. Conventional division methods such as
cut-and-choose are hardly applicable to real-world problems because of their
complexity and unrealistic assumptions about human behavior. Here we propose a
fair division method from a completely different perspective, using the
Boltzmann distribution. The Boltzmann distribution adopted from the physical
sciences gives the most probable and unbiased distribution derived from a
goods-centric, rather than a player-centric, division process. The mathematical
model of the Boltzmann fair division was developed for both homogeneous and
heterogeneous division problems, and the players' key factors (contributions,
needs, and preferences) could be successfully integrated. We show that the
Boltzmann fair division is a well-balanced division method maximizing the
players' total utility, and it could be easily fine-tuned and applied to
complex real-world problems such as income/wealth redistribution or
international negotiations on fighting climate change. | The Boltzmann fair division for distributive justice | 2021-09-24 15:17:04 | Ji-Won Park, Jaeup U. Kim, Cheol-Min Ghim, Chae Un Kim | http://arxiv.org/abs/2109.11917v2, http://arxiv.org/pdf/2109.11917v2 | econ.GN |
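The abstract does not spell out the allocation formula, so the following is only a guess at the flavor of a Boltzmann-style division: each player's share is proportional to exp(beta * w_i), where the weights w_i stand in for contributions, needs, or preferences. All symbols and numbers here are assumptions for illustration, not the paper's formula.

```python
# Sketch of a Boltzmann-style division: player i's share of the good is
# proportional to exp(beta * w_i).  Weights and functional form are assumptions.
import math

def boltzmann_shares(weights, beta=1.0):
    factors = [math.exp(beta * w) for w in weights]
    total = sum(factors)
    return [f / total for f in factors]

weights = [0.2, 0.5, 1.0]                  # hypothetical player factors
for beta in (0.0, 1.0, 5.0):
    print(beta, [round(s, 3) for s in boltzmann_shares(weights, beta)])
# beta = 0 gives an equal split; larger beta concentrates the good on high-weight players.
```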
35,666 | th | Bilateral trade, a fundamental topic in economics, models the problem of
intermediating between two strategic agents, a seller and a buyer, willing to
trade a good for which they hold private valuations. In this paper, we cast the
bilateral trade problem in a regret minimization framework over $T$ rounds of
seller/buyer interactions, with no prior knowledge on their private valuations.
Our main contribution is a complete characterization of the regret regimes for
fixed-price mechanisms with different feedback models and private valuations,
using as a benchmark the best fixed-price in hindsight. More precisely, we
prove the following tight bounds on the regret:
- $\Theta(\sqrt{T})$ for full-feedback (i.e., direct revelation mechanisms).
- $\Theta(T^{2/3})$ for realistic feedback (i.e., posted-price mechanisms)
and independent seller/buyer valuations with bounded densities.
- $\Theta(T)$ for realistic feedback and seller/buyer valuations with bounded
densities.
- $\Theta(T)$ for realistic feedback and independent seller/buyer valuations.
- $\Theta(T)$ for the adversarial setting. | Bilateral Trade: A Regret Minimization Perspective | 2021-09-09 01:11:48 | Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Federico Fusco, Stefano Leonardi | http://arxiv.org/abs/2109.12974v1, http://arxiv.org/pdf/2109.12974v1 | cs.GT |
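To make the benchmark concrete: with a fixed posted price p, trade occurs in a round exactly when seller cost <= p <= buyer value, yielding gain from trade value - cost. The sketch below computes the regret of one posted price against the best fixed price in hindsight on a toy valuation sequence (all data hypothetical).

```python
# Gain-from-trade of a fixed price p on a sequence of (seller cost, buyer value):
# trade happens iff cost <= p <= value, yielding value - cost; otherwise 0.
import random

def gft(price, rounds):
    return sum(v - c for c, v in rounds if c <= price <= v)

random.seed(1)
rounds = [(random.random(), random.random()) for _ in range(1000)]  # toy valuations in [0, 1]

grid = [i / 100 for i in range(101)]
best_fixed = max(gft(p, rounds) for p in grid)        # best fixed price in hindsight
my_price = 0.5                                        # some mechanism's posted price
print("regret of posting p = 0.5:", best_fixed - gft(my_price, rounds))
```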
35,667 | th | Matching markets are of particular interest in computer science and economics
literature as they are often used to model real-world phenomena where we aim to
equitably distribute a limited amount of resources to multiple agents and
determine these distributions efficiently. Although it has been shown that
finding market clearing prices for Fisher markets with indivisible goods is
NP-hard, there exist polynomial-time algorithms able to compute these prices
and allocations when the goods are divisible and the utility functions are
linear. We provide a promising research direction toward the development of a
market that simulates buyers' preferences that vary according to the bundles of
goods allocated to other buyers. Our research aims to elucidate unique ways in
which the theory of matching markets can be extended to account for more
complex and often counterintuitive microeconomic phenomena. | Matching Markets | 2021-09-30 08:13:27 | Andrew Yang, Bruce Changlong Xu, Ivan Villa-Renteria | http://arxiv.org/abs/2109.14850v1, http://arxiv.org/pdf/2109.14850v1 | cs.GT |
35,668 | th | We study an information design problem for a non-atomic service scheduling
game. The service starts at a random time, and there is a continuum population
of agents who hold a prior belief about the service start time but do not
observe its actual realization. The agents decide when to join the queue,
trading off long waits in the queue against arriving before the service has
started. There is a planner who knows when the
service starts and makes suggestions to the agents about when to join the queue
through an obedient direct signaling strategy, in order to minimize the average
social cost. We characterize the full information and the no information
equilibria, and we show under what conditions it is optimal for the planner to
reveal the full information to the agents. Further, by imposing appropriate
assumptions on the model, we formulate the information design problem as a
generalized problem of moments (GPM) and use computational tools developed for
such problems to solve the problem numerically. | Information Design for a Non-atomic Service Scheduling Game | 2021-10-01 00:18:24 | Nasimeh Heydaribeni, Ketan Savla | http://arxiv.org/abs/2110.00090v1, http://arxiv.org/pdf/2110.00090v1 | eess.SY |
35,669 | th | In the setting where we want to aggregate people's subjective evaluations,
plurality vote may be meaningless when a large number of low-effort people
always report "good" regardless of the true quality. The "surprisingly popular"
method, which picks the most surprising answer relative to the prior, handles this
issue to some extent. However, it is still not fully robust to people's
strategies. Here, in the setting where a large number of people are asked to
answer a small number of multiple-choice questions (multi-task, large group), we
propose an information aggregation method that is robust to people's
strategies. Interestingly, this method can be seen as a rotated "surprisingly
popular". It is based on a new clustering method, Determinant MaxImization
(DMI)-clustering, and a key conceptual idea that information elicitation
without ground-truth can be seen as a clustering problem. Of independent
interest, DMI-clustering is a general clustering method that aims to maximize
the volume of the simplex formed by the cluster means, multiplied by the
product of the cluster sizes. We show that DMI-clustering is invariant to any
non-degenerate affine transformation of the data points. When the dimension of
the data points is constant, DMI-clustering can be solved in polynomial time. In
general, we present a simple heuristic for DMI-clustering which is very similar
to Lloyd's algorithm for k-means. Additionally, we also apply the clustering
idea in the single-task setting and use the spectral method to propose a new
aggregation method that utilizes the second-moment information elicited from
the crowds. | Information Elicitation Meets Clustering | 2021-10-03 11:47:55 | Yuqing Kong | http://arxiv.org/abs/2110.00952v1, http://arxiv.org/pdf/2110.00952v1 | cs.GT |
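The stated objective, the volume of the simplex spanned by the cluster means times the product of cluster sizes, can be evaluated directly with the Gram-determinant formula for simplex volume. The sketch below is one reading of that objective for a given labeling, not the authors' implementation; normalization constants may differ.

```python
# Evaluate the DMI-clustering objective as stated in the abstract: (volume of the
# simplex spanned by the cluster means) x (product of the cluster sizes).
# One reading of the objective; constants such as the 1/(k-1)! factor may differ.
import math
import numpy as np

def dmi_objective(points, labels):
    clusters = sorted(set(labels))
    means = np.array([points[labels == c].mean(axis=0) for c in clusters])
    sizes = [int((labels == c).sum()) for c in clusters]
    edges = means[:-1] - means[-1]                  # edge vectors of the simplex
    gram = edges @ edges.T
    volume = math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(len(clusters) - 1)
    return volume * math.prod(sizes)

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
lab = np.repeat([0, 1, 2], 50)
print(dmi_objective(pts, lab))
```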
35,670 | th | Condorcet's jury theorem states that the correct outcome is reached in direct
majority voting systems with sufficiently large electorates as long as each
voter's independent probability of voting for that outcome is greater than 0.5.
Yet, in situations where direct voting systems are infeasible, such as due to
high implementation and infrastructure costs, hierarchical voting systems
provide a reasonable alternative. We study differences in outcome precision
between hierarchical and direct voting systems for varying group sizes,
abstention rates, and voter competencies. Using asymptotic expansions of the
derivative of the reliability function (or Banzhaf number), we first prove that
indirect systems differ most from their direct counterparts when group size and
number are equal to each other, and therefore to $\sqrt{N_{\rm d}}$, where
$N_{\rm d}$ is the total number of voters in the direct system. In multitier
systems, we prove that this difference is maximized when group size equals
$\sqrt[n]{N_{\rm d}}$, where $n$ is the number of hierarchical levels. Second,
we show that while direct majority rule always outperforms hierarchical voting
for homogeneous electorates that vote with certainty, as group numbers and size
increase, hierarchical majority voting gains in its ability to represent all
eligible voters. Furthermore, when voter abstention and competency are
correlated within groups, hierarchical systems often outperform direct voting,
which we show by using a generating function approach that is able to
analytically characterize heterogeneous voting systems. | Tradeoffs in Hierarchical Voting Systems | 2021-10-05 22:01:52 | Lucas Böttcher, Georgia Kernell | http://dx.doi.org/10.1177/26339137221133401, http://arxiv.org/abs/2110.02298v1, http://arxiv.org/pdf/2110.02298v1 | math.CO |
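A small simulation illustrates the direct-versus-hierarchical comparison for homogeneous voters who are each correct with probability p, using groups of size roughly sqrt(N). This is a stripped-down toy of the setting above, without abstention or heterogeneity; parameters are made up.

```python
# Compare direct majority rule with a two-tier majority-of-group-majorities
# system (groups of size ~sqrt(N)), for voters each correct with probability p.
import random

def direct(votes):
    return sum(votes) > len(votes) / 2

def two_tier(votes, group_size):
    groups = [votes[i:i + group_size] for i in range(0, len(votes), group_size)]
    return direct([direct(g) for g in groups])

def accuracy(system, n, p, trials=5_000, **kw):
    hits = 0
    for _ in range(trials):
        votes = [random.random() < p for _ in range(n)]
        hits += system(votes, **kw)
    return hits / trials

random.seed(0)
n, p = 625, 0.52                       # 625 voters -> 25 groups of 25
print("direct  :", accuracy(direct, n, p))
print("two-tier:", accuracy(two_tier, n, p, group_size=25))
```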
35,671 | th | In constructing an econometric or statistical model, we pick relevant
features or variables from many candidates. A coalitional game is set up to
study the selection problem where the players are the candidates and the payoff
function is a performance measurement in all possible modeling scenarios. Thus,
in theory, an irrelevant feature is equivalent to a dummy player in the game,
which contributes nothing in any modeling situation. The hypothesis test of
zero mean contribution is the rule used to decide whether a feature is irrelevant. In
our mechanism design, the end goal perfectly matches the expected model
performance with the expected sum of individual marginal effects. Within a
class of noninformative likelihood among all modeling opportunities, the
matching equation results in a specific valuation for each feature. After
estimating the valuation and its standard deviation, we drop any candidate
feature if its valuation is not significantly different from zero. In the
simulation studies, our new approach significantly outperforms several popular
methods used in practice, and its accuracy is robust to the choice of the
payoff function. | Feature Selection by a Mechanism Design | 2021-10-06 02:53:14 | Xingwei Hu | http://arxiv.org/abs/2110.02419v1, http://arxiv.org/pdf/2110.02419v1 | stat.ML |
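A generic way to operationalize the "dummy player" test described above is to sample random coalitions of the other candidate features, record the feature's marginal contribution to a performance measure, and t-test the mean against zero. The sketch below does exactly that with a toy performance function; it mirrors the idea only and is not the paper's specific valuation, which matches expected model performance to the sum of marginal effects.

```python
# Generic sketch: estimate a feature's marginal contributions over random
# coalitions of other features and t-test the mean against zero.
import random, statistics

def marginal_contributions(feature, candidates, performance, n_samples=200, rng=random):
    others = [f for f in candidates if f != feature]
    deltas = []
    for _ in range(n_samples):
        coalition = {f for f in others if rng.random() < 0.5}
        deltas.append(performance(coalition | {feature}) - performance(coalition))
    return deltas

def is_relevant(deltas, t_crit=1.96):
    mean = statistics.mean(deltas)
    se = statistics.stdev(deltas) / len(deltas) ** 0.5
    return abs(mean / se) > t_crit        # reject "zero mean contribution"

# Toy performance function: only features "a" and "b" matter (hypothetical).
def performance(s):
    return 1.0 * ("a" in s) + 0.5 * ("b" in s) + random.gauss(0, 0.05)

random.seed(0)
for f in ["a", "b", "c"]:
    print(f, is_relevant(marginal_contributions(f, ["a", "b", "c"], performance)))
```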
35,672 | th | A modelling framework, based on the theory of signal processing, for
characterising the dynamics of systems driven by the unravelling of information
is outlined, and is applied to describe the process of decision making. The
model input of this approach is the specification of the flow of information.
This enables the representation of (i) reliable information, (ii) noise, and
(iii) disinformation, in a unified framework. Because the approach is designed
to characterise the dynamics of the behaviour of people, it is possible to
quantify the impact of information control, including those resulting from the
dissemination of disinformation. It is shown that if a decision maker assigns
an exceptionally high weight on one of the alternative realities, then under
the Bayesian logic their perception hardly changes in time even if evidences
presented indicate that this alternative corresponds to a false reality. Thus
confirmation bias need not be incompatible with Bayesian updating. By observing
the role played by noise in other areas of natural sciences, where noise is
used to excite the system away from false attractors, a new approach to tackle
the dark forces of fake news is proposed. | Noise, fake news, and tenacious Bayesians | 2021-10-05 19:11:08 | Dorje C. Brody | http://arxiv.org/abs/2110.03432v3, http://arxiv.org/pdf/2110.03432v3 | econ.TH |
35,673 | th | The Shapley value, one of the well-known allocation rules in game theory,
does not take into account information about the structure of the graph, so by
using the Shapley value for each hyperedge, we introduce a new allocation rule
by considering their first-order combination. We proved that some of the
properties that hold for Shapley and Myerson values also hold for our
allocation rule. In addition, we found the relationship between our allocation
rule and the Forman curvature, which plays an important role in discrete
geometry. | New allocation rule of directed hypergraphs | 2021-10-13 08:29:47 | Taiki Yamada | http://arxiv.org/abs/2110.06506v3, http://arxiv.org/pdf/2110.06506v3 | cs.GT |
35,674 | th | In this paper, we study a matching market model on a bipartite network where
agents on each side arrive and depart stochastically by a Poisson process. For
such a dynamic model, we design a mechanism that decides not only which agents
to match, but also when to match them, to minimize the expected number of
unmatched agents. The main contribution of this paper is to achieve theoretical
bounds on the performance of local mechanisms with different timing properties.
We show that an algorithm that waits to thicken the market, called the
$\textit{Patient}$ algorithm, is exponentially better than the
$\textit{Greedy}$ algorithm, i.e., an algorithm that matches agents greedily.
This means that waiting has substantial benefits on maximizing a matching over
a bipartite network. We remark that the Patient algorithm requires the planner
to identify agents who are about to leave the market, and, under this
requirement, the Patient algorithm is shown to be optimal. We also
show that, without the requirement, the Greedy algorithm is almost optimal. In
addition, we consider the $\textit{1-sided algorithms}$ where only an agent on
one side can attempt to match. This models a practical matching market such as
a freight exchange market and a labor market where only agents on one side can
make a decision. For this setting, we prove that the Greedy and Patient
algorithms admit the same performance, that is, waiting to thicken the market
is not valuable. This conclusion is in contrast to the case where agents on
both sides can make a decision and the non-bipartite case by [Akbarpour et
al.,$~\textit{Journal of Political Economy}$, 2020]. | Dynamic Bipartite Matching Market with Arrivals and Departures | 2021-10-21 02:44:04 | Naonori Kakimura, Donghao Zhu | http://arxiv.org/abs/2110.10824v1, http://arxiv.org/pdf/2110.10824v1 | cs.DS |
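The Greedy-versus-Patient contrast can be seen in a crude discrete-time toy: Greedy matches an agent on arrival if a compatible agent is waiting on the other side, while Patient attempts a match only at the moment an agent is about to depart. The simulation below is a simplified illustration of that mechanic (Poisson arrivals are replaced by one arrival per side per step, and all parameters are invented), not the paper's model.

```python
# Toy bipartite market: Greedy (match on arrival) vs Patient (match at departure).
import random

def simulate(policy, steps=5_000, p_compat=0.08, p_depart=0.05, seed=0):
    rng = random.Random(seed)
    compat_cache = {}

    def compatible(a, b):                         # lazily drawn, but consistent, edges
        if (a, b) not in compat_cache:
            compat_cache[(a, b)] = rng.random() < p_compat
        return compat_cache[(a, b)]

    waiting = ([], [])                            # waiting agents on side 0 and side 1
    next_id = arrived = matched = 0

    def try_match(agent, side):
        nonlocal matched
        for other in waiting[1 - side]:
            a, b = (agent, other) if side == 0 else (other, agent)
            if compatible(a, b):
                waiting[1 - side].remove(other)
                matched += 2
                return True
        return False

    for _ in range(steps):
        for side in (0, 1):                       # departures (observable in advance)
            for agent in list(waiting[side]):
                if rng.random() < p_depart:
                    if policy == "patient":
                        try_match(agent, side)    # last-minute match attempt
                    waiting[side].remove(agent)
        for side in (0, 1):                       # one arrival per side per step
            agent, next_id, arrived = next_id, next_id + 1, arrived + 1
            if policy == "greedy" and try_match(agent, side):
                continue
            waiting[side].append(agent)
    return matched / arrived

for policy in ("greedy", "patient"):
    print(policy, "matched fraction ~", round(simulate(policy), 3))
```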
35,675 | th | We consider a two-player game of war of attrition under complete information.
It is well-known that this class of games admits equilibria in pure, as well as
mixed strategies, and much of the literature has focused on the latter. We show
that if the players' payoffs whilst in "war" vary stochastically and their exit
payoffs are heterogeneous, then the game admits Markov Perfect equilibria in
pure strategies only. This is true irrespective of the degree of randomness and
heterogeneity, thus highlighting the fragility of mixed-strategy equilibria to
a natural perturbation of the canonical model. In contrast, when the players'
flow payoffs are deterministic or their exit payoffs are homogeneous, the game
admits equilibria in pure and mixed strategies. | The Absence of Attrition in a War of Attrition under Complete Information | 2021-10-22 21:55:39 | George Georgiadis, Youngsoo Kim, H. Dharma Kwon | http://dx.doi.org/10.1016/j.geb.2021.11.004, http://arxiv.org/abs/2110.12013v2, http://arxiv.org/pdf/2110.12013v2 | math.OC |
35,676 | th | Motivated by civic problems such as participatory budgeting and multiwinner
elections, we consider the problem of public good allocation: Given a set of
indivisible projects (or candidates) of different sizes, and voters with
different monotone utility functions over subsets of these candidates, the goal
is to choose a budget-constrained subset of these candidates (or a committee)
that provides fair utility to the voters. The notion of fairness we adopt is
that of core stability from cooperative game theory: No subset of voters should
be able to choose another blocking committee of proportionally smaller size
that provides strictly larger utility to all voters that deviate. The core
provides a strong notion of fairness, subsuming other notions that have been
widely studied in computational social choice.
It is well-known that an exact core need not exist even when utility
functions of the voters are additive across candidates. We therefore relax the
problem to allow approximation: Voters can only deviate to the blocking
committee if after they choose any extra candidate (called an additament),
their utility still increases by an $\alpha$ factor. If no blocking committee
exists under this definition, we call this an $\alpha$-core.
Our main result is that an $\alpha$-core, for $\alpha < 67.37$, always exists
when utilities of the voters are arbitrary monotone submodular functions, and
this can be computed in polynomial time. This result improves to $\alpha <
9.27$ for additive utilities, albeit without the polynomial time guarantee. Our
results are a significant improvement over prior work that only shows
logarithmic approximations for the case of additive utilities. We complement
our results with a lower bound of $\alpha > 1.015$ for submodular utilities,
and a lower bound of any function in the number of voters and candidates for
general monotone utilities. | Approximate Core for Committee Selection via Multilinear Extension and Market Clearing | 2021-10-24 20:40:20 | Kamesh Munagala, Yiheng Shen, Kangning Wang, Zhiyi Wang | http://arxiv.org/abs/2110.12499v1, http://arxiv.org/pdf/2110.12499v1 | cs.GT |
35,677 | th | In this paper I investigate a Bayesian inverse problem in the specific
setting of a price-setting monopolist facing randomly growing demand in
multiple, possibly interconnected, markets. Investigating the Value of
Information of a signal to the monopolist in a fully dynamic discrete model
employing the Kalman-Bucy-Stratonovich filter, I find that it may be
non-monotonic in the variance of the signal. In the classical static settings
of the Value of Information literature this relationship may be convex or
concave, but is always monotonic. The existence of the non-monotonicity depends
critically on the exogenous growth rate of the system. | Expanding Multi-Market Monopoly and Nonconcavity in the Value of Information | 2021-11-01 14:20:09 | Stefan Behringer | http://arxiv.org/abs/2111.00839v1, http://arxiv.org/pdf/2111.00839v1 | econ.TH |
35,678 | th | Although expected utility theory has proven a fruitful and elegant theory in
the finite realm, attempts to generalize it to infinite values have resulted in
many paradoxes. In this paper, we argue that the use of John Conway's surreal
numbers shall provide a firm mathematical foundation for transfinite decision
theory. To that end, we prove a surreal representation theorem and show that
our surreal decision theory respects dominance reasoning even in the case of
infinite values. We then bring our theory to bear on one of the more venerable
decision problems in the literature: Pascal's Wager. Analyzing the wager
showcases our theory's virtues and advantages. To that end, we analyze two
objections against the wager: Mixed Strategies and Many Gods. After formulating
the two objections in the framework of surreal utilities and probabilities, our
theory correctly predicts that (1) the pure Pascalian strategy beats all mixed
strategies, and (2) what one should do in a Pascalian decision problem depends
on what one's credence function is like. Our analysis therefore suggests that
although Pascal's Wager is mathematically coherent, it does not deliver what it
purports to, a rationally compelling argument that people should lead a
religious life regardless of how confident they are in theism and its
alternatives. | Surreal Decisions | 2021-10-23 21:37:20 | Eddy Keming Chen, Daniel Rubio | http://dx.doi.org/10.1111/phpr.12510, http://arxiv.org/abs/2111.00862v1, http://arxiv.org/pdf/2111.00862v1 | cs.AI |
35,679 | th | We focus on a simple, one-dimensional collective decision problem (often
referred to as the facility location problem) and explore issues of
strategyproofness and proportionality-based fairness. We introduce and analyze
a hierarchy of proportionality-based fairness axioms of varying strength:
Individual Fair Share (IFS), Unanimous Fair Share (UFS), Proportionality (as in
Freeman et al, 2021), and Proportional Fairness (PF). For each axiom, we
characterize the family of mechanisms that satisfy the axiom and
strategyproofness. We show that imposing strategyproofness renders many of the
axioms to be equivalent: the family of mechanisms that satisfy proportionality,
unanimity, and strategyproofness is equivalent to the family of mechanisms that
satisfy UFS and strategyproofness, which, in turn, is equivalent to the family
of mechanisms that satisfy PF and strategyproofness. Furthermore, there is a
unique such mechanism: the Uniform Phantom mechanism, which is studied in
Freeman et al. (2021). We also characterize the outcomes of the Uniform Phantom
mechanism as the unique (pure) equilibrium outcome for any mechanism that
satisfies continuity, strict monotonicity, and UFS. Finally, we analyze the
approximation guarantees, in terms of optimal social welfare and minimum total
cost, obtained by mechanisms that are strategyproof and satisfy each
proportionality-based fairness axiom. We show that the Uniform Phantom
mechanism provides the best approximation of the optimal social welfare (and
also minimum total cost) among all mechanisms that satisfy UFS. | Strategyproof and Proportionally Fair Facility Location | 2021-11-02 15:41:32 | Haris Aziz, Alexander Lam, Barton E. Lee, Toby Walsh | http://arxiv.org/abs/2111.01566v3, http://arxiv.org/pdf/2111.01566v3 | cs.GT |
35,680 | th | Govindan and Klumpp [7] provided a characterization of perfect equilibria
using Lexicographic Probability Systems (LPSs). Their characterization was
essentially finite in that they showed that there exists a finite bound on the
number of levels in the LPS, but they did not compute it explicitly. In this
note, we draw on two recent developments in Real Algebraic Geometry to obtain a
formula for this bound. | A Finite Characterization of Perfect Equilibria | 2021-11-02 17:58:06 | Ivonne Callejas, Srihari Govindan, Lucas Pahl | http://arxiv.org/abs/2111.01638v1, http://arxiv.org/pdf/2111.01638v1 | econ.TH |
35,681 | th | The Gibbard-Satterthwaite theorem states that no unanimous and
non-dictatorial voting rule is strategyproof. We revisit voting rules and
consider a weaker notion of strategyproofness called not obvious manipulability
that was proposed by Troyan and Morrill (2020). We identify several classes of
voting rules that satisfy this notion. We also show that several voting rules
including k-approval fail to satisfy this property. We characterize conditions
under which voting rules are obviously manipulable. One of our insights is that
certain rules are obviously manipulable when the number of alternatives is
relatively large compared to the number of voters. In contrast to the
Gibbard-Satterthwaite theorem, many of the rules we examined are not obviously
manipulable. This reflects the relatively easier satisfiability of the notion
and the zero information assumption of not obvious manipulability, as opposed
to the perfect information assumption of strategyproofness. We also present
algorithmic results for computing obvious manipulations and report on
experiments. | Obvious Manipulability of Voting Rules | 2021-11-03 05:41:48 | Haris Aziz, Alexander Lam | http://dx.doi.org/10.1007/978-3-030-87756-9_12, http://arxiv.org/abs/2111.01983v3, http://arxiv.org/pdf/2111.01983v3 | cs.GT |
35,682 | th | We study bilateral trade between two strategic agents. The celebrated result
of Myerson and Satterthwaite states that in general, no incentive-compatible,
individually rational and weakly budget balanced mechanism can be efficient.
I.e., no mechanism with these properties can guarantee a trade whenever buyer
value exceeds seller cost. Given this, a natural question is whether there
exists a mechanism with these properties that guarantees a constant fraction of
the first-best gains-from-trade, namely a constant fraction of the
gains-from-trade attainable whenever buyer's value weakly exceeds seller's
cost. In this work, we positively resolve this long-standing open question on
constant-factor approximation, mentioned in several previous works, using a
simple mechanism. | Approximately Efficient Bilateral Trade | 2021-11-05 19:49:45 | Yuan Deng, Jieming Mao, Balasubramanian Sivan, Kangning Wang | http://arxiv.org/abs/2111.03611v1, http://arxiv.org/pdf/2111.03611v1 | cs.GT |
35,683 | th | In modeling multivariate time series for either forecast or policy analysis,
it would be beneficial to have figured out the cause-effect relations within
the data. Regression analysis, however, is generally for correlation relation,
and very few researches have focused on variance analysis for causality
discovery. We first set up an equilibrium for the cause-effect relations using
a fictitious vector autoregressive model. In the equilibrium, long-run
relations are identified from noise, and spurious ones are negligibly close to
zero. The solution, called the causality distribution, measures the relative
strength causing the movement of all series or of specific affected ones. If a
group of exogenous data affects the others but not vice versa, then, in theory,
the causality distribution for the other variables is necessarily zero. The
hypothesis test of zero causality is the rule used to decide whether a variable is
endogenous. Our new approach has high accuracy in identifying the true
cause-effect relations among the data in the simulation studies. We also apply
the approach to estimating the causal factors' contribution to climate change. | Decoding Causality by Fictitious VAR Modeling | 2021-11-15 01:43:02 | Xingwei Hu | http://arxiv.org/abs/2111.07465v2, http://arxiv.org/pdf/2111.07465v2 | stat.ML |
35,684 | th | A natural notion of rationality/consistency for aggregating models is that,
for all (possibly aggregated) models $A$ and $B$, if the output of model $A$ is
$f(A)$ and if the output of model $B$ is $f(B)$, then the output of the model
obtained by aggregating $A$ and $B$ must be a weighted average of $f(A)$ and
$f(B)$. Similarly, a natural notion of rationality for aggregating preferences
of ensembles of experts is that, for all (possibly aggregated) experts $A$ and
$B$, and all possible choices $x$ and $y$, if both $A$ and $B$ prefer $x$ over
$y$, then the expert obtained by aggregating $A$ and $B$ must also prefer $x$
over $y$. Rational aggregation is an important element of uncertainty
quantification, and it lies behind many seemingly different results in economic
theory: spanning social choice, belief formation, and individual decision
making. Three examples of rational aggregation rules are as follows. (1) Give
each individual model (expert) a weight (a score) and use weighted averaging to
aggregate individual or finite ensembles of models (experts). (2) Order/rank
individual models (experts) and let the aggregation of a finite ensemble of
individual models (experts) be the highest-ranked individual model (expert) in
that ensemble. (3) Give each individual model (expert) a weight, introduce a
weak order/ranking over the set of models/experts, aggregate $A$ and $B$ as the
weighted average of the highest-ranked models (experts) in $A$ or $B$. Note
that (1) and (2) are particular cases of (3). In this paper, we show that all
rational aggregation rules are of the form (3). This result unifies aggregation
procedures across different economic environments. Following the main
representation, we show applications and extensions of our representation in
various separate economic topics such as belief formation, choice theory, and
social welfare economics. | Aggregation of Models, Choices, Beliefs, and Preferences | 2021-11-23 06:26:42 | Hamed Hamze Bajgiran, Houman Owhadi | http://arxiv.org/abs/2111.11630v1, http://arxiv.org/pdf/2111.11630v1 | econ.TH |
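A small sketch of aggregation rule (3) from the abstract above: give each expert a weight and a rank, and aggregate an ensemble as the weighted average of its highest-ranked members. The Expert container and the numeric example are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expert:
    name: str
    weight: float    # score used for weighted averaging
    rank: int        # position in the weak order (smaller = higher ranked)
    output: float    # the expert's model output / forecast

def aggregate(ensemble):
    """Rule (3): weighted average of the highest-ranked experts in the ensemble.
    Rule (1) is the special case of a single rank class; rule (2) is the case
    of a strict ranking with singleton top classes."""
    top = min(e.rank for e in ensemble)
    leaders = [e for e in ensemble if e.rank == top]
    total_weight = sum(e.weight for e in leaders)
    return sum(e.weight * e.output for e in leaders) / total_weight

experts = [Expert("A", 2.0, 0, 1.0), Expert("B", 1.0, 0, 4.0), Expert("C", 5.0, 1, 10.0)]
print(aggregate(experts))    # (2*1 + 1*4) / 3 = 2.0; C is ignored because it is ranked lower
```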
35,685 | th | Securities borrowing and lending are critical to proper functioning of
securities markets. To alleviate securities owners' exposure to borrower
default risk, overcollateralization and indemnification are provided by the
borrower and the lending agent, respectively. The haircut, as the level of
overcollateralization, and the cost of indemnification are naturally
interrelated: the higher the haircut, the lower the cost. This article
presents a method of quantifying their relationship. Borrower dependent
haircuts satisfying the lender's credit risk appetite are computed for US
Treasuries and main equities by applying a repo haircut model to bilateral
securities lending transactions. Indemnification is designed to fulfill a
triple-A risk appetite when the transaction haircut fails to deliver. The cost
of indemnification consists of a risk charge, a capital charge, and a funding
charge, each corresponding to the expected loss, the economic capital, and the
redundant fund needed to arrive at the triple-A haircut. | Securities Lending Haircuts and Indemnification Pricing | 2021-11-25 22:15:37 | Wujiang Lou | http://dx.doi.org/10.2139/ssrn.3682930, http://arxiv.org/abs/2111.13228v1, http://arxiv.org/pdf/2111.13228v1 | q-fin.MF |
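A toy decomposition in the spirit of the abstract above: for a given haircut, the risk charge is the expected loss and the capital charge is the tail loss beyond the mean. The one-period lognormal price move, the default probability, and the 99.9% quantile are assumptions made for this sketch, not the repo haircut model referenced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def indemnification_charges(haircut, p_default=0.02, sigma=0.3,
                            quantile=0.999, n=200_000):
    """Toy risk/capital charges for a lender indemnifying a transaction
    collateralized at the given haircut (overcollateralization level)."""
    price = np.exp(sigma * rng.standard_normal(n) - 0.5 * sigma**2)   # E[price] = 1
    shortfall = np.maximum(price - (1.0 + haircut), 0.0)               # collateral gap at default
    loss = np.where(rng.random(n) < p_default, shortfall, 0.0)         # unconditional loss
    expected_loss = loss.mean()
    economic_capital = np.quantile(loss, quantile) - expected_loss
    return expected_loss, economic_capital

for h in (0.0, 0.05, 0.10):
    el, ec = indemnification_charges(h)
    print(f"haircut {h:>4.0%}: risk charge {el:.5f}, capital charge {ec:.5f}")
```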
35,686 | th | The St. Petersburg paradox is the oldest paradox in decision theory and has
played a pivotal role in the introduction of increasing concave utility
functions embodying risk aversion and decreasing marginal utility of gains. All
attempts to resolve it have considered some variants of the original set-up,
but the original paradox has remained unresolved, while the proposed variants
have introduced new complications and problems. Here a rigorous mathematical
resolution of the St. Petersburg paradox is suggested based on a probabilistic
approach to decision theory. | A Resolution of St. Petersburg Paradox | 2021-11-24 16:59:32 | V. I. Yukalov | http://arxiv.org/abs/2111.14635v1, http://arxiv.org/pdf/2111.14635v1 | math.OC |
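For readers unfamiliar with the set-up, a short numerical illustration of why the paradox arises: every term of the expected-payoff series equals one, so the truncated expectation grows without bound, while sample averages over finitely many plays stay modest. This illustrates only the classical problem, not the resolution proposed in the paper.

```python
import random

random.seed(0)

def play_once():
    """Toss a fair coin until heads appears; the payoff is 2**k after k tosses."""
    k = 1
    while random.random() < 0.5:
        k += 1
    return 2 ** k

# Truncated expectation: each term (1/2**k) * 2**k equals 1, so the series diverges.
for K in (10, 20, 40):
    print("expectation truncated at", K, "tosses:",
          sum((0.5 ** k) * 2 ** k for k in range(1, K + 1)))

# Sample means over finitely many plays remain far below the "infinite" mean.
for n in (10**3, 10**5):
    print("sample mean over", n, "plays:", sum(play_once() for _ in range(n)) / n)
```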
35,687 | th | We present an index theory of equilibria for extensive form games. This
requires developing an index theory for games where the strategy sets of
players are general polytopes and their payoff functions are multiaffine in the
product of these polytopes. Such polytopes arise from identifying
(topologically) equivalent mixed strategies of a normal form game. | Polytope-form games and Index/Degree Theories for Extensive-form games | 2022-01-06 18:28:08 | Lucas Pahl | http://arxiv.org/abs/2201.02098v4, http://arxiv.org/pdf/2201.02098v4 | econ.TH |
35,688 | th | A preference profile with $m$ alternatives and $n$ voters is $d$-Manhattan
(resp. $d$-Euclidean) if both the alternatives and the voters can be placed
into the $d$-dimensional space such that between each pair of alternatives,
every voter prefers the one which has a shorter Manhattan (resp. Euclidean)
distance to the voter. Following Bogomolnaia and Laslier [Journal of
Mathematical Economics, 2007] and Chen and Grottke [Social Choice and Welfare,
2021], who look at $d$-Euclidean preference profiles, we study which preference
profiles are $d$-Manhattan depending on the values of $m$ and $n$.
First, we show that each preference profile with $m$ alternatives and $n$
voters is $d$-Manhattan whenever $d$ $\geq$ min($n$, $m$-$1$). Second, for $d =
2$, we show that the smallest non $d$-Manhattan preference profile has either
three voters and six alternatives, or four voters and five alternatives, or
five voters and four alternatives. This is more complex than the case with
$d$-Euclidean preferences (see [Bogomolnaia and Laslier, 2007] and [Bulteau and
Chen, 2020]). | Multidimensional Manhattan Preferences | 2022-01-24 16:52:38 | Jiehua Chen, Martin Nöllenburg, Sofia Simola, Anaïs Villedieu, Markus Wallinger | http://arxiv.org/abs/2201.09691v1, http://arxiv.org/pdf/2201.09691v1 | cs.MA |
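A small sketch of what a 2-Manhattan profile means, per the abstract above: given an embedding of voters and alternatives in the plane, each voter ranks the alternatives by increasing Manhattan distance. The coordinates below are made up; deciding whether a given profile admits such an embedding is the harder question the paper studies.

```python
def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def profile_from_embedding(voters, alternatives):
    """Preference profile induced by a 2-dimensional Manhattan embedding."""
    profile = {}
    for voter, pos in voters.items():
        profile[voter] = sorted(alternatives, key=lambda a: manhattan(pos, alternatives[a]))
    return profile

voters = {1: (1, 0), 2: (3, 2), 3: (4, 4)}
alternatives = {"a": (0, 0), "b": (2, 1), "c": (5, 3)}
for voter, ranking in profile_from_embedding(voters, alternatives).items():
    print(f"voter {voter}: {' > '.join(ranking)}")
```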
35,689 | th | We study the dynamics of simple congestion games with two resources where a
continuum of agents behaves according to a version of the Experience-Weighted
Attraction (EWA) algorithm. The dynamics is characterized by two parameters:
the (population) intensity of choice $a>0$ capturing the economic rationality
of the total population of agents and a discount factor $\sigma\in [0,1]$
capturing a type of memory loss where past outcomes matter exponentially less
than the recent ones. Finally, our system adds a third parameter $b \in (0,1)$,
which captures the asymmetry of the cost functions of the two resources. It is
the proportion of the agents using the first resource at Nash equilibrium, with
$b=1/2$ capturing a symmetric network.
Within this simple framework, we show a plethora of bifurcation phenomena
where behavioral dynamics destabilize from global convergence to equilibrium,
to limit cycles or even (formally proven) chaos as a function of the parameters
$a$, $b$ and $\sigma$. Specifically, we show that for any discount factor
$\sigma$ the system will be destabilized for a sufficiently large intensity of
choice $a$. Although for discount factor $\sigma=0$ the system becomes chaotic
for almost all asymmetries (i.e., $b \neq 1/2$), as $\sigma$ increases the
chaotic regime gives way to an attracting periodic orbit of period 2. Therefore,
memory loss can simplify game dynamics and make the system predictable. We
complement our theoretical analysis with simulations and several bifurcation
diagrams that showcase the unyielding complexity of the population dynamics
(e.g., attracting periodic orbits of different lengths) even in the simplest
possible potential games. | Unpredictable dynamics in congestion games: memory loss can prevent chaos | 2022-01-26 18:07:03 | Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Michal Misiurewicz, Georgios Piliouras | http://arxiv.org/abs/2201.10992v2, http://arxiv.org/pdf/2201.10992v2 | cs.GT |
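A schematic simulation of the kind of dynamic described in the abstract above: attractions for the two resources decay at rate σ, and a logit choice with intensity a maps their difference to the population share x on resource 1. The linear costs c1(x)=x/b, c2(x)=(1-x)/(1-b) and the exact update below are assumptions made for illustration; the paper's map may differ in detail.

```python
import math

def logistic(z):
    # Numerically safe logistic function (avoids overflow for large |z|).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def simulate(a, b, sigma, steps=500, x0=0.3):
    """Discounted attraction dynamic for two resources; x is the share on resource 1."""
    x, q1, q2, traj = x0, 0.0, 0.0, []
    for _ in range(steps):
        c1, c2 = x / b, (1.0 - x) / (1.0 - b)      # congestion costs of the two resources
        q1 = (1.0 - sigma) * q1 - c1               # attractions = discounted negative costs
        q2 = (1.0 - sigma) * q2 - c2
        x = logistic(a * (q1 - q2))                # logit choice with intensity a
        traj.append(x)
    return traj

for a in (0.05, 2.0, 30.0):
    tail = [round(x, 3) for x in simulate(a=a, b=0.4, sigma=0.2)[-5:]]
    print(f"intensity a={a:>5}: last iterates {tail}")
    # Small a typically settles near a fixed point; larger a destabilizes the
    # dynamic into oscillations or erratic behaviour, echoing the bifurcations above.
```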
35,690 | th | We propose and solve a negotiation model of multiple players facing many
alternative solutions. The model can be generalized to many relevant
circumstances where stakeholders' interests partially overlap and partially
oppose. We also show that the model can be mapped into the well-known directed
percolation and directed polymers problems. Moreover, many statistical
mechanics tools, such as the Replica method, can be fruitfully employed.
Studying our negotiation model can illuminate the links between socio-economic
phenomena and traditional statistical mechanics and help to develop new
perspectives and tools in the fertile interdisciplinary field. | Negotiation problem | 2022-01-29 20:06:45 | Izat B. Baybusinov, Enrico Maria Fenoaltea, Yi-Cheng Zhang | http://dx.doi.org/10.1016/j.physa.2021.126806, http://arxiv.org/abs/2201.12619v1, http://arxiv.org/pdf/2201.12619v1 | physics.soc-ph |
35,691 | th | We give new characterizations of core imputations for the following games:
* The assignment game.
* Concurrent games, i.e., general graph matching games having non-empty core.
* The unconstrained bipartite $b$-matching game (edges can be matched
multiple times).
* The constrained bipartite $b$-matching game (edges can be matched at most
once).
The classic paper of Shapley and Shubik \cite{Shapley1971assignment} showed
that core imputations of the assignment game are precisely optimal solutions to
the dual of the LP-relaxation of the game. Building on this, Deng et al.
\cite{Deng1999algorithms} gave a general framework which yields analogous
characterizations for several fundamental combinatorial games. Interestingly
enough, their framework does not apply to the last two games stated above. In
turn, we show that some of the core imputations of these games correspond to
optimal dual solutions and others do not. This leads to the tantalizing
question of understanding the origins of the latter. We also present new
characterizations of the profits accrued by agents and teams in core
imputations of the first two games. Our characterization for the first game is
stronger than that for the second; the underlying reason is that the
characterization of vertices of the Birkhoff polytope is stronger than that of
the Balinski polytope. | New Characterizations of Core Imputations of Matching and $b$-Matching Games | 2022-02-01 21:08:50 | Vijay V. Vazirani | http://arxiv.org/abs/2202.00619v12, http://arxiv.org/pdf/2202.00619v12 | cs.GT |
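For the first game in the list above, the Shapley-Shubik characterization stated in the abstract can be made concrete: an optimal solution to the dual of the LP relaxation of maximum-weight matching is a core imputation of the assignment game. The 3x3 value matrix below is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Worker-firm values w[i][j] for a 3x3 assignment game (illustrative numbers).
w = np.array([[5.0, 8.0, 2.0],
              [7.0, 9.0, 6.0],
              [1.0, 3.0, 4.0]])
n, m = w.shape

# Dual of the LP relaxation of maximum-weight matching:
#   min  sum_i u_i + sum_j v_j   s.t.  u_i + v_j >= w_ij,  u, v >= 0.
# By Shapley-Shubik, every optimal (u, v) is a core imputation.
c = np.ones(n + m)
A_ub, b_ub = [], []
for i in range(n):
    for j in range(m):
        row = np.zeros(n + m)
        row[i] = -1.0
        row[n + j] = -1.0
        A_ub.append(row)
        b_ub.append(-w[i, j])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (n + m), method="highs")
u, v = res.x[:n], res.x[n:]
print("worker payoffs:", np.round(u, 3))
print("firm payoffs:  ", np.round(v, 3))
print("total =", round(res.fun, 3), "(= value of a maximum-weight matching)")
```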
35,692 | th | The Non-Fungible Token (NFT) is viewed as one of the important applications
of blockchain technology. Although NFT has a large market scale and multiple
practical standards, several limitations remain in existing NFT market
mechanisms. This work proposes a novel securitization and repurchase scheme
for NFTs to overcome these limitations. We first provide an Asset-Backed
Securities (ABS) solution to address the limitations arising from the non-fungibility of NFTs.
Our securitization design aims to enhance the liquidity of NFTs and enable
Oracles and Automatic Market Makers (AMMs) for NFTs. Then we propose a novel
repurchase protocol for a participant owning a portion of an NFT to repurchase
the remaining shares and obtain complete ownership. As participants may
strategically bid during the acquisition process, our repurchase process is
formulated as a Stackelberg game to explore the equilibrium prices. We also
provide solutions to handle difficulties in the market, such as budget constraints
and lazy bidders. | ABSNFT: Securitization and Repurchase Scheme for Non-Fungible Tokens Based on Game Theoretical Analysis | 2022-02-04 18:46:03 | Hongyin Chen, Yukun Cheng, Xiaotie Deng, Wenhan Huang, Linxuan Rong | http://arxiv.org/abs/2202.02199v2, http://arxiv.org/pdf/2202.02199v2 | cs.GT |
35,693 | th | We develop a tractable model for studying strategic interactions between
learning algorithms. We uncover a mechanism responsible for the emergence of
algorithmic collusion. We observe that algorithms periodically coordinate on
actions that are more profitable than static Nash equilibria. This novel
collusive channel relies on an endogenous statistical linkage in the
algorithms' estimates, which we call spontaneous coupling. The model's
parameters predict whether the statistical linkage will appear, and what market
structures facilitate algorithmic collusion. We show that spontaneous coupling
can sustain collusion in prices and market shares, complementing experimental
findings in the literature. Finally, we apply our results to design algorithmic
markets. | Artificial Intelligence and Spontaneous Collusion | 2022-02-12 03:50:15 | Martino Banchio, Giacomo Mantegazza | http://arxiv.org/abs/2202.05946v5, http://arxiv.org/pdf/2202.05946v5 | econ.TH |
35,694 | th | Motivated by online advertising auctions, we study auction design in repeated
auctions played by simple Artificial Intelligence algorithms (Q-learning). We
find that first-price auctions with no additional feedback lead to
tacit-collusive outcomes (bids lower than values), while second-price auctions
do not. We show that the difference is driven by the incentive in first-price
auctions to outbid opponents by just one bid increment. This facilitates
re-coordination on low bids after a phase of experimentation. We also show that
providing information about the lowest bid to win, as introduced by Google at
the time of its switch to first-price auctions, increases the competitiveness of auctions. | Artificial Intelligence and Auction Design | 2022-02-12 03:54:40 | Martino Banchio, Andrzej Skrzypacz | http://arxiv.org/abs/2202.05947v1, http://arxiv.org/pdf/2202.05947v1 | econ.TH |
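A rough, stateless sketch of the experimental setup described in the two abstracts above: two epsilon-greedy Q-learning bidders with a common value repeatedly play a sealed-bid auction with integer bids. The parameters and the bandit-style (stateless) specification are illustrative assumptions; outcomes are parameter-sensitive and need not reproduce the paper's findings exactly.

```python
import random

random.seed(0)

def learn(fmt="first", episodes=200_000, value=10, eps=0.05, alpha=0.05):
    """Two stateless Q-learning bidders in a repeated sealed-bid auction."""
    bids = list(range(value + 1))
    Q = [[0.0] * len(bids) for _ in range(2)]
    for _ in range(episodes):
        # Epsilon-greedy bid selection for each player.
        chosen = [random.choice(bids) if random.random() < eps
                  else max(bids, key=lambda b: Q[p][b]) for p in range(2)]
        if chosen[0] == chosen[1]:
            winner = random.choice((0, 1))
        else:
            winner = 0 if chosen[0] > chosen[1] else 1
        price = chosen[winner] if fmt == "first" else chosen[1 - winner]
        for p in range(2):
            reward = (value - price) if p == winner else 0.0
            Q[p][chosen[p]] += alpha * (reward - Q[p][chosen[p]])
    return [max(bids, key=lambda b: Q[p][b]) for p in range(2)]

for fmt in ("first", "second"):
    print(f"{fmt}-price auction, greedy bids after learning: {learn(fmt)}")
```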
35,695 | th | For centuries, it has been widely believed that the influence of a small
coalition of voters is negligible in a large election. Consequently, there is a
large body of literature on characterizing the likelihood for an election to be
influenced when the votes follow certain distributions, especially the
likelihood of being manipulable by a single voter under the i.i.d. uniform
distribution, known as the Impartial Culture (IC).
In this paper, we extend previous studies in three aspects: (1) we propose a
more general semi-random model, where a distribution adversary chooses a
worst-case distribution and then a contamination adversary modifies up to
a $\psi$ fraction of the data, (2) we consider many coalitional influence
problems, including coalitional manipulation, margin of victory, and various
vote controls and bribery, and (3) we consider arbitrary and variable coalition
size $B$. Our main theorem provides asymptotically tight bounds on the
semi-random likelihood of the existence of a size-$B$ coalition that can
successfully influence the election under a wide range of voting rules.
Applications of the main theorem and its proof techniques resolve long-standing
open questions about the likelihood of coalitional manipulability under IC, by
showing that the likelihood is $\Theta\left(\min\left\{\frac{B}{\sqrt n},
1\right\}\right)$ for many commonly-studied voting rules.
The main technical contribution is a characterization of the semi-random
likelihood for a Poisson multinomial variable (PMV) to be unstable, which we
believe to be a general and useful technique of independent interest. | The Impact of a Coalition: Assessing the Likelihood of Voter Influence in Large Elections | 2022-02-14 00:27:22 | Lirong Xia | http://arxiv.org/abs/2202.06411v4, http://arxiv.org/pdf/2202.06411v4 | econ.TH |
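A quick Monte Carlo sanity check of the $\Theta(\min\{B/\sqrt{n}, 1\})$ scaling mentioned in the abstract above, under the Impartial Culture with plurality and three alternatives: a profile is counted as influenceable if re-casting B ballots could close the gap between the winner and the runner-up (each re-cast ballot changes a score gap by at most 2). This influence criterion is a crude proxy, not the paper's formal definitions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def influence_rate(n, B, trials=20_000, m=3):
    """Fraction of IC plurality profiles (m alternatives, n voters) in which
    the winner/runner-up gap is at most 2B. Under IC, plurality scores are
    multinomial(n, 1/m) since top choices are i.i.d. uniform."""
    scores = rng.multinomial(n, [1.0 / m] * m, size=trials)
    scores.sort(axis=1)
    gaps = scores[:, -1] - scores[:, -2]
    return float(np.mean(gaps <= 2 * B))

for n in (1_000, 10_000):
    for B in (1, 5, 25):
        print(f"n={n:>6}, B={B:>3}: rate {influence_rate(n, B):.3f},"
              f"  B/sqrt(n) = {B / math.sqrt(n):.3f}")
```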
35,696 | th | In this paper, we analyze the problem of how to adapt the concept of
proportionality to situations where several perfectly divisible resources have
to be allocated among a certain set of agents, each of which has exactly one claim that
is used for all resources. In particular, we introduce the constrained
proportional awards rule, which extends the classical proportional rule to these
situations. Moreover, we provide an axiomatic characterization of this rule. | On proportionality in multi-issue problems with crossed claims | 2022-02-20 20:57:40 | Rick K. Acosta-Vega, Encarnación Algaba, Joaquín Sánchez-Soriano | http://arxiv.org/abs/2202.09877v1, http://arxiv.org/pdf/2202.09877v1 | math.OC |
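A minimal sketch of the natural proportional benchmark for the setting in the abstract above: each agent holds a single claim used for all resources, and each resource is divided in proportion to those claims. The constrained rule characterized in the paper additionally restricts awards; those details are not reproduced here, and the names and numbers below are illustrative.

```python
def proportional_awards(claims, endowments):
    """Resource-by-resource proportional division when every agent holds a
    single claim that applies to all resources."""
    total_claim = sum(claims.values())
    return {resource: {agent: endowment * claim / total_claim
                       for agent, claim in claims.items()}
            for resource, endowment in endowments.items()}

claims = {"Ann": 30.0, "Bob": 10.0, "Cleo": 60.0}
endowments = {"water": 50.0, "land": 20.0}
for resource, shares in proportional_awards(claims, endowments).items():
    print(resource, {agent: round(x, 2) for agent, x in shares.items()})
```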
35,697 | th | In today's economy, it becomes important for Internet platforms to consider
the sequential information design problem to align its long term interest with
incentives of the gig service providers. This paper proposes a novel model of
sequential information design, namely Markov persuasion processes (MPPs),
where a sender, with informational advantage, seeks to persuade a stream of
myopic receivers to take actions that maximize the sender's cumulative
utilities in a finite horizon Markovian environment with varying prior and
utility functions. Planning in MPPs thus faces the unique challenge of finding
a signaling policy that is simultaneously persuasive to the myopic receivers
and induces the optimal long-term cumulative utility for the sender.
Nevertheless, at the population level where the model is known, it turns out
that we can efficiently determine the optimal (resp. $\epsilon$-optimal) policy
with finite (resp. infinite) states and outcomes, through a modified
formulation of the Bellman equation.
Our main technical contribution is to study the MPP under the online
reinforcement learning (RL) setting, where the goal is to learn the optimal
signaling policy by interacting with the underlying MPP, without the
knowledge of the sender's utility functions, prior distributions, and the
Markov transition kernels. We design a provably efficient no-regret learning
algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which
features a novel combination of both optimism and pessimism principles. Our
algorithm enjoys sample efficiency by achieving a sublinear $\sqrt{T}$-regret
upper bound. Furthermore, both our algorithm and theory can be applied to MPPs
with large outcome and state spaces via function approximation, and we
showcase such success in the linear setting. | Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning | 2022-02-22 08:41:43 | Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu | http://arxiv.org/abs/2202.10678v1, http://arxiv.org/pdf/2202.10678v1 | cs.AI |
35,698 | th | Reputation-based cooperation on social networks offers a causal mechanism
between graph properties and social trust. Recent papers on the `structural
microfoundations` of the society used this insight to show how demographic
processes, such as falling fertility, urbanisation, and migration, can alter
the logic of human societies. This paper demonstrates the underlying mechanism
in a way that is accessible to scientists not specialising in networks.
Additionally, the paper shows that, when the size and degree of the network are
fixed (i.e., all graphs have the same number of agents, who all have the same
number of connections), it is the clustering coefficient that drives
differences in how cooperative social networks are. | Clustering Drives Cooperation on Reputation Networks, All Else Fixed | 2022-03-01 14:37:51 | Tamas David-Barrett | http://arxiv.org/abs/2203.00372v1, http://arxiv.org/pdf/2203.00372v1 | cs.SI |
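The fixed size-and-degree comparison in the abstract above can be checked directly: the two graphs below have the same number of nodes and the same degree everywhere, yet very different clustering coefficients. The use of networkx and the specific graph generators are choices made for this illustration.

```python
import networkx as nx

# Two graphs with the same number of agents and the same degree everywhere
# (every node has exactly 4 connections), but very different clustering.
n, k = 1000, 4
ring = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)     # ring lattice: highly clustered
regular = nx.random_regular_graph(k, n, seed=1)          # random 4-regular: clustering near 0

for name, g in (("ring lattice", ring), ("random regular", regular)):
    print(f"{name:>15}: average clustering = {nx.average_clustering(g):.3f}")
# Under reputation-based cooperation, the clustered graph is the one where
# defection is easier to detect and punish through shared neighbours.
```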