Dataset columns:
id: int64 (values 28.8k–36k)
category: string (3 classes)
text: string (44–3.03k chars)
title: string (10–236 chars)
published: string (19 chars)
author: string (6–943 chars)
link: string (66–127 chars)
primary_category: string (62 classes)
35,901
th
I introduce PRZI (Parameterised-Response Zero Intelligence), a new form of zero-intelligence trader intended for use in simulation studies of the dynamics of continuous double auction markets. Like Gode & Sunder's classic ZIC trader, PRZI generates quote-prices from a random distribution over some specified domain of allowable quote-prices. Unlike ZIC, which uses a uniform distribution to generate prices, the probability distribution in a PRZI trader is parameterised in such a way that its probability mass function (PMF) is determined by a real-valued control variable s in the range [-1.0, +1.0] that determines the _strategy_ for that trader. When s=0, a PRZI trader is identical to ZIC, with a uniform PMF; but when |s|=~1 the PRZI trader's PMF becomes maximally skewed to one extreme or the other of the price-range, thereby making its quote-prices more or less urgent, biasing the quote-price distribution toward or away from the trader's limit-price. To explore the co-evolutionary dynamics of populations of PRZI traders that dynamically adapt their strategies, I show results from long-term market experiments in which each trader uses a simple stochastic hill-climber algorithm to repeatedly evaluate alternative s-values and choose the most profitable at any given time. In these experiments the profitability of any particular s-value may be non-stationary because the profitability of one trader's strategy at any one time can depend on the mix of strategies being played by the other traders at that time, which are each themselves continuously adapting. Results from these market experiments demonstrate that the population of traders' strategies can exhibit rich dynamics, with periods of stability lasting over hundreds of thousands of trader interactions interspersed by occasional periods of change. Python source-code for the work reported here has been made publicly available on GitHub.
Parameterised-Response Zero-Intelligence Traders
2021-03-21 11:43:39
Dave Cliff
http://arxiv.org/abs/2103.11341v7, http://arxiv.org/pdf/2103.11341v7
q-fin.TR
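A minimal Python sketch of the idea in the record above: a quote-price PMF controlled by a single strategy parameter s in [-1, +1] that is uniform at s = 0 (ZIC-like) and increasingly skewed toward one end of the allowable price range as |s| approaches 1. The exponential tilt and the gain constant k are illustrative assumptions, not the exact PMF defined in the PRZI paper.

```python
import numpy as np

def quote_price_pmf(prices, s, k=6.0):
    """Strategy-controlled PMF over allowable quote prices (illustrative form).

    s = 0 gives a uniform PMF; s -> +1 piles probability mass on the highest
    prices, s -> -1 on the lowest. The exponential tilt with gain k is a
    stand-in for the PMF actually used by PRZI.
    """
    x = np.linspace(0.0, 1.0, len(prices))  # normalised position in the price range
    w = np.exp(k * s * x)                   # constant (uniform) when s == 0
    return w / w.sum()

def draw_quote(prices, s, rng=None):
    rng = rng or np.random.default_rng()
    return rng.choice(prices, p=quote_price_pmf(prices, s))

prices = np.arange(1, 201)                  # allowable quote prices
print(draw_quote(prices, 0.0))              # behaves like ZIC (uniform)
print(draw_quote(prices, 0.9))              # strongly biased toward high prices
```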
35,902
th
Can egalitarian norms or conventions survive the presence of dominant individuals who are ensured of victory in conflicts? We investigate the interaction of power asymmetry and partner choice in games of conflict over a contested resource. We introduce three models to study the emergence and resilience of cooperation among unequals when interaction is random, when individuals can choose their partners, and where power asymmetries dynamically depend on accumulated payoffs. We find that the ability to avoid bullies with higher competitive ability afforded by partner choice mostly restores cooperative conventions and that the competitive hierarchy never forms. Partner choice counteracts the hyper dominance of bullies who are isolated in the network and eliminates the need for others to coordinate in a coalition. When competitive ability dynamically depends on cumulative payoffs, complex cycles of coupled network-strategy-rank changes emerge. Effective collaborators gain popularity (and thus power), adopt aggressive behavior, get isolated, and ultimately lose power. Neither the network nor behavior converge to a stable equilibrium. Despite the instability of power dynamics, the cooperative convention in the population remains stable overall and long-term inequality is completely eliminated. The interaction between partner choice and dynamic power asymmetry is crucial for these results: without partner choice, bullies cannot be isolated, and without dynamic power asymmetry, bullies do not lose their power even when isolated. We analytically identify a single critical point that marks a phase transition in all three iterations of our models. This critical point is where the first individual breaks from the convention and cycles start to emerge.
Avoiding the bullies: The resilience of cooperation among unequals
2021-04-17 22:55:26
Michael Foley, Rory Smead, Patrick Forber, Christoph Riedl
http://dx.doi.org/10.1371/journal.pcbi.1008847, http://arxiv.org/abs/2104.08636v1, http://arxiv.org/pdf/2104.08636v1
physics.soc-ph
35,903
th
We test the performance of deep deterministic policy gradient (DDPG), a deep reinforcement learning algorithm able to handle continuous state and action spaces, in learning Nash equilibria in a setting where firms compete in prices. These algorithms are typically considered model-free because they do not require transition probability functions (as in, e.g., Markov games) or predefined functional forms. Despite being model-free, a large set of parameters is utilized in various steps of the algorithm. These include, e.g., learning rates, memory buffers, state-space dimensioning, normalizations, and noise decay rates; the purpose of this work is to systematically test the effect of these parameter configurations on convergence to the analytically derived Bertrand equilibrium. We find parameter choices that can reach convergence rates of up to 99%. This reliable convergence may make the method a useful tool for studying the strategic behavior of firms even in more complex settings. Keywords: Bertrand Equilibrium, Competition in Uniform Price Auctions, Deep Deterministic Policy Gradient Algorithm, Parameter Sensitivity Analysis
Computational Performance of Deep Reinforcement Learning to find Nash Equilibria
2021-04-27 01:14:17
Christoph Graf, Viktor Zobernig, Johannes Schmidt, Claude Klöckl
http://arxiv.org/abs/2104.12895v1, http://arxiv.org/pdf/2104.12895v1
cs.GT
35,904
th
In this paper, we define a new class of dynamic games played in large populations of anonymous agents. The behavior of agents in these games depends on a time-homogeneous type and a time-varying state, which are private to each agent and characterize their available actions and motifs. We consider finite type, state, and action spaces. On the individual agent level, the state evolves in discrete-time as the agent participates in interactions, in which the state transitions are affected by the agent's individual action and the distribution of other agents' states and actions. On the societal level, we consider that the agents form a continuum of mass and that interactions occur either synchronously or asynchronously, and derive models for the evolution of the agents' state distribution. We characterize the stationary equilibrium as the solution concept in our games, which is a condition where all agents are playing their best response and the state distribution is stationary. At least one stationary equilibrium is guaranteed to exist in every dynamic population game. Our approach intersects with previous works on anonymous sequential games, mean-field games, and Markov decision evolutionary games, but it is novel in how we relate the dynamic setting to a classical, static population game setting. In particular, stationary equilibria can be reduced to standard Nash equilibria in classical population games. This simplifies the analysis of these games and inspires the formulation of an evolutionary model for the coupled dynamics of both the agents' actions and states.
Dynamic population games
2021-04-30 00:13:10
Ezzat Elokda, Andrea Censi, Saverio Bolognani
http://arxiv.org/abs/2104.14662v1, http://arxiv.org/pdf/2104.14662v1
math.OC
35,905
th
Where information grows abundant, attention becomes a scarce resource. As a result, agents must plan wisely how to allocate their attention in order to achieve epistemic efficiency. Here, we present a framework for multi-agent epistemic planning with attention, based on Dynamic Epistemic Logic (DEL, a powerful formalism for epistemic planning). We identify the framework as a fragment of standard DEL, and consider its plan existence problem. While in the general case undecidable, we show that when attention is required for learning, all instances of the problem are decidable.
Epistemic Planning with Attention as a Bounded Resource
2021-05-20 21:14:41
Gaia Belardinelli, Rasmus K. Rendsvig
http://arxiv.org/abs/2105.09976v1, http://arxiv.org/pdf/2105.09976v1
cs.AI
35,906
th
Demand for blockchains such as Bitcoin and Ethereum is far larger than supply, necessitating a mechanism that selects a subset of transactions to include "on-chain" from the pool of all pending transactions. This paper investigates the problem of designing a blockchain transaction fee mechanism through the lens of mechanism design. We introduce two new forms of incentive-compatibility that capture some of the idiosyncrasies of the blockchain setting, one (MMIC) that protects against deviations by profit-maximizing miners and one (OCA-proofness) that protects against off-chain collusion between miners and users. This study is immediately applicable to a recent (August 5, 2021) and major change to Ethereum's transaction fee mechanism, based on a proposal called "EIP-1559." Historically, Ethereum's transaction fee mechanism was a first-price (pay-as-bid) auction. EIP-1559 suggested making several tightly coupled changes, including the introduction of variable-size blocks, a history-dependent reserve price, and the burning of a significant portion of the transaction fees. We prove that this new mechanism earns an impressive report card: it satisfies the MMIC and OCA-proofness conditions, and is also dominant-strategy incentive compatible (DSIC) except when there is a sudden demand spike. We also introduce an alternative design, the "tipless mechanism," which offers an incomparable slate of incentive-compatibility guarantees -- it is MMIC and DSIC, and OCA-proof unless in the midst of a demand spike.
Transaction Fee Mechanism Design
2021-06-02 20:48:32
Tim Roughgarden
http://arxiv.org/abs/2106.01340v3, http://arxiv.org/pdf/2106.01340v3
cs.CR
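A simplified sketch of the history-dependent reserve price (the base fee) that the record above refers to; integer rounding and the miner tip are omitted, and the constants below are the commonly cited protocol values rather than anything taken from the paper itself.

```python
def update_base_fee(base_fee, gas_used, gas_target, max_change_denom=8):
    """Simplified EIP-1559 base-fee update (rounding details omitted).

    The base fee, which is burned, rises when the previous (variable-size)
    block was fuller than the target and falls when it was emptier, moving by
    at most 1/max_change_denom (12.5%) per block.
    """
    delta = base_fee * (gas_used - gas_target) / (gas_target * max_change_denom)
    return max(base_fee + delta, 0.0)

# A sustained demand spike pushes the reserve price up multiplicatively.
fee, target, limit = 100.0, 15_000_000, 30_000_000
for _ in range(10):
    fee = update_base_fee(fee, gas_used=limit, gas_target=target)
print(round(fee, 2))  # about 100 * 1.125**10, i.e. roughly 325
```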
35,907
th
I juxtapose Cover's vaunted universal portfolio selection algorithm (Cover 1991) with the modern representation (Qian 2016; Roncalli 2013) of a portfolio as a certain allocation of risk among the available assets, rather than a mere allocation of capital. Thus, I define a Universal Risk Budgeting scheme that weights each risk budget (instead of each capital budget) by its historical performance record (a la Cover). I prove that my scheme is mathematically equivalent to a novel type of Cover and Ordentlich 1996 universal portfolio that uses a new family of prior densities that have hitherto not appeared in the literature on universal portfolio theory. I argue that my universal risk budget, so-defined, is a potentially more perspicuous and flexible type of universal portfolio; it allows the algorithmic trader to incorporate, with advantage, his prior knowledge (or beliefs) about the particular covariance structure of instantaneous asset returns. Say, if there is some dispersion in the volatilities of the available assets, then the uniform (or Dirichlet) priors that are standard in the literature will generate a dangerously lopsided prior distribution over the possible risk budgets. In the author's opinion, the proposed "Garivaltis prior" makes for a nice improvement on Cover's timeless expert system (Cover 1991), that is properly agnostic and open (from the very get-go) to different risk budgets. Inspired by Jamshidian 1992, the universal risk budget is formulated as a new kind of exotic option in the continuous time Black and Scholes 1973 market, with all the pleasure, elegance, and convenience that that entails.
Universal Risk Budgeting
2021-06-18 13:06:02
Alex Garivaltis
http://arxiv.org/abs/2106.10030v2, http://arxiv.org/pdf/2106.10030v2
q-fin.PM
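A toy, discretized version of Cover's 1991 universal portfolio for two assets, which the universal risk-budgeting scheme above generalizes by averaging over risk budgets instead of capital budgets; the grid, the synthetic returns, and the function name are illustrative assumptions.

```python
import numpy as np

def cover_universal_portfolio(returns, n_grid=101):
    """Toy 2-asset version of Cover's universal portfolio.

    Each candidate constant-rebalanced portfolio (b, 1-b) is weighted by its
    cumulative wealth; the played portfolio is the wealth-weighted average.
    The risk-budgeting scheme in the paper replaces capital weights with risk
    budgets, but the performance-weighted averaging principle is the same.
    """
    bs = np.linspace(0.0, 1.0, n_grid)            # candidate weights on asset 1
    wealth = np.ones(n_grid)                      # cumulative wealth of each candidate
    played = []
    for r in returns:                             # r = gross returns of the 2 assets
        b_t = (wealth * bs).sum() / wealth.sum()  # wealth-weighted average portfolio
        played.append(b_t)
        wealth *= bs * r[0] + (1 - bs) * r[1]     # update every candidate's wealth
    return np.array(played)

rng = np.random.default_rng(0)
rets = np.exp(rng.normal([0.0004, 0.0002], [0.02, 0.01], size=(250, 2)))
print(cover_universal_portfolio(rets)[-1])        # final weight on asset 1
```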
35,908
th
We study a game-theoretic model of blockchain mining economies and show that griefing, a practice according to which participants harm other participants at some lesser cost to themselves, is a prevalent threat at its Nash equilibria. The proof relies on a generalization of evolutionary stability to non-homogeneous populations via griefing factors (ratios that measure network losses relative to deviator's own losses) which leads to a formal theoretical argument for the dissipation of resources, consolidation of power and high entry barriers that are currently observed in practice. A critical assumption in this type of analysis is that miners' decisions have significant influence in aggregate network outcomes (such as network hashrate). However, as networks grow larger, the miner's interaction more closely resembles a distributed production economy or Fisher market and its stability properties change. In this case, we derive a proportional response (PR) update protocol which converges to market equilibria at which griefing is irrelevant. Convergence holds for a wide range of miners risk profiles and various degrees of resource mobility between blockchains with different mining technologies. Our empirical findings in a case study with four mineable cryptocurrencies suggest that risk diversification, restricted mobility of resources (as enforced by different mining technologies) and network growth, all are contributing factors to the stability of the inherently volatile blockchain ecosystem.
From Griefing to Stability in Blockchain Mining Economies
2021-06-23 14:54:26
Yun Kuen Cheung, Stefanos Leonardos, Georgios Piliouras, Shyam Sridhar
http://arxiv.org/abs/2106.12332v1, http://arxiv.org/pdf/2106.12332v1
cs.GT
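A compact sketch of proportional response (PR) dynamics in a linear Fisher market, the market model the abstract above invokes for large mining networks; reading miners as buyers and blockchains as divisible goods is an illustrative mapping of mine, and the utilities and budgets below are made up.

```python
import numpy as np

def proportional_response(u, budgets, iters=200):
    """Proportional response dynamics in a linear Fisher market.

    u[i, j] = buyer i's utility per unit of good j; each good has unit supply.
    Every round, each buyer re-spends her budget on goods in proportion to the
    utility derived from them in the previous round; bids converge to
    market-equilibrium prices.
    """
    n, m = u.shape
    bids = np.outer(budgets, np.ones(m)) / m            # start by spending uniformly
    for _ in range(iters):
        prices = bids.sum(axis=0)                        # p_j = total spending on good j
        alloc = bids / prices                            # x_ij = b_ij / p_j
        derived = u * alloc                              # utility buyer i gets from good j
        bids = budgets[:, None] * derived / derived.sum(axis=1, keepdims=True)
    return bids.sum(axis=0)                              # equilibrium prices

u = np.array([[2.0, 1.0], [1.0, 3.0]])
print(proportional_response(u, budgets=np.array([1.0, 1.0])))
```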
35,909
th
The literature on awareness modeling includes both syntax-free and syntax-based frameworks. Heifetz, Meier \& Schipper (HMS) propose a lattice model of awareness that is syntax-free. While their lattice approach is elegant and intuitive, it precludes the simple option of relying on formal language to induce lattices, and does not explicitly distinguish uncertainty from unawareness. Contra this, the most prominent syntax-based solution, the Fagin-Halpern (FH) model, accounts for this distinction and offers a simple representation of awareness, but lacks the intuitiveness of the lattice structure. Here, we combine these two approaches by providing a lattice of Kripke models, induced by atom subset inclusion, in which uncertainty and unawareness are separate. We show our model equivalent to both HMS and FH models by defining transformations between them which preserve satisfaction of formulas of a language for explicit knowledge, and obtain completeness through our and HMS' results. Lastly, we prove that the Kripke lattice model can be shown equivalent to the FH model (when awareness is propositionally determined) also with respect to the language of the Logic of General Awareness, for which the FH model was originally proposed.
Awareness Logic: Kripke Lattices as a Middle Ground between Syntactic and Semantic Models
2021-06-24 13:04:44
Gaia Belardinelli, Rasmus K. Rendsvig
http://arxiv.org/abs/2106.12868v1, http://arxiv.org/pdf/2106.12868v1
cs.AI
35,910
th
The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical learning model that explicitly captures the balance between game rewards and exploration costs. We show that Q-learning always converges to the unique quantal-response equilibrium (QRE), the standard solution concept for games under bounded rationality, in weighted zero-sum polymatrix games with heterogeneous learning agents using positive exploration rates. Complementing recent results about convergence in weighted potential games, we show that fast convergence of Q-learning in competitive settings is obtained regardless of the number of agents and without any need for parameter fine-tuning. As showcased by our experiments in network zero-sum games, these theoretical results provide the necessary guarantees for an algorithmic approach to the currently open problem of equilibrium selection in competitive multi-agent settings.
Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality
2021-06-24 14:43:38
Stefanos Leonardos, Georgios Piliouras, Kelly Spendlove
http://arxiv.org/abs/2106.12928v1, http://arxiv.org/pdf/2106.12928v1
cs.GT
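A small sketch of the smooth Q-learning dynamic discussed above, for a two-player zero-sum game: each player updates Q-values toward current expected payoffs and plays a softmax (Boltzmann) policy, whose joint fixed point is the logit quantal response equilibrium. The temperature, learning rate, and example payoff matrix are arbitrary choices of mine, not values from the paper.

```python
import numpy as np

def softmax(q, temp):
    z = np.exp((q - q.max()) / temp)
    return z / z.sum()

def smooth_q_learning(A, temp=0.5, lr=0.1, iters=5000):
    """Smooth Q-learning in a two-player zero-sum game with payoff matrix A
    (row player gets A[i, j], column player gets -A[i, j]).

    Each player tracks Q-values over her own actions, nudges them toward the
    expected payoff against the opponent's current mixed strategy, and plays
    the softmax policy; with positive exploration the strategies should
    approach the quantal response equilibrium (QRE).
    """
    qx, qy = np.zeros(A.shape[0]), np.zeros(A.shape[1])
    for _ in range(iters):
        x, y = softmax(qx, temp), softmax(qy, temp)
        qx += lr * (A @ y - qx)           # row player's expected payoffs
        qy += lr * (-A.T @ x - qy)        # column player's expected payoffs
    return softmax(qx, temp), softmax(qy, temp)

A = np.array([[1.0, -1.0], [-1.0, 2.0]])  # an asymmetric zero-sum game
x, y = smooth_q_learning(A)
print(x, y)                                # approximate QRE strategies
```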
35,911
th
Understanding the convergence properties of learning dynamics in repeated auctions is a timely and important question in the area of learning in auctions, with numerous applications in, e.g., online advertising markets. This work focuses on repeated first price auctions where bidders with fixed values for the item learn to bid using mean-based algorithms -- a large class of online learning algorithms that include popular no-regret algorithms such as Multiplicative Weights Update and Follow the Perturbed Leader. We completely characterize the learning dynamics of mean-based algorithms, in terms of convergence to a Nash equilibrium of the auction, in two senses: (1) time-average: the fraction of rounds where bidders play a Nash equilibrium approaches 1 in the limit; (2) last-iterate: the mixed strategy profile of bidders approaches a Nash equilibrium in the limit. Specifically, the results depend on the number of bidders with the highest value: - If the number is at least three, the bidding dynamics almost surely converges to a Nash equilibrium of the auction, both in time-average and in last-iterate. - If the number is two, the bidding dynamics almost surely converges to a Nash equilibrium in time-average but not necessarily in last-iterate. - If the number is one, the bidding dynamics may not converge to a Nash equilibrium in either time-average or last-iterate. Our discovery opens up new possibilities in the study of convergence dynamics of learning algorithms.
Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions
2021-10-08 09:01:27
Xiaotie Deng, Xinyan Hu, Tao Lin, Weiqiang Zheng
http://dx.doi.org/10.1145/3485447.3512059, http://arxiv.org/abs/2110.03906v4, http://arxiv.org/pdf/2110.03906v4
cs.GT
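A simulation sketch of the setting above: bidders with fixed values repeatedly play a first-price auction, each running Multiplicative Weights Update (a mean-based algorithm) over a discrete bid grid with full-information counterfactual payoffs. The grid, learning rate, and round count are arbitrary, and tie handling in the counterfactual update is simplified.

```python
import numpy as np

def mwu_first_price(values, bid_grid, rounds=5000, eta=0.1, seed=0):
    """Bidders with fixed values run Multiplicative Weights Update over a
    discrete bid grid in a repeated first-price auction; returns each bidder's
    final mixed strategy over bids."""
    rng = np.random.default_rng(seed)
    n, m = len(values), len(bid_grid)
    weights = np.ones((n, m))
    for _ in range(rounds):
        probs = weights / weights.sum(axis=1, keepdims=True)
        bids = bid_grid[[rng.choice(m, p=probs[i]) for i in range(n)]]
        # Mean-based (full-information) update: each bidder scores every bid she
        # could have placed against the realised bids of the other bidders.
        for i in range(n):
            others_max = max(bids[j] for j in range(n) if j != i)
            wins = bid_grid > others_max                     # ties ignored for simplicity
            counterfactual = np.where(wins, values[i] - bid_grid, 0.0)
            weights[i] *= np.exp(eta * counterfactual / max(values))
            weights[i] /= weights[i].max()                   # keep weights bounded
    return weights / weights.sum(axis=1, keepdims=True)

bid_grid = np.linspace(0.0, 1.0, 21)
strategy = mwu_first_price(values=np.array([1.0, 1.0, 1.0]), bid_grid=bid_grid)
print(bid_grid[strategy.argmax(axis=1)])  # with three equal highest-value bidders,
                                          # bids should concentrate near the value
```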
35,912
th
In this paper, we provide a novel and simple algorithm, Clairvoyant Multiplicative Weights Updates (CMWU) for regret minimization in general games. CMWU effectively corresponds to the standard MWU algorithm but where all agents, when updating their mixed strategies, use the payoff profiles based on tomorrow's behavior, i.e. the agents are clairvoyant. CMWU achieves constant regret of $\ln(m)/\eta$ in all normal-form games with m actions and fixed step-sizes $\eta$. Although CMWU encodes in its definition a fixed point computation, which in principle could result in dynamics that are neither computationally efficient nor uncoupled, we show that both of these issues can be largely circumvented. Specifically, as long as the step-size $\eta$ is upper bounded by $\frac{1}{(n-1)V}$, where $n$ is the number of agents and $[0,V]$ is the payoff range, then the CMWU updates can be computed linearly fast via a contraction map. This implementation results in an uncoupled online learning dynamic that admits a $O (\log T)$-sparse sub-sequence where each agent experiences at most $O(nV\log m)$ regret. This implies that the CMWU dynamics converge with rate $O(nV \log m \log T / T)$ to a \textit{Coarse Correlated Equilibrium}. The latter improves on the current state-of-the-art convergence rate of \textit{uncoupled online learning dynamics} \cite{daskalakis2021near,anagnostides2021near}.
Beyond Time-Average Convergence: Near-Optimal Uncoupled Online Learning via Clairvoyant Multiplicative Weights Update
2021-11-29 20:42:24
Georgios Piliouras, Ryann Sim, Stratis Skoulakis
http://arxiv.org/abs/2111.14737v4, http://arxiv.org/pdf/2111.14737v4
cs.GT
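A sketch of the Clairvoyant MWU update for a two-player normal-form game: each step is a standard MWU step except that payoffs are evaluated at the next iterate, so the update is a fixed point that can be approximated by a short inner iteration (a contraction when the step size is small enough, as the abstract notes). The example game and step size are mine.

```python
import numpy as np

def mwu_step(x, payoff, eta):
    z = x * np.exp(eta * payoff)
    return z / z.sum()

def cmwu(A, B, eta, T=200, inner=50):
    """Clairvoyant MWU for a bimatrix game (row payoffs A, column payoffs B).

    Unlike vanilla MWU, the payoff vector used at step t is evaluated at the
    strategies of step t+1; the inner loop approximates that fixed point.
    """
    n, m = A.shape
    x = np.linspace(1.0, 2.0, n); x /= x.sum()   # arbitrary non-uniform start
    y = np.linspace(2.0, 1.0, m); y /= y.sum()
    for _ in range(T):
        nx, ny = x.copy(), y.copy()
        for _ in range(inner):                    # solve for tomorrow's strategies
            nx = mwu_step(x, A @ ny, eta)
            ny = mwu_step(y, B.T @ nx, eta)
        x, y = nx, ny
    return x, y

A = np.array([[1.0, -1.0], [-1.0, 1.0]])          # matching pennies
print(cmwu(A, -A, eta=0.2))                       # should approach (1/2, 1/2) for each player
```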
35,914
th
In this paper, we consider a discrete-time Stackelberg mean field game with a leader and an infinite number of followers. The leader and the followers each observe types privately that evolve as conditionally independent controlled Markov processes. The leader commits to a dynamic policy and the followers best respond to that policy and each other. Knowing that the followers would play a mean field game based on her policy, the leader chooses a policy that maximizes her reward. We refer to the resulting outcome as a Stackelberg mean field equilibrium (SMFE). We provide a master equation of this game that allows one to compute all SMFE. Based on our framework, we consider two numerical examples. First, we consider an epidemic model where the followers get infected based on the mean field population. The leader chooses subsidies for a vaccine to maximize social welfare and minimize vaccination costs. In the second example, we consider a technology adoption game where the followers decide to adopt a technology or a product and the leader decides the cost of one product that maximizes her returns, which are proportional to the number of people adopting that technology.
Master Equation for Discrete-Time Stackelberg Mean Field Games with single leader
2022-01-16 06:43:48
Deepanshu Vasal, Randall Berry
http://arxiv.org/abs/2201.05959v1, http://arxiv.org/pdf/2201.05959v1
eess.SY
35,915
th
In selection processes such as hiring, promotion, and college admissions, implicit bias toward socially-salient attributes such as race, gender, or sexual orientation of candidates is known to produce persistent inequality and reduce aggregate utility for the decision maker. Interventions such as the Rooney Rule and its generalizations, which require the decision maker to select at least a specified number of individuals from each affected group, have been proposed to mitigate the adverse effects of implicit bias in selection. Recent works have established that such lower-bound constraints can be very effective in improving aggregate utility in the case when each individual belongs to at most one affected group. However, in several settings, individuals may belong to multiple affected groups and, consequently, face more extreme implicit bias due to this intersectionality. We consider independently drawn utilities and show that, in the intersectional case, the aforementioned non-intersectional constraints can only recover part of the total utility achievable in the absence of implicit bias. On the other hand, we show that if one includes appropriate lower-bound constraints on the intersections, almost all the utility achievable in the absence of implicit bias can be recovered. Thus, intersectional constraints can offer a significant advantage over a reductionist dimension-by-dimension non-intersectional approach to reducing inequality.
Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints
2022-02-03 19:21:50
Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi
http://arxiv.org/abs/2202.01661v2, http://arxiv.org/pdf/2202.01661v2
cs.CY
35,916
th
Understanding the evolutionary stability of cooperation is a central problem in biology, sociology, and economics. There exist only a few known mechanisms that guarantee the existence of cooperation and its robustness to cheating. Here, we introduce a new mechanism for the emergence of cooperation in the presence of fluctuations. We consider agents whose wealth changes stochastically in a multiplicative fashion. Each agent can share part of her wealth as a public good, which is equally distributed among all the agents. We show that, when agents operate with long time-horizons, cooperation produces an advantage at the individual level, as it effectively screens agents from the deleterious effect of environmental fluctuations.
Stable cooperation emerges in stochastic multiplicative growth
2022-02-06 17:51:58
Lorenzo Fant, Onofrio Mazzarisi, Emanuele Panizon, Jacopo Grilli
http://dx.doi.org/10.1103/PhysRevE.108.L012401, http://arxiv.org/abs/2202.02787v1, http://arxiv.org/pdf/2202.02787v1
q-bio.PE
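A toy simulation of the mechanism described above: wealth grows multiplicatively with noise, each agent contributes a fraction of her wealth to a public good that is split equally, and the time-average log-growth rate is compared with and without sharing. All parameter values are arbitrary illustrations, not numbers from the paper.

```python
import numpy as np

def simulate(n_agents=50, share=0.5, steps=2000, mu=0.05, sigma=0.3, seed=0):
    """Agents' wealth grows multiplicatively with i.i.d. noise; each period every
    agent contributes a fraction `share` of her wealth to a public good that is
    split equally. Returns the per-capita time-average log-growth rate.
    Sharing screens agents from fluctuations and raises the long-run growth
    rate relative to no sharing."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)
    for _ in range(steps):
        w *= np.exp(rng.normal(mu - sigma**2 / 2, sigma, n_agents))  # stochastic growth
        pool = share * w
        w = w - pool + pool.sum() / n_agents                          # redistribute equally
    return np.log(w).mean() / steps

print("no sharing :", simulate(share=0.0))
print("sharing    :", simulate(share=0.5))
```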
35,917
th
The study of complexity and optimization in decision theory involves both partial and complete characterizations of preferences over decision spaces in terms of real-valued monotones. With this motivation, and following the recent introduction of new classes of monotones, like injective monotones or strict monotone multi-utilities, we present the classification of preordered spaces in terms of both the existence and cardinality of real-valued monotones and the cardinality of the quotient space. In particular, we take advantage of a characterization of real-valued monotones in terms of separating families of increasing sets in order to obtain a more complete classification consisting of classes that are strictly different from each other. As a result, we gain new insight into both complexity and optimization, and clarify their interplay in preordered spaces.
The classification of preordered spaces in terms of monotones: complexity and optimization
2022-02-24 17:00:10
Pedro Hack, Daniel A. Braun, Sebastian Gottwald
http://dx.doi.org/10.1007/s11238-022-09904-w, http://arxiv.org/abs/2202.12106v3, http://arxiv.org/pdf/2202.12106v3
math.CO
35,918
th
Lloyd Shapley's cooperative value allocation theory is a central concept in game theory that is widely used in various fields to allocate resources, assess individual contributions, and determine fairness. The Shapley value formula and his four axioms that characterize it form the foundation of the theory. Shapley value can be assigned only when all cooperative game players are assumed to eventually form the grand coalition. The purpose of this paper is to extend Shapley's theory to cover value allocation at every partial coalition state. To achieve this, we first extend Shapley axioms into a new set of five axioms that can characterize value allocation at every partial coalition state, where the allocation at the grand coalition coincides with the Shapley value. Second, we present a stochastic path integral formula, where each path now represents a general coalition process. This can be viewed as an extension of the Shapley formula. We apply these concepts to provide a dynamic interpretation and extension of the value allocation schemes of Shapley, Nash, Kohlberg and Neyman. This generalization is made possible by taking into account Hodge calculus, stochastic processes, and path integration of edge flows on graphs. We recognize that such generalization is not limited to the coalition game graph. As a result, we define Hodge allocation, a general allocation scheme that can be applied to any cooperative multigraph and yield allocation values at any cooperative stage.
Hodge allocation for cooperative rewards: a generalization of Shapley's cooperative value allocation theory via Hodge theory on graphs
2022-03-14 08:10:07
Tongseok Lim
http://arxiv.org/abs/2203.06860v5, http://arxiv.org/pdf/2203.06860v5
math.PR
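For reference, the classical grand-coalition endpoint that the Hodge allocation above generalizes: exact Shapley values computed by averaging marginal contributions over all player orderings. This is a brute-force sketch, feasible only for small games, and the glove-game example is mine.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, worth):
    """Exact Shapley values: average each player's marginal contribution over
    all orderings. `worth` maps a frozenset of players to the coalition value."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += worth(coalition | {p}) - worth(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: value / n_orders for p, value in phi.items()}

# Glove game: player 1 owns a left glove, players 2 and 3 each own a right glove;
# a coalition is worth 1 if it can form at least one pair.
def worth(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley_values([1, 2, 3], worth))  # {1: 0.666..., 2: 0.166..., 3: 0.166...}
```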
35,919
th
This paper studies third-degree price discrimination (3PD) based on a random sample of valuation and covariate data, where the covariate is continuous, and the distribution of the data is unknown to the seller. The main results of this paper are twofold. The first set of results is pricing strategy independent and reveals the fundamental information-theoretic limitation of any data-based pricing strategy in revenue generation for two cases: 3PD and uniform pricing. The second set of results proposes the $K$-markets empirical revenue maximization (ERM) strategy and shows that the $K$-markets ERM and the uniform ERM strategies achieve the optimal rate of convergence in revenue to that generated by their respective true-distribution 3PD and uniform pricing optima. Our theoretical and numerical results suggest that the uniform (i.e., $1$-market) ERM strategy generates a larger revenue than the $K$-markets ERM strategy when the sample size is small enough, and vice versa.
Information-theoretic limitations of data-based price discrimination
2022-04-27 09:33:37
Haitian Xie, Ying Zhu, Denis Shishkin
http://arxiv.org/abs/2204.12723v4, http://arxiv.org/pdf/2204.12723v4
cs.GT
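A small sketch of the two strategies compared above: uniform (1-market) empirical revenue maximization and a K-markets variant that segments on the covariate. Equal-width covariate bins and the synthetic data are simplifying assumptions of mine; the paper's construction may segment the market differently.

```python
import numpy as np

def erm_price(values):
    """Empirical revenue maximization: pick the sample valuation that maximizes
    price * (fraction of sampled buyers with valuation >= price)."""
    candidates = np.unique(values)
    revenues = [p * (values >= p).mean() for p in candidates]
    return candidates[int(np.argmax(revenues))]

def k_markets_erm(values, covariates, k):
    """K-markets ERM (as sketched from the abstract): split the covariate range
    into k equal-width segments and run uniform ERM separately in each."""
    edges = np.linspace(covariates.min(), covariates.max(), k + 1)
    prices = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (covariates >= lo) & (covariates <= hi)
        prices.append(erm_price(values[mask]) if mask.any() else 0.0)
    return edges, prices

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)                 # covariate
v = rng.uniform(0, 1, 500) * (0.5 + x)     # valuations increasing in the covariate
print("uniform ERM price:", erm_price(v))
print("3-market ERM prices:", k_markets_erm(v, x, 3)[1])
```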
35,920
th
Epidemics of infectious diseases posing a serious risk to human health have occurred throughout history. During the ongoing SARS-CoV-2 epidemic there has been much debate about policy, including how and when to impose restrictions on behavior. Under such circumstances policymakers must balance a complex spectrum of objectives, suggesting a need for quantitative tools. Whether health services might be 'overwhelmed' has emerged as a key consideration yet formal modelling of optimal policy has so far largely ignored this. Here we show how costly interventions, such as taxes or subsidies on behaviour, can be used to exactly align individuals' decision making with government preferences even when these are not aligned. We assume that choices made by individuals give rise to Nash equilibrium behavior. We focus on a situation in which the capacity of the healthcare system to treat patients is limited and identify conditions under which the disease dynamics respect the capacity limit. In particular we find an extremely sharp drop in peak infections as the maximum infection cost in the government's objective function is increased. This is in marked contrast to the gradual reduction without government intervention. The infection costs at which this switch occurs depend on how costly the intervention is to the government. We find optimal interventions that are quite different to the case when interventions are cost-free. Finally, we identify a novel analytic solution for the Nash equilibrium behavior for constant infection cost.
Rational social distancing policy during epidemics with limited healthcare capacity
2022-05-02 10:08:23
Simon K. Schnyder, John J. Molina, Ryoichi Yamamoto, Matthew S. Turner
http://arxiv.org/abs/2205.00684v1, http://arxiv.org/pdf/2205.00684v1
econ.TH
35,921
th
We consider a general class of multi-agent games in networks, namely the generalized vertex coloring games (G-VCGs), inspired by real-life applications of the venue selection problem in events planning. Under a particular mechanism, each agent receives a utility that depends on the current coloring assignment; striving to maximize this utility, each agent is restricted to local information and thus self-organizes when choosing another color. Our focus is on maximizing a utilitarian welfare objective function over the cumulative utilities across the network in a decentralized fashion. Firstly, we investigate a special class of G-VCGs, namely Identical Preference VCGs (IP-VCGs), which recovers the rudimentary work by \cite{chaudhuri2008network}. We reveal its convergence even under a completely greedy policy and completely synchronous settings, with a stochastic bound on the convergence rate provided. Secondly, for general G-VCGs, a greediness-preserving Metropolis-Hastings-based policy is proposed that each agent can initiate with only limited information, and its optimality under asynchronous settings is proved using the theory of regular perturbed Markov processes. The policy is also empirically observed to be robust under independently synchronous settings. Thirdly, in the spirit of ``robust coloring'', we include an expected loss term in our objective function to balance between utility and robustness. An optimal coloring for this robust welfare optimization is derived through a second-stage MH-policy-driven algorithm. Simulation experiments are given to showcase the efficiency of our proposed strategy.
Utilitarian Welfare Optimization in the Generalized Vertex Coloring Games: An Implication to Venue Selection in Events Planning
2022-06-18 12:21:19
Zeyi Chen
http://arxiv.org/abs/2206.09153v4, http://arxiv.org/pdf/2206.09153v4
cs.DM
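A minimal stand-in for the kind of local, Metropolis-Hastings-style recolouring policy the abstract describes (not the greediness-preserving policy from the paper): each agent sees only its neighbours' colors and accepts a proposed recolouring with a Metropolis probability based on the change in local conflicts. The inverse temperature, sweep count, and example graph are arbitrary.

```python
import math
import random

def mh_coloring(adj, n_colors, beta=3.0, sweeps=300, seed=0):
    """Decentralized Metropolis-style recolouring on a graph given as an
    adjacency dict; a proposal is accepted with probability
    exp(-beta * increase in local conflicts)."""
    rng = random.Random(seed)
    color = {v: rng.randrange(n_colors) for v in adj}

    def conflicts(v, c):
        return sum(1 for u in adj[v] if color[u] == c)

    for _ in range(sweeps):
        for v in adj:                                   # asynchronous-style sweep
            proposal = rng.randrange(n_colors)
            delta = conflicts(v, proposal) - conflicts(v, color[v])
            if delta <= 0 or rng.random() < math.exp(-beta * delta):
                color[v] = proposal
    return color

cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # a 5-cycle, 3-colourable
coloring = mh_coloring(cycle, n_colors=3)
bad_edges = sum(coloring[u] == coloring[v] for u in cycle for v in cycle[u]) // 2
print(coloring, "conflicting edges:", bad_edges)            # typically 0
```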
35,922
th
This paper presents karma mechanisms, a novel approach to the repeated allocation of a scarce resource among competing agents over an infinite time. Examples include deciding which ride hailing trip requests to serve during peak demand, granting the right of way in intersections or lane mergers, or admitting internet content to a regulated fast channel. We study a simplified yet insightful formulation of these problems where at every instant two agents from a large population get randomly matched to compete over the resource. The intuitive interpretation of a karma mechanism is "If I give in now, I will be rewarded in the future." Agents compete in an auction-like setting where they bid units of karma, which circulates directly among them and is self-contained in the system. We demonstrate that this allows a society of self-interested agents to achieve high levels of efficiency without resorting to a (possibly problematic) monetary pricing of the resource. We model karma mechanisms as dynamic population games and guarantee the existence of a stationary Nash equilibrium. We then analyze the performance at the stationary Nash equilibrium numerically. For the case of homogeneous agents, we compare different mechanism design choices, showing that it is possible to achieve an efficient and ex-post fair allocation when the agents are future aware. Finally, we test the robustness against agent heterogeneity and propose remedies to some of the observed phenomena via karma redistribution.
A self-contained karma economy for the dynamic allocation of common resources
2022-07-01 18:32:46
Ezzat Elokda, Saverio Bolognani, Andrea Censi, Florian Dörfler, Emilio Frazzoli
http://dx.doi.org/10.1007/s13235-023-00503-0, http://arxiv.org/abs/2207.00495v3, http://arxiv.org/pdf/2207.00495v3
econ.TH
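A toy simulation of the karma mechanism sketched above: two randomly matched agents bid units of karma for a resource, the higher bid wins, and the payment is transferred to the loser so karma remains self-contained in the system. The bidding heuristic and all constants are illustrative choices of mine, not the equilibrium policy derived in the paper.

```python
import numpy as np

def karma_simulation(n_agents=100, rounds=20000, init_karma=10, seed=0):
    rng = np.random.default_rng(seed)
    karma = np.full(n_agents, init_karma)
    urgent_contests = urgent_wins = 0
    for _ in range(rounds):
        i, j = rng.choice(n_agents, size=2, replace=False)
        urgency = rng.integers(0, 2, size=2)              # 0 = low urgency, 1 = high
        # Heuristic policy: bid more when urgent, never more than your karma balance.
        bids = np.minimum(1 + 3 * urgency, [karma[i], karma[j]])
        winner, loser = (i, j) if bids[0] >= bids[1] else (j, i)
        pay = bids[0] if winner == i else bids[1]
        karma[winner] -= pay
        karma[loser] += pay          # the loser "gives in now" and is rewarded with karma
        for k, agent in enumerate((i, j)):
            urgent_contests += urgency[k]
            urgent_wins += urgency[k] and agent == winner
    return karma, urgent_wins / urgent_contests

karma, urgent_service = karma_simulation()
print("total karma unchanged:", int(karma.sum()) == 100 * 10)
print("fraction of urgent requests that won:", round(float(urgent_service), 3))
```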
35,938
th
We propose a social welfare maximizing market mechanism for an energy community that aggregates individual and community-shared energy resources under a general net energy metering (NEM) policy. Referred to as Dynamic NEM (D-NEM), the proposed mechanism dynamically sets the community NEM prices based on aggregated community resources, including flexible consumption, storage, and renewable generation. D-NEM guarantees a higher benefit to each community member than possible outside the community, and no sub-communities would be better off departing from its parent community. D-NEM aligns each member's incentive with that of the community such that each member maximizing individual surplus under D-NEM results in maximum community social welfare. Empirical studies compare the proposed mechanism with existing benchmarks, demonstrating its welfare benefits, operational characteristics, and responsiveness to NEM rates.
Dynamic Net Metering for Energy Communities
2023-06-21 09:03:07
Ahmed S. Alahmed, Lang Tong
http://arxiv.org/abs/2306.13677v2, http://arxiv.org/pdf/2306.13677v2
eess.SY
35,923
th
We consider the problem of reforming an envy-free matching when each agent is assigned a single item. Given an envy-free matching, we consider an operation to exchange the item of an agent with an unassigned item preferred by the agent that results in another envy-free matching. We repeat this operation as long as we can. We prove that the resulting envy-free matching is uniquely determined up to the choice of an initial envy-free matching, and can be found in polynomial time. We call the resulting matching a reformist envy-free matching, and then we study a shortest sequence to obtain the reformist envy-free matching from an initial envy-free matching. We prove that a shortest sequence is computationally hard to obtain even when each agent accepts at most four items and each item is accepted by at most three agents. On the other hand, we give polynomial-time algorithms when each agent accepts at most three items or each item is accepted by at most two agents. Inapproximability and fixed-parameter (in)tractability are also discussed.
Reforming an Envy-Free Matching
2022-07-06 16:03:49
Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Yoshio Okamoto, Kenta Ozeki
http://arxiv.org/abs/2207.02641v1, http://arxiv.org/pdf/2207.02641v1
cs.GT
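A direct sketch of the reform operation described above: starting from an envy-free matching, repeatedly swap an agent's item for an unassigned item she strictly prefers, whenever the result stays envy-free. The preference encoding and the tiny example instance are my own.

```python
def prefers(pref, a, b):
    """True if item a is ranked strictly above item b in the preference list
    pref; items missing from the list are unacceptable (ranked last)."""
    ra = pref.index(a) if a in pref else len(pref)
    rb = pref.index(b) if b in pref else len(pref)
    return ra < rb

def is_envy_free(matching, prefs):
    return all(not prefers(prefs[i], matching[j], matching[i])
               for i in matching for j in matching if j != i)

def reformist_matching(matching, prefs, items):
    """Apply the exchange operation until it no longer applies: give an agent an
    unassigned item she strictly prefers, provided the result is envy-free."""
    matching = dict(matching)
    changed = True
    while changed:
        changed = False
        unassigned = set(items) - set(matching.values())
        for agent in matching:
            for item in unassigned:
                if prefers(prefs[agent], item, matching[agent]):
                    trial = dict(matching); trial[agent] = item
                    if is_envy_free(trial, prefs):
                        matching, changed = trial, True
                        break
            if changed:
                break
    return matching

prefs = {1: ['c', 'a'], 2: ['b']}                 # agent 2 only accepts item b
print(reformist_matching({1: 'a', 2: 'b'}, prefs, items=['a', 'b', 'c']))
# {1: 'c', 2: 'b'}: agent 1 upgrades to the unassigned item c without creating envy
```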
35,924
th
Federated learning is typically considered a beneficial technology which allows multiple agents to collaborate with each other, improve the accuracy of their models, and solve problems which are otherwise too data-intensive / expensive to be solved individually. However, under the expectation that other agents will share their data, rational agents may be tempted to engage in detrimental behavior such as free-riding where they contribute no data but still enjoy an improved model. In this work, we propose a framework to analyze the behavior of such rational data generators. We first show how a naive scheme leads to catastrophic levels of free-riding where the benefits of data sharing are completely eroded. Then, using ideas from contract theory, we introduce accuracy shaping based mechanisms to maximize the amount of data generated by each agent. These provably prevent free-riding without needing any payment mechanism.
Mechanisms that Incentivize Data Sharing in Federated Learning
2022-07-11 01:36:52
Sai Praneeth Karimireddy, Wenshuo Guo, Michael I. Jordan
http://arxiv.org/abs/2207.04557v1, http://arxiv.org/pdf/2207.04557v1
cs.GT
35,925
th
During an epidemic, the information available to individuals in the society deeply influences their belief about the epidemic spread, and consequently the preventive measures they take to stay safe from the infection. In this paper, we develop a scalable framework for ascertaining the optimal information disclosure a government must make to individuals in a networked society for the purpose of epidemic containment. This information design problem is complicated by the heterogeneous nature of the society, the positive externalities faced by individuals, and the variety in the public response to such disclosures. We use a networked public goods model to capture the underlying societal structure. Our first main result is a structural decomposition of the government's objectives into two independent components -- a component dependent on the utility function of individuals, and another dependent on properties of the underlying network. Since the network dependent term in this decomposition is unaffected by the signals sent by the government, this characterization simplifies the problem of finding the optimal information disclosure policies. We find explicit conditions, in terms of risk aversion and prudence, under which no disclosure, full disclosure, exaggeration and downplay are the optimal policies. The structural decomposition results are also helpful in studying other forms of interventions like incentive design and network design.
A Scalable Bayesian Persuasion Framework for Epidemic Containment on Heterogeneous Networks
2022-07-23 21:57:39
Shraddha Pathak, Ankur A. Kulkarni
http://arxiv.org/abs/2207.11578v1, http://arxiv.org/pdf/2207.11578v1
eess.SY
35,926
th
In many multi-agent settings, participants can form teams to achieve collective outcomes that may far surpass their individual capabilities. Measuring the relative contributions of agents and allocating them shares of the reward that promote long-lasting cooperation are difficult tasks. Cooperative game theory offers solution concepts identifying distribution schemes, such as the Shapley value, that fairly reflect the contribution of individuals to the performance of the team or the Core, which reduces the incentive of agents to abandon their team. Applications of such methods include identifying influential features and sharing the costs of joint ventures or team formation. Unfortunately, using these solutions requires tackling a computational barrier as they are hard to compute, even in restricted settings. In this work, we show how cooperative game-theoretic solutions can be distilled into a learned model by training neural networks to propose fair and stable payoff allocations. We show that our approach creates models that can generalize to games far from the training distribution and can predict solutions for more players than observed during training. An important application of our framework is Explainable AI: our approach can be used to speed-up Shapley value computations on many instances.
Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members
2022-08-18 15:33:09
Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman
http://arxiv.org/abs/2208.08798v1, http://arxiv.org/pdf/2208.08798v1
cs.LG
35,927
th
What is the purpose of pre-analysis plans, and how should they be designed? We propose a principal-agent model where a decision-maker relies on selective but truthful reports by an analyst. The analyst has data access, and non-aligned objectives. In this model, the implementation of statistical decision rules (tests, estimators) requires an incentive-compatible mechanism. We first characterize which decision rules can be implemented. We then characterize optimal statistical decision rules subject to implementability. We show that implementation requires pre-analysis plans. Focussing specifically on hypothesis tests, we show that optimal rejection rules pre-register a valid test for the case when all data is reported, and make worst-case assumptions about unreported data. Optimal tests can be found as a solution to a linear-programming problem.
Optimal Pre-Analysis Plans: Statistical Decisions Subject to Implementability
2022-08-20 11:54:39
Maximilian Kasy, Jann Spiess
http://arxiv.org/abs/2208.09638v2, http://arxiv.org/pdf/2208.09638v2
econ.EM
35,928
th
Sending and receiving signals is ubiquitous in the living world. It includes everything from individual molecules triggering complex metabolic cascades, to animals using signals to alert their group to the presence of predators. When communication involves common interest, simple sender-receiver games show how reliable signaling can emerge and evolve to transmit information effectively. These games have been analyzed extensively, with some work investigating the role of static network structure on information transfer. However, no existing work has examined the coevolution of strategy and network structure in sender-receiver games. Here we show that coevolution is sufficient to generate the endogenous formation of distinct groups from an initially homogeneous population. It also allows for the emergence of novel ``hybrid'' signaling groups that have not previously been considered or demonstrated in theory or nature. Hybrid groups are composed of different complementary signaling behaviors that rely on evolved network structure to achieve effective communication. Without this structure, such groups would normally fail to effectively communicate. Our findings pertain to all common interest signaling games, are robust across many parameters, and mitigate known problems of inefficient communication. Our work generates new insights for the theory of adaptive behavior, signaling, and group formation in natural and social systems across a wide range of environments in which changing network structure is common. We discuss implications for research on metabolic networks, among neurons, proteins, and social organisms.
Spontaneous emergence of groups and signaling diversity in dynamic networks
2022-10-22 17:35:54
Zachary Fulker, Patrick Forber, Rory Smead, Christoph Riedl
http://arxiv.org/abs/2210.17309v1, http://arxiv.org/pdf/2210.17309v1
cs.SI
35,929
th
The rise of big data analytics has automated the decision-making of companies and increased supply chain agility. In this paper, we study the supply chain contract design problem faced by a data-driven supplier who needs to respond to the inventory decisions of the downstream retailer. Both the supplier and the retailer are uncertain about the market demand and need to learn about it sequentially. The goal for the supplier is to develop data-driven pricing policies with sublinear regret bounds under a wide range of possible retailer inventory policies for a fixed time horizon. To capture the dynamics induced by the retailer's learning policy, we first make a connection to non-stationary online learning by following the notion of variation budget. The variation budget quantifies the impact of the retailer's learning strategy on the supplier's decision-making. We then propose dynamic pricing policies for the supplier for both discrete and continuous demand. We also note that our proposed pricing policy only requires access to the support of the demand distribution, but critically, does not require the supplier to have any prior knowledge about the retailer's learning policy or the demand realizations. We examine several well-known data-driven policies for the retailer, including sample average approximation, distributionally robust optimization, and parametric approaches, and show that our pricing policies lead to sublinear regret bounds in all these cases. At the managerial level, we answer affirmatively that there is a pricing policy with a sublinear regret bound under a wide range of retailer's learning policies, even though she faces a learning retailer and an unknown demand distribution. Our work also provides a novel perspective in data-driven operations management where the principal has to learn to react to the learning policies employed by other agents in the system.
Learning to Price Supply Chain Contracts against a Learning Retailer
2022-11-02 07:00:47
Xuejun Zhao, Ruihao Zhu, William B. Haskell
http://arxiv.org/abs/2211.04586v1, http://arxiv.org/pdf/2211.04586v1
cs.LG
35,930
th
Following the solution to the One-Round Voronoi Game in arXiv:2011.13275, we naturally may want to consider similar games based upon the competitive locating of points and subsequent dividing of territories. In order to appease the tears of White (the first player) after they have potentially been tricked into going first in a game of point-placement, an alternative game (or rather, an extension of the Voronoi game) is the Stackelberg game where all is not lost if Black (the second player) gains over half of the contested area. It turns out that plenty of results can be transferred from One-Round Voronoi Game and what remains to be explored for the Stackelberg game is how best White can mitigate the damage of Black's placements. Since significant weaknesses in certain arrangements were outlined in arXiv:2011.13275, we shall first consider arrangements that still satisfy these results (namely, White plays a certain grid arrangement) and then explore how Black can best exploit these positions.
The Stackelberg Game: responses to regular strategies
2022-11-11 23:17:26
Thomas Byrne
http://arxiv.org/abs/2211.06472v1, http://arxiv.org/pdf/2211.06472v1
cs.CG
35,931
th
We consider the problem of determining a binary ground truth using advice from a group of independent reviewers (experts) who express their guess about a ground truth correctly with some independent probability (competence). In this setting, when all reviewers are competent (competence greater than one-half), the Condorcet Jury Theorem tells us that adding more reviewers increases the overall accuracy, and if all competences are known, then there exists an optimal weighting of the reviewers. However, in practical settings, reviewers may be noisy or incompetent, i.e., competence below half, and the number of experts may be small, so the asymptotic Condorcet Jury Theorem is not practically relevant. In such cases we explore appointing one or more chairs (judges) who determine the weight of each reviewer for aggregation, creating multiple levels. However, these chairs may be unable to correctly identify the competence of the reviewers they oversee, and therefore unable to compute the optimal weighting. We give conditions when a set of chairs is able to weight the reviewers optimally, and depending on the competence distribution of the agents, give results about when it is better to have more chairs or more reviewers. Through numerical simulations we show that in some cases it is better to have more chairs, but in many cases it is better to have more reviewers.
Who Reviews The Reviewers? A Multi-Level Jury Problem
2022-11-15 23:47:14
Ben Abramowitz, Omer Lev, Nicholas Mattei
http://arxiv.org/abs/2211.08494v2, http://arxiv.org/pdf/2211.08494v2
cs.LG
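A quick sketch of the baseline the abstract above builds on: when reviewer competences are known, the classical optimal aggregation is a weighted majority vote with log-odds weights, which can out-perform a simple majority, especially when some reviewers are below one-half. The competence vector below is made up for illustration.

```python
import numpy as np

def committee_accuracy(competences, weights, trials=200_000, seed=0):
    """Monte-Carlo accuracy of a weighted majority vote by independent reviewers
    with the given competences (probability of voting for the true state)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(competences)
    votes = rng.random((trials, len(p))) < p            # True = correct vote
    signed = np.where(votes, 1.0, -1.0)
    return (signed @ weights > 0).mean()

comp = [0.9, 0.6, 0.6, 0.55, 0.35]                      # one reviewer is below 1/2
uniform = np.ones(len(comp))
log_odds = np.log(np.array(comp) / (1 - np.array(comp)))  # classical optimal weights
print("simple majority :", committee_accuracy(comp, uniform))
print("log-odds weights:", committee_accuracy(comp, log_odds))
```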
35,932
th
It is shown in recent studies that in a Stackelberg game the follower can manipulate the leader by deviating from their true best-response behavior. Such manipulations are computationally tractable and can be highly beneficial for the follower. Meanwhile, they may result in significant payoff losses for the leader, sometimes completely defeating their first-mover advantage. A warning to commitment optimizers, the risk these findings indicate appears to be alleviated to some extent by a strict information advantage the manipulations rely on. That is, the follower knows the full information about both players' payoffs whereas the leader only knows their own payoffs. In this paper, we study the manipulation problem with this information advantage relaxed. We consider the scenario where the follower is not given any information about the leader's payoffs to begin with but has to learn to manipulate by interacting with the leader. The follower can gather necessary information by querying the leader's optimal commitments against contrived best-response behaviors. Our results indicate that the information advantage is not entirely indispensable to the follower's manipulations: the follower can learn the optimal way to manipulate in polynomial time with polynomially many queries of the leader's optimal commitment.
Learning to Manipulate a Commitment Optimizer
2023-02-23 10:39:37
Yurong Chen, Xiaotie Deng, Jiarui Gan, Yuhao Li
http://arxiv.org/abs/2302.11829v2, http://arxiv.org/pdf/2302.11829v2
cs.GT
35,933
th
We study the incentivized information acquisition problem, where a principal hires an agent to gather information on her behalf. Such a problem is modeled as a Stackelberg game between the principal and the agent, where the principal announces a scoring rule that specifies the payment, and the agent then chooses an effort level that maximizes her own profit and reports the information. We study the online setting of such a problem from the principal's perspective, i.e., designing the optimal scoring rule by repeatedly interacting with the strategic agent. We design a provably sample-efficient algorithm that tailors the UCB algorithm (Auer et al., 2002) to our model, which achieves a sublinear $T^{2/3}$-regret after $T$ iterations. Our algorithm features a delicate estimation procedure for the optimal profit of the principal, and a conservative correction scheme that ensures that the desired actions of the agent are incentivized. Furthermore, a key feature of our regret bound is that it is independent of the number of states of the environment.
Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model
2023-03-15 16:40:16
Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang
http://arxiv.org/abs/2303.08613v2, http://arxiv.org/pdf/2303.08613v2
cs.LG
35,934
th
We introduce a decentralized mechanism for pricing and exchanging alternatives constrained by transaction costs. We characterize the time-invariant solutions of a heat equation involving a (weighted) Tarski Laplacian operator, defined for max-plus matrix-weighted graphs, as approximate equilibria of the trading system. We study algebraic properties of the solution sets as well as convergence behavior of the dynamical system. We apply these tools to the "economic problem" of allocating scarce resources among competing uses. Our theory suggests differences in competitive equilibrium, bargaining, or cost-benefit analysis, depending on the context, are largely due to differences in the way that transaction costs are incorporated into the decision-making process. We present numerical simulations of the synchronization algorithm (RRAggU), demonstrating our theoretical findings.
Max-Plus Synchronization in Decentralized Trading Systems
2023-04-01 06:07:49
Hans Riess, Michael Munger, Michael M. Zavlanos
http://arxiv.org/abs/2304.00210v2, http://arxiv.org/pdf/2304.00210v2
cs.GT
35,935
th
The creator economy has revolutionized the way individuals can profit through online platforms. In this paper, we initiate the study of online learning in the creator economy by modeling the creator economy as a three-party game between the users, platform, and content creators, with the platform interacting with the content creator under a principal-agent model through contracts to encourage better content. Additionally, the platform interacts with the users to recommend new content, receive an evaluation, and ultimately profit from the content, which can be modeled as a recommender system. Our study aims to explore how the platform can jointly optimize the contract and recommender system to maximize the utility in an online learning fashion. We primarily analyze and compare two families of contracts: return-based contracts and feature-based contracts. Return-based contracts pay the content creator a fraction of the reward the platform gains. In contrast, feature-based contracts pay the content creator based on the quality or features of the content, regardless of the reward the platform receives. We show that under smoothness assumptions, the joint optimization of return-based contracts and recommendation policy provides a regret $\Theta(T^{2/3})$. For the feature-based contract, we introduce a definition of intrinsic dimension $d$ to characterize the hardness of learning the contract and provide an upper bound on the regret $\mathcal{O}(T^{(d+1)/(d+2)})$. The upper bound is tight for the linear family.
Online Learning in a Creator Economy
2023-05-19 04:58:13
Banghua Zhu, Sai Praneeth Karimireddy, Jiantao Jiao, Michael I. Jordan
http://arxiv.org/abs/2305.11381v1, http://arxiv.org/pdf/2305.11381v1
cs.GT
35,936
th
For a federated learning model to perform well, it is crucial to have a diverse and representative dataset. However, the data contributors may only be concerned with the performance on a specific subset of the population, which may not reflect the diversity of the wider population. This creates a tension between the principal (the FL platform designer) who cares about global performance and the agents (the data collectors) who care about local performance. In this work, we formulate this tension as a game between the principal and multiple agents, and focus on the linear experiment design problem to formally study their interaction. We show that the statistical criterion used to quantify the diversity of the data, as well as the choice of the federated learning algorithm used, has a significant effect on the resulting equilibrium. We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population, thereby maximizing global performance.
Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning
2023-06-09 02:38:25
Baihe Huang, Sai Praneeth Karimireddy, Michael I. Jordan
http://arxiv.org/abs/2306.05592v1, http://arxiv.org/pdf/2306.05592v1
cs.GT
35,937
th
Cooperative dynamics are central to our understanding of many phenomena in living and complex systems, including the transition to multicellularity, the emergence of eusociality in insect colonies, and the development of full-fledged human societies. However, we lack a universal mechanism to explain the emergence of cooperation across length scales, across species, and scalable to large populations of individuals. We present a novel framework for modelling cooperation games with an arbitrary number of players by combining reaction networks, methods from quantum mechanics applied to stochastic complex systems, game theory and stochastic simulations of molecular reactions. Using this framework, we propose a novel and robust mechanism based on risk aversion that leads to cooperative behaviour in population games. Rather than individuals seeking to maximise payouts in the long run, individuals seek to obtain a minimum set of resources with a given level of confidence and in a limited time span. We explicitly show that this mechanism leads to the emergence of new Nash equilibria in a wide range of cooperation games. Our results suggest that risk aversion is a viable mechanism to explain the emergence of cooperation in a variety of contexts and with an arbitrary number of individuals greater than three.
Risk aversion promotes cooperation
2023-06-09 18:36:07
Jay Armas, Wout Merbis, Janusz Meylahn, Soroush Rafiee Rad, Mauricio J. del Razo
http://arxiv.org/abs/2306.05971v1, http://arxiv.org/pdf/2306.05971v1
physics.soc-ph
35,939
th
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with *passive* block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of *active* block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the term ``MEV.'' The main results of this paper show that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: with active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These results can be interpreted as a mathematical justification for the current interest in augmenting transaction fee mechanisms with additional components such as order flow auctions, block producer competition, trusted hardware, or cryptographic techniques.
Transaction Fee Mechanism Design with Active Block Producers
2023-07-04 15:35:42
Maryam Bahrani, Pranav Garimidi, Tim Roughgarden
http://arxiv.org/abs/2307.01686v2, http://arxiv.org/pdf/2307.01686v2
cs.GT
35,940
th
This paper introduces a simulation algorithm for evaluating the log-likelihood function of a large supermodular binary-action game. Covered examples include (certain types of) peer effect, technology adoption, strategic network formation, and multi-market entry games. More generally, the algorithm facilitates simulated maximum likelihood (SML) estimation of games with large numbers of players, $T$, and/or many binary actions per player, $M$ (e.g., games with tens of thousands of strategic actions, $TM=O(10^4)$). In such cases the likelihood of the observed pure strategy combination is typically (i) very small and (ii) a $TM$-fold integral whose region of integration has a complicated geometry. Direct numerical integration, as well as accept-reject Monte Carlo integration, are computationally impractical in such settings. In contrast, we introduce a novel importance sampling algorithm which allows for accurate likelihood simulation with modest numbers of simulation draws.
Scenario Sampling for Large Supermodular Games
2023-07-21 21:51:32
Bryan S. Graham, Andrin Pelican
http://arxiv.org/abs/2307.11857v1, http://arxiv.org/pdf/2307.11857v1
econ.EM
35,941
th
We introduce generative interpretation, a new approach to estimating contractual meaning using large language models. As AI triumphalism is the order of the day, we proceed by way of grounded case studies, each illustrating the capabilities of these novel tools in distinct ways. Taking well-known contracts opinions, and sourcing the actual agreements that they adjudicated, we show that AI models can help factfinders ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties' agreements. We also illustrate how models can calculate the probative value of individual pieces of extrinsic evidence. After offering best practices for the use of these models given their limitations, we consider their implications for judicial practice and contract theory. Using LLMs permits courts to estimate what the parties intended cheaply and accurately, and as such generative interpretation unsettles the current interpretative stalemate. Their use responds to efficiency-minded textualists and justice-oriented contextualists, who argue about whether parties will prefer cost and certainty or accuracy and fairness. Parties--and courts--would prefer a middle path, in which adjudicators strive to predict what the contract really meant, admitting just enough context to approximate reality while avoiding unguided and biased assimilation of evidence. As generative interpretation offers this possibility, we argue it can become the new workhorse of contractual interpretation.
Generative Interpretation
2023-08-14 05:59:27
Yonathan A. Arbel, David Hoffman
http://arxiv.org/abs/2308.06907v1, http://arxiv.org/pdf/2308.06907v1
cs.CL
35,942
th
This paper investigates the strategic decision-making capabilities of three Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework of game theory. Utilizing four canonical two-player games -- Prisoner's Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these models navigate social dilemmas, situations where players can either cooperate for a collective benefit or defect for individual gain. Crucially, we extend our analysis to examine the role of contextual framing, such as diplomatic relations or casual friendships, in shaping the models' decisions. Our findings reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual framing, it shows limited ability to engage in abstract strategic reasoning. Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and context, but LLaMa-2 exhibits a more nuanced understanding of the games' underlying mechanics. These results highlight the current limitations and varied proficiencies of LLMs in strategic decision-making, cautioning against their unqualified use in tasks requiring complex strategic reasoning.
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing
2023-09-12 03:54:15
Nunzio Lorè, Babak Heydari
http://arxiv.org/abs/2309.05898v1, http://arxiv.org/pdf/2309.05898v1
cs.GT
35,943
th
The practice of marriage is an understudied phenomenon in behavioural sciences despite being ubiquitous across human cultures. This modelling paper shows that replacing distant direct kin with in-laws increases the interconnectedness of the family social network graph, which allows more cooperative and larger groups. In this framing, marriage can be seen as a social technology that reduces free-riding within a collaborative group. This approach offers a solution to the puzzle of why our species has this particular form of regulating mating behaviour, uniquely among pair-bonded animals.
Network Ecology of Marriage
2023-08-06 10:07:51
Tamas David-Barrett
http://arxiv.org/abs/2310.05928v1, http://arxiv.org/pdf/2310.05928v1
physics.soc-ph
35,944
th
Edge device participation in federated learning (FL) has been typically studied under the lens of device-server communication (e.g., device dropout) and assumes an undying desire from edge devices to participate in FL. As a result, current FL frameworks are flawed when implemented in real-world settings, with many encountering the free-rider problem. In a step to push FL towards realistic settings, we propose RealFM: the first truly federated mechanism which (1) realistically models device utility, (2) incentivizes data contribution and device participation, and (3) provably removes the free-rider phenomenon. RealFM does not require data sharing and allows for a non-linear relationship between model accuracy and utility, which improves the utility gained by the server and participating devices compared to non-participating devices as well as devices participating in other FL mechanisms. On real-world data, RealFM improves device and server utility, as well as data contribution, by up to 3 orders of magnitude and 7x respectively compared to baseline mechanisms.
RealFM: A Realistic Mechanism to Incentivize Data Contribution and Device Participation
2023-10-20 20:40:39
Marco Bornstein, Amrit Singh Bedi, Anit Kumar Sahu, Furqan Khan, Furong Huang
http://arxiv.org/abs/2310.13681v1, http://arxiv.org/pdf/2310.13681v1
cs.GT
35,945
th
We present a nonparametric statistical test for determining whether an agent is following a given mixed strategy in a repeated strategic-form game given samples of the agent's play. This involves two components: determining whether the agent's frequencies of pure strategies are sufficiently close to the target frequencies, and determining whether the pure strategies selected are independent between different game iterations. Our integrated test involves applying a chi-squared goodness of fit test for the first component and a generalized Wald-Wolfowitz runs test for the second component. The results from both tests are combined using Bonferroni correction to produce a complete test for a given significance level $\alpha.$ We applied the test to publicly available data of human rock-paper-scissors play. The data consists of 50 iterations of play for 500 human players. We test with a null hypothesis that the players are following a uniform random strategy independently at each game iteration. Using a significance level of $\alpha = 0.05$, we conclude that 305 (61%) of the subjects are following the target strategy.
Nonparametric Strategy Test
2023-12-17 15:09:42
Sam Ganzfried
http://arxiv.org/abs/2312.10695v2, http://arxiv.org/pdf/2312.10695v2
stat.ME
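A minimal sketch of the two-component test described in the "Nonparametric Strategy Test" abstract above, assuming a uniform target strategy over rock-paper-scissors. The chi-squared component uses scipy directly; the independence component is approximated here by a simple Monte Carlo permutation runs test rather than the paper's generalized Wald-Wolfowitz test, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import chisquare

def runs_count(seq):
    """Number of runs (maximal blocks of identical consecutive symbols)."""
    return 1 + sum(seq[i] != seq[i - 1] for i in range(1, len(seq)))

def strategy_test(plays, target_probs, alpha=0.05, n_perm=10_000, seed=0):
    plays = np.asarray(plays)
    n, k = len(plays), len(target_probs)
    # Component 1: chi-squared goodness of fit against the target frequencies.
    observed = np.bincount(plays, minlength=k)
    expected = n * np.asarray(target_probs)
    _, p_freq = chisquare(observed, expected)
    # Component 2: independence across iterations via a permutation runs test.
    rng = np.random.default_rng(seed)
    r_obs = runs_count(plays)
    r_null = np.array([runs_count(rng.permutation(plays)) for _ in range(n_perm)])
    # Two-sided Monte Carlo p-value: too few or too many runs both signal dependence.
    p_runs = min(1.0, 2 * min((r_null <= r_obs).mean(), (r_null >= r_obs).mean()))
    # Bonferroni correction across the two components.
    reject = (p_freq < alpha / 2) or (p_runs < alpha / 2)
    return p_freq, p_runs, reject

# Example: 50 rock-paper-scissors plays coded 0/1/2, tested against uniform play.
rng = np.random.default_rng(1)
plays = rng.integers(0, 3, size=50)
print(strategy_test(plays, [1/3, 1/3, 1/3]))
```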
35,946
th
Nash equilibrium is one of the most influential solution concepts in game theory. With the development of computer science and artificial intelligence, there is an increasing demand on Nash equilibrium computation, especially for Internet economics and multi-agent learning. This paper reviews various algorithms computing the Nash equilibrium and its approximation solutions in finite normal-form games from both theoretical and empirical perspectives. For the theoretical part, we classify algorithms in the literature and present basic ideas on algorithm design and analysis. For the empirical part, we present a comprehensive comparison on the algorithms in the literature over different kinds of games. Based on these results, we provide practical suggestions on implementations and uses of these algorithms. Finally, we present a series of open problems from both theoretical and practical considerations.
A survey on algorithms for Nash equilibria in finite normal-form games
2023-12-18 13:00:47
Hanyu Li, Wenhan Huang, Zhijian Duan, David Henry Mguni, Kun Shao, Jun Wang, Xiaotie Deng
http://arxiv.org/abs/2312.11063v1, http://arxiv.org/pdf/2312.11063v1
cs.GT
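The survey above covers many algorithms; as a minimal illustration of what "computing a Nash equilibrium" of a finite normal-form game verifies, the sketch below checks that uniform mixing is an equilibrium of matching pennies by confirming that no pure deviation improves either player's expected payoff. It is not any specific algorithm from the survey.

```python
import numpy as np

# Matching pennies: row player payoffs A, column player payoffs B = -A.
A = np.array([[1., -1.],
              [-1., 1.]])
B = -A

def is_nash(A, B, x, y, tol=1e-9):
    """Check whether mixed strategies (x, y) form a Nash equilibrium:
    no pure strategy gives either player a strictly higher expected payoff."""
    row_payoffs = A @ y          # expected payoff of each pure row strategy vs y
    col_payoffs = x @ B          # expected payoff of each pure column strategy vs x
    return (row_payoffs.max() <= x @ A @ y + tol and
            col_payoffs.max() <= x @ B @ y + tol)

x = y = np.array([0.5, 0.5])
print(is_nash(A, B, x, y))   # True: uniform mixing is the unique equilibrium
```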
35,947
th
In this paper, a mathematically rigorous solution overturns existing wisdom regarding New Keynesian Dynamic Stochastic General Equilibrium. I develop a formal concept of stochastic equilibrium. I prove uniqueness and necessity, when agents are patient, across a wide class of dynamic stochastic models. Existence depends on appropriately specified eigenvalue conditions. Otherwise, no solution of any kind exists. I construct the equilibrium for the benchmark Calvo New Keynesian. I provide novel comparative statics with the non-stochastic model of independent mathematical interest. I uncover a bifurcation between neighbouring stochastic systems and approximations taken from the Zero Inflation Non-Stochastic Steady State (ZINSS). The correct Phillips curve agrees with the zero limit from the trend inflation framework. It contains a large lagged inflation coefficient and a small response to expected inflation. The response to the output gap is always muted and is zero at standard parameters. A neutrality result is presented to explain why and to align Calvo with Taylor pricing. Present and lagged demand shocks enter the Phillips curve so there is no Divine Coincidence and the system is identified from structural shocks alone. The lagged inflation slope is increasing in the inflation response, embodying substantive policy trade-offs. The Taylor principle is reversed, inactive settings are necessary for existence, pointing towards inertial policy. The observational equivalence idea of the Lucas critique is disproven. The bifurcation results from the breakdown of the constraints implied by lagged nominal rigidity, associated with cross-equation cancellation possible only at ZINSS. There is a dual relationship between restrictions on the econometrician and constraints on repricing firms. Thus if the model is correct, goodness of fit will jump.
Stochastic Equilibrium the Lucas Critique and Keynesian Economics
2023-12-24 01:59:33
David Staines
http://arxiv.org/abs/2312.16214v1, http://arxiv.org/pdf/2312.16214v1
econ.TH
35,948
th
We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d., and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn [1950], [1953]) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell [1957] proved that in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading.
Team Decision Problems with Classical and Quantum Signals
2011-07-01 17:32:15
Adam Brandenburger, Pierfrancesco La Mura
http://dx.doi.org/10.1098/rsta.2015.0096, http://arxiv.org/abs/1107.0237v3, http://arxiv.org/pdf/1107.0237v3
quant-ph
35,949
th
This paper derives a robust on-line equity trading algorithm that achieves the greatest possible percentage of the final wealth of the best pairs rebalancing rule in hindsight. A pairs rebalancing rule chooses some pair of stocks in the market and then perpetually executes rebalancing trades so as to maintain a target fraction of wealth in each of the two. After each discrete market fluctuation, a pairs rebalancing rule will sell a precise amount of the outperforming stock and put the proceeds into the underperforming stock. Under typical conditions, in hindsight one can find pairs rebalancing rules that would have spectacularly beaten the market. Our trading strategy, which extends Ordentlich and Cover's (1998) "max-min universal portfolio," guarantees to achieve an acceptable percentage of the hindsight-optimized wealth, a percentage which tends to zero at a slow (polynomial) rate. This means that on a long enough investment horizon, the trader can enforce a compound-annual growth rate that is arbitrarily close to that of the best pairs rebalancing rule in hindsight. The strategy will "beat the market asymptotically" if there turns out to exist a pairs rebalancing rule that grows capital at a higher asymptotic rate than the market index. The advantages of our algorithm over the Ordentlich and Cover (1998) strategy are twofold. First, their strategy is impossible to compute in practice. Second, in considering the more modest benchmark (instead of the best all-stock rebalancing rule in hindsight), we reduce the "cost of universality" and achieve a higher learning rate.
Super-Replication of the Best Pairs Trade in Hindsight
2018-10-05 01:30:01
Alex Garivaltis
http://arxiv.org/abs/1810.02444v4, http://arxiv.org/pdf/1810.02444v4
q-fin.PM
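To make the hindsight benchmark in the abstract above concrete, the sketch below computes (in discrete periods) the final wealth of a pairs rebalancing rule that holds a fixed fraction of wealth in each of two stocks, and grid-searches for the best fraction in hindsight. The return series and grid are illustrative; the universal trading algorithm itself is not reproduced.

```python
import numpy as np

def pairs_rebalancing_wealth(returns1, returns2, b):
    """Final wealth of a $1 deposit into a pairs rebalancing rule that keeps a
    fraction b of wealth in stock 1 and 1-b in stock 2, rebalancing each period."""
    growth = b * (1 + np.asarray(returns1)) + (1 - b) * (1 + np.asarray(returns2))
    return growth.prod()

def best_pair_rule_in_hindsight(returns1, returns2, grid=np.linspace(0, 1, 101)):
    wealth = np.array([pairs_rebalancing_wealth(returns1, returns2, b) for b in grid])
    i = wealth.argmax()
    return grid[i], wealth[i]

# Illustrative made-up period returns for two stocks.
rng = np.random.default_rng(3)
r1 = rng.normal(0.001, 0.02, 250)
r2 = rng.normal(0.001, 0.03, 250)
print(best_pair_rule_in_hindsight(r1, r2))
```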
35,950
th
This paper prices and replicates the financial derivative whose payoff at $T$ is the wealth that would have accrued to a $\$1$ deposit into the best continuously-rebalanced portfolio (or fixed-fraction betting scheme) determined in hindsight. For the single-stock Black-Scholes market, Ordentlich and Cover (1998) only priced this derivative at time-0, giving $C_0=1+\sigma\sqrt{T/(2\pi)}$. Of course, the general time-$t$ price is not equal to $1+\sigma\sqrt{(T-t)/(2\pi)}$. I complete the Ordentlich-Cover (1998) analysis by deriving the price at any time $t$. By contrast, I also study the more natural case of the best levered rebalancing rule in hindsight. This yields $C(S,t)=\sqrt{T/t}\cdot\,\exp\{rt+\sigma^2b(S,t)^2\cdot t/2\}$, where $b(S,t)$ is the best rebalancing rule in hindsight over the observed history $[0,t]$. I show that the replicating strategy amounts to betting the fraction $b(S,t)$ of wealth on the stock over the interval $[t,t+dt].$ This fact holds for the general market with $n$ correlated stocks in geometric Brownian motion: we get $C(S,t)=(T/t)^{n/2}\exp(rt+b'\Sigma b\cdot t/2)$, where $\Sigma$ is the covariance of instantaneous returns per unit time. This result matches the $\mathcal{O}(T^{n/2})$ "cost of universality" derived by Cover in his "universal portfolio theory" (1986, 1991, 1996, 1998), which super-replicates the same derivative in discrete-time. The replicating strategy compounds its money at the same asymptotic rate as the best levered rebalancing rule in hindsight, thereby beating the market asymptotically. Naturally enough, we find that the American-style version of Cover's Derivative is never exercised early in equilibrium.
Exact Replication of the Best Rebalancing Rule in Hindsight
2018-10-05 04:36:19
Alex Garivaltis
http://arxiv.org/abs/1810.02485v2, http://arxiv.org/pdf/1810.02485v2
q-fin.PR
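A small sketch that only evaluates the time-$t$ pricing formula quoted in the abstract above, $C(S,t)=(T/t)^{n/2}\exp(rt+b'\Sigma b\,t/2)$, for a user-supplied hindsight-optimal rebalancing rule $b$; how $b(S,t)$ is obtained from the observed price history is not reproduced, and the numbers are illustrative.

```python
import numpy as np

def cover_derivative_price(b, Sigma, r, t, T):
    """Time-t price of the levered hindsight rebalancing derivative,
    C(S,t) = (T/t)^(n/2) * exp(r t + b' Sigma b * t / 2),
    where b is the best rebalancing rule in hindsight over [0, t]."""
    b = np.asarray(b, dtype=float)
    n = b.size
    return (T / t) ** (n / 2) * np.exp(r * t + 0.5 * (b @ Sigma @ b) * t)

# Illustrative inputs (not from the paper): 2 stocks, 5 years into a 10-year horizon.
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
b = np.array([1.2, 0.4])     # hindsight-optimal levered allocation, assumed given
print(cover_derivative_price(b, Sigma, r=0.02, t=5.0, T=10.0))
```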
35,951
th
This paper studies a two-person trading game in continuous time that generalizes Garivaltis (2018) to allow for stock prices that both jump and diffuse. Analogous to Bell and Cover (1988) in discrete time, the players start by choosing fair randomizations of the initial dollar, by exchanging it for a random wealth whose mean is at most 1. Each player then deposits the resulting capital into some continuously-rebalanced portfolio that must be adhered to over $[0,t]$. We solve the corresponding `investment $\phi$-game,' namely the zero-sum game with payoff kernel $\mathbb{E}[\phi\{\textbf{W}_1V_t(b)/(\textbf{W}_2V_t(c))\}]$, where $\textbf{W}_i$ is player $i$'s fair randomization, $V_t(b)$ is the final wealth that accrues to a one dollar deposit into the rebalancing rule $b$, and $\phi(\bullet)$ is any increasing function meant to measure relative performance. We show that the unique saddle point is for both players to use the (leveraged) Kelly rule for jump diffusions, which is ordinarily defined by maximizing the asymptotic almost-sure continuously-compounded capital growth rate. Thus, the Kelly rule for jump diffusions is the correct behavior for practically anybody who wants to outperform other traders (on any time frame) with respect to practically any measure of relative performance.
Game-Theoretic Optimal Portfolios for Jump Diffusions
2018-12-11 21:43:09
Alex Garivaltis
http://arxiv.org/abs/1812.04603v2, http://arxiv.org/pdf/1812.04603v2
econ.GN
35,952
th
We study T. Cover's rebalancing option (Ordentlich and Cover 1998) under discrete hindsight optimization in continuous time. The payoff in question is equal to the final wealth that would have accrued to a $\$1$ deposit into the best of some finite set of (perhaps levered) rebalancing rules determined in hindsight. A rebalancing rule (or fixed-fraction betting scheme) amounts to fixing an asset allocation (i.e. $200\%$ stocks and $-100\%$ bonds) and then continuously executing rebalancing trades to counteract allocation drift. Restricting the hindsight optimization to a small number of rebalancing rules (i.e. 2) has some advantages over the pioneering approach taken by Cover $\&$ Company in their brilliant theory of universal portfolios (1986, 1991, 1996, 1998), where one's on-line trading performance is benchmarked relative to the final wealth of the best unlevered rebalancing rule of any kind in hindsight. Our approach lets practitioners express an a priori view that one of the favored asset allocations ("bets") $b\in\{b_1,...,b_n\}$ will turn out to have performed spectacularly well in hindsight. In limiting our robustness to some discrete set of asset allocations (rather than all possible asset allocations) we reduce the price of the rebalancing option and guarantee to achieve a correspondingly higher percentage of the hindsight-optimized wealth at the end of the planning period. A practitioner who lives to delta-hedge this variant of Cover's rebalancing option through several decades is guaranteed to see the day that his realized compound-annual capital growth rate is very close to that of the best $b_i$ in hindsight. Hence the point of the rock-bottom option price.
Cover's Rebalancing Option With Discrete Hindsight Optimization
2019-03-03 07:36:48
Alex Garivaltis
http://arxiv.org/abs/1903.00829v2, http://arxiv.org/pdf/1903.00829v2
q-fin.PM
35,953
th
I derive practical formulas for optimal arrangements between sophisticated stock market investors (namely, continuous-time Kelly gamblers or, more generally, CRRA investors) and the brokers who lend them cash for leveraged bets on a high Sharpe asset (i.e. the market portfolio). Rather than, say, the broker posting a monopoly price for margin loans, the gambler agrees to use a greater quantity of margin debt than he otherwise would in exchange for an interest rate that is lower than the broker would otherwise post. The gambler thereby attains a higher asymptotic capital growth rate and the broker enjoys a greater rate of intermediation profit than would obtain under non-cooperation. If the threat point represents a vicious breakdown of negotiations (resulting in zero margin loans), then we get an elegant rule of thumb: $r_L^*=(3/4)r+(1/4)(\nu-\sigma^2/2)$, where $r$ is the broker's cost of funds, $\nu$ is the compound-annual growth rate of the market index, and $\sigma$ is the annual volatility. We show that, regardless of the particular threat point, the gambler will negotiate to size his bets as if he himself could borrow at the broker's call rate.
Nash Bargaining Over Margin Loans to Kelly Gamblers
2019-04-14 08:13:50
Alex Garivaltis
http://dx.doi.org/10.13140/RG.2.2.29080.65286, http://arxiv.org/abs/1904.06628v2, http://arxiv.org/pdf/1904.06628v2
econ.GN
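The rule of thumb quoted in the abstract above, $r_L^*=(3/4)r+(1/4)(\nu-\sigma^2/2)$, is easy to evaluate; the inputs below are purely illustrative, not calibrations from the paper.

```python
def negotiated_margin_rate(r, nu, sigma):
    """Rule-of-thumb negotiated margin loan rate from the abstract:
    r_L* = (3/4) r + (1/4) (nu - sigma^2 / 2)."""
    return 0.75 * r + 0.25 * (nu - 0.5 * sigma ** 2)

# Illustrative inputs: cost of funds 2%, index growth 9%/yr, volatility 16%/yr.
print(f"{negotiated_margin_rate(r=0.02, nu=0.09, sigma=0.16):.4f}")
```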
35,954
th
This paper supplies two possible resolutions of Fortune's (2000) margin-loan pricing puzzle. Fortune (2000) noted that the margin loan interest rates charged by stock brokers are very high in relation to the actual (low) credit risk and the cost of funds. If we live in the Black-Scholes world, the brokers are presumably making arbitrage profits by shorting dynamically precise amounts of their clients' portfolios. First, we extend Fortune's (2000) application of Merton's (1974) no-arbitrage approach to allow for brokers that can only revise their hedges finitely many times during the term of the loan. We show that extremely small differences in the revision frequency can easily explain the observed variation in margin loan pricing. In fact, four additional revisions per three-day period serve to explain all of the currently observed heterogeneity. Second, we study monopolistic (or oligopolistic) margin loan pricing by brokers whose clients are continuous-time Kelly gamblers. The broker solves a general stochastic control problem that yields simple and pleasant formulas for the optimal interest rate and the net interest margin. If the author owned a brokerage, he would charge an interest rate of $(r+\nu)/2-\sigma^2/4$, where $r$ is the cost of funds, $\nu$ is the compound-annual growth rate of the S&P 500 index, and $\sigma$ is the volatility.
Two Resolutions of the Margin Loan Pricing Puzzle
2019-06-03 21:57:56
Alex Garivaltis
http://arxiv.org/abs/1906.01025v2, http://arxiv.org/pdf/1906.01025v2
econ.GN
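Similarly, the monopoly margin rate quoted in the abstract above, $(r+\nu)/2-\sigma^2/4$, can be computed directly; the inputs are again illustrative.

```python
def monopoly_margin_rate(r, nu, sigma):
    """Monopolist broker's optimal margin loan rate from the abstract:
    (r + nu) / 2 - sigma^2 / 4."""
    return 0.5 * (r + nu) - 0.25 * sigma ** 2

print(f"{monopoly_margin_rate(r=0.02, nu=0.09, sigma=0.16):.4f}")
```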
35,955
th
We consider a two-person trading game in continuous time whereby each player chooses a constant rebalancing rule $b$ that he must adhere to over $[0,t]$. If $V_t(b)$ denotes the final wealth of the rebalancing rule $b$, then Player 1 (the `numerator player') picks $b$ so as to maximize $\mathbb{E}[V_t(b)/V_t(c)]$, while Player 2 (the `denominator player') picks $c$ so as to minimize it. In the unique Nash equilibrium, both players use the continuous-time Kelly rule $b^*=c^*=\Sigma^{-1}(\mu-r\textbf{1})$, where $\Sigma$ is the covariance of instantaneous returns per unit time, $\mu$ is the drift vector of the stock market, and $\textbf{1}$ is a vector of ones. Thus, even over very short intervals of time $[0,t]$, the desire to perform well relative to other traders leads one to adopt the Kelly rule, which is ordinarily derived by maximizing the asymptotic exponential growth rate of wealth. Hence, we find agreement with Bell and Cover's (1988) result in discrete time.
Game-Theoretic Optimal Portfolios in Continuous Time
2019-06-05 21:01:31
Alex Garivaltis
http://arxiv.org/abs/1906.02216v2, http://arxiv.org/pdf/1906.02216v2
q-fin.PM
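The equilibrium strategy in the abstract above is the continuous-time Kelly rule $b^*=\Sigma^{-1}(\mu-r\mathbf{1})$; a small numpy sketch with made-up market parameters follows.

```python
import numpy as np

def kelly_rule(mu, r, Sigma):
    """Continuous-time Kelly allocation b* = Sigma^{-1} (mu - r * 1)."""
    mu = np.asarray(mu, dtype=float)
    return np.linalg.solve(Sigma, mu - r * np.ones_like(mu))

# Illustrative two-stock market (numbers are not from the paper).
mu = [0.08, 0.11]                     # drift vector of the stock market
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])      # covariance of instantaneous returns per unit time
print(kelly_rule(mu, r=0.02, Sigma=Sigma))
```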
35,956
th
I unravel the basic long run dynamics of the broker call money market, which is the pile of cash that funds margin loans to retail clients (read: continuous time Kelly gamblers). Call money is assumed to supply itself perfectly inelastically, and to continuously reinvest all principal and interest. I show that the relative size of the money market (that is, relative to the Kelly bankroll) is a martingale that nonetheless converges in probability to zero. The margin loan interest rate is a submartingale that converges in mean square to the choke price $r_\infty:=\nu-\sigma^2/2$, where $\nu$ is the asymptotic compound growth rate of the stock market and $\sigma$ is its annual volatility. In this environment, the gambler no longer beats the market asymptotically a.s. by an exponential factor (as he would under perfectly elastic supply). Rather, he beats the market asymptotically with very high probability (think 98%) by a factor (say 1.87, or 87% more final wealth) whose mean cannot exceed what the leverage ratio was at the start of the model (say, $2:1$). Although the ratio of the gambler's wealth to that of an equivalent buy-and-hold investor is a submartingale (always expected to increase), his realized compound growth rate converges in mean square to $\nu$. This happens because the equilibrium leverage ratio converges to $1:1$ in lockstep with the gradual rise of margin loan interest rates.
Long Run Feedback in the Broker Call Money Market
2019-06-24 20:01:51
Alex Garivaltis
http://arxiv.org/abs/1906.10084v2, http://arxiv.org/pdf/1906.10084v2
econ.GN
35,957
th
Supply chains are the backbone of the global economy. Disruptions to them can be costly. Centrally managed supply chains invest in ensuring their resilience. Decentralized supply chains, however, must rely upon the self-interest of their individual components to maintain the resilience of the entire chain. We examine the incentives that independent self-interested agents have in forming a resilient supply chain network in the face of production disruptions and competition. In our model, competing suppliers are subject to yield uncertainty (they deliver less than ordered) and congestion (lead time uncertainty, or "soft" supply caps). Competing retailers must decide which suppliers to link to based on both price and reliability. In the presence of yield uncertainty only, the resulting supply chain networks are sparse. Retailers concentrate their links on a single supplier, counter to the idea that they should mitigate yield uncertainty by diversifying their supply base. This happens because retailers benefit from supply variance. It suggests that competition will amplify output uncertainty. When congestion is included as well, the resulting networks are denser and resemble the bipartite expander graphs that have been proposed in the supply chain literature, thereby providing the first example of endogenous formation of resilient supply chain networks, without resilience being explicitly encoded in payoffs. Finally, we show that a supplier's investments in improved yield can make it worse off. This happens because high production output saturates the market, which, in turn, lowers prices and profits for participants.
Strategic Formation and Reliability of Supply Chain Networks
2019-09-17 21:46:03
Victor Amelkin, Rakesh Vohra
http://arxiv.org/abs/1909.08021v2, http://arxiv.org/pdf/1909.08021v2
cs.GT
35,958
th
In this paper, we consider the problem of resource congestion control for competing online learning agents. Taking a non-cooperative game as the model for the interaction between the agents, and noisy online mirror ascent as the model for their rational behavior, we propose a novel pricing mechanism which gives the agents incentives for sustainable use of the resources. Our mechanism is distributed and resource-centric, in the sense that it is operated by the resources themselves rather than by a centralized instance, and that it is based on the congestion state of the resources rather than on the preferences of the agents. In the case of persistent noise, and for several choices of the intrinsic parameters of the agents, such as their learning rate, and of the mechanism parameters, such as the learning rate and progressivity of the price-setters and the extrinsic price sensitivity of the agents, we show that the cumulative violation of the resource constraints by the resulting iterates is sub-linear w.r.t. the time horizon. Moreover, we provide numerical simulations to support our theoretical findings.
Pricing Mechanism for Resource Sustainability in Competitive Online Learning Multi-Agent Systems
2019-10-21 15:49:00
Ezra Tampubolon, Holger Boche
http://arxiv.org/abs/1910.09314v1, http://arxiv.org/pdf/1910.09314v1
cs.LG
35,959
th
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback-Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, while in cases when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer least-biased decisions based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. The incorporation of prior beliefs into such a framework allowed us to systematically explore the effects of prior beliefs on decision-making in the presence of market feedback, as well as, importantly, adding a temporal interpretation to the framework. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation of past beliefs and utility maximisation behaviour of the agent as well as an analysis of the evolution of agent beliefs.
A maximum entropy model of bounded rational decision-making with prior beliefs and market feedback
2021-02-18 09:41:59
Benjamin Patrick Evans, Mikhail Prokopenko
http://dx.doi.org/10.3390/e23060669, http://arxiv.org/abs/2102.09180v3, http://arxiv.org/pdf/2102.09180v3
cs.IT
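As a heavily simplified illustration of the kind of decision rule the abstract above describes (not the paper's QRSE market model), the maximum-entropy choice distribution under a KL cost relative to a prior is the tilted prior: posterior probabilities proportional to the prior times exp(payoff / temperature), where the temperature stands in for the cost of information acquisition. All inputs below are made up.

```python
import numpy as np

def bounded_rational_choice(prior, payoffs, temperature):
    """Maximum-entropy decision with a KL(posterior || prior) information cost:
    p_i proportional to prior_i * exp(payoff_i / temperature).
    temperature -> 0 recovers pure payoff maximisation (homo economicus);
    temperature -> infinity leaves the agent at its prior beliefs."""
    prior = np.asarray(prior, dtype=float)
    payoffs = np.asarray(payoffs, dtype=float)
    logits = np.log(prior) + payoffs / temperature
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    return p / p.sum()

prior = [0.5, 0.3, 0.2]          # illustrative prior beliefs over three actions
payoffs = [1.0, 2.0, 0.5]        # illustrative expected utilities
for T in (0.1, 1.0, 10.0):
    print(T, bounded_rational_choice(prior, payoffs, T))
```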
35,960
th
Solar Renewable Energy Certificate (SREC) markets are a market-based system that incentivizes solar energy generation. A regulatory body imposes a lower bound on the amount of energy each regulated firm must generate via solar means, providing them with a tradeable certificate for each MWh generated. Firms seek to navigate the market optimally by modulating their SREC generation and trading rates. As such, the SREC market can be viewed as a stochastic game, where agents interact through the SREC price. We study this stochastic game by solving the mean-field game (MFG) limit with sub-populations of heterogeneous agents. Market participants optimize costs accounting for trading frictions, cost of generation, non-linear non-compliance costs, and generation uncertainty. Moreover, we endogenize SREC price through market clearing. We characterize firms' optimal controls as the solution of McKean-Vlasov (MV) FBSDEs and determine the equilibrium SREC price. We establish the existence and uniqueness of a solution to this MV-FBSDE, and prove that the MFG strategies form an $\epsilon$-Nash equilibrium for the finite player game. Finally, we develop a numerical scheme for solving the MV-FBSDEs and conduct a simulation study.
A Mean-Field Game Approach to Equilibrium Pricing in Solar Renewable Energy Certificate Markets
2020-03-10 22:23:22
Arvind Shrivats, Dena Firoozi, Sebastian Jaimungal
http://arxiv.org/abs/2003.04938v5, http://arxiv.org/pdf/2003.04938v5
q-fin.MF
35,961
th
Forward invariance of a basin of attraction is often overlooked when using a Lyapunov stability theorem to prove local stability; even if the Lyapunov function decreases monotonically in a neighborhood of an equilibrium, the dynamic may escape from this neighborhood. In this note, we fix this gap by finding a smaller neighborhood that is forward invariant. This helps us to prove local stability more naturally without tracking each solution path. Similarly, we prove a transitivity theorem about basins of attractions without requiring forward invariance. Keywords: Lyapunov function, local stability, forward invariance, evolutionary dynamics.
On forward invariance in Lyapunov stability theorem for local stability
2020-06-08 01:08:33
Dai Zusai
http://arxiv.org/abs/2006.04280v1, http://arxiv.org/pdf/2006.04280v1
math.OC
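The gap described in the abstract above is usually closed by shrinking to a sublevel set; a standard construction (stated here in generic form, not necessarily the exact neighbourhood used in the note) is sketched below.

```latex
% Standard forward-invariant neighbourhood built from a sublevel set of V.
% Assumptions: x^* is an equilibrium, V is continuous with V(x^*) = 0,
% V > 0 on U \setminus \{x^*\}, and V is nonincreasing along solutions in U.
\begin{aligned}
&\text{Pick } r > 0 \text{ with } \overline{B}_r(x^*) \subset U,
 \qquad c = \min_{\|x - x^*\| = r} V(x) > 0,\\
&W = \{\, x \in B_r(x^*) : V(x) < c \,\}.\\
&\text{Then } W \text{ is forward invariant: a solution starting in } W
 \text{ cannot reach the sphere } \|x - x^*\| = r,\\
&\text{since doing so would require } V \text{ to rise to at least } c,
 \text{ contradicting that } V \text{ is nonincreasing along solutions.}
\end{aligned}
```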
35,962
th
In this work, we study a system of two interacting non-cooperative Q-learning agents, where one agent has the privilege of observing the other's actions. We show that this information asymmetry can lead to a stable outcome of population learning, which generally does not occur in an environment of general independent learners. The resulting post-learning policies are almost optimal in the underlying game sense, i.e., they form a Nash equilibrium. Furthermore, we propose in this work a Q-learning algorithm, requiring predictive observation of the opponent's two subsequent actions, that yields an optimal strategy given that the opponent applies a stationary strategy, and we discuss the existence of the Nash equilibrium in the underlying information asymmetrical game.
On Information Asymmetry in Competitive Multi-Agent Reinforcement Learning: Convergence and Optimality
2020-10-21 14:19:53
Ezra Tampubolon, Haris Ceribasic, Holger Boche
http://arxiv.org/abs/2010.10901v2, http://arxiv.org/pdf/2010.10901v2
cs.LG
35,963
th
A common goal in the areas of secure information flow and privacy is to build effective defenses against unwanted leakage of information. To this end, one must be able to reason about potential attacks and their interplay with possible defenses. In this paper, we propose a game-theoretic framework to formalize strategies of attacker and defender in the context of information leakage, and provide a basis for developing optimal defense methods. A novelty of our games is that their utility is given by information leakage, which in some cases may behave in a non-linear way. This causes a significant deviation from classic game theory, in which utility functions are linear with respect to players' strategies. Hence, a key contribution of this paper is the establishment of the foundations of information leakage games. We consider two kinds of games, depending on the notion of leakage considered. The first kind, the QIF-games, is tailored for the theory of quantitative information flow (QIF). The second one, the DP-games, corresponds to differential privacy (DP).
Information Leakage Games: Exploring Information as a Utility Function
2020-12-22 17:51:30
Mário S. Alvim, Konstantinos Chatzikokolakis, Yusuke Kawamoto, Catuscia Palamidessi
http://dx.doi.org/10.1145/3517330, http://arxiv.org/abs/2012.12060v3, http://arxiv.org/pdf/2012.12060v3
cs.CR
35,964
th
This paper studies the general relationship between the gearing ratio of a Leveraged ETF and its corresponding expense ratio, viz., the investment management fees that are charged for the provision of this levered financial service. It must not be possible for an investor to combine two or more LETFs in such a way that his (continuously-rebalanced) LETF portfolio can match the gearing ratio of a given, professionally managed product and, at the same time, enjoy lower weighted-average expenses than the existing LETF. Given a finite set of LETFs that exist in the marketplace, I give necessary and sufficient conditions for these products to be undominated in the price-gearing plane. In a beautiful application of the duality theorem of linear programming, I prove a kind of two-fund theorem for LETFs: given a target gearing ratio for the investor, the cheapest way to achieve it is to combine (uniquely) the two nearest undominated LETF products that bracket it on the leverage axis. This also happens to be the implementation that has the lowest annual turnover. For the writer's enjoyment, we supply a second proof of the Main Theorem on LETFs that is based on Carath\'eodory's theorem in convex geometry. Thus, say, a triple-leveraged ("UltraPro") exchange-traded product should never be mixed with cash, if the investor is able to trade in the underlying index. In terms of financial innovation, our two-fund theorem for LETFs implies that the introduction of new, undominated 2.5x products would increase the welfare of all investors whose preferred gearing ratios lie between 2x ("Ultra") and 3x ("UltraPro"). Similarly for a 1.5x product.
Rational Pricing of Leveraged ETF Expense Ratios
2021-06-28 18:56:05
Alex Garivaltis
http://arxiv.org/abs/2106.14820v2, http://arxiv.org/pdf/2106.14820v2
econ.TH
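A hedged sketch of the bracketing recipe described in the abstract above: given a target gearing ratio and a menu of undominated LETFs, continuously rebalance between the two nearest products that bracket the target, with linear weights, and pay the corresponding weighted-average expense ratio. The product menu below is made up for illustration.

```python
def two_fund_letf_mix(target_gearing, products):
    """products: list of (gearing, expense_ratio) pairs, assumed undominated.
    Returns the bracketing pair, their weights, and the blended expense ratio
    for a continuously-rebalanced mix whose gearing equals the target."""
    products = sorted(products)                       # sort by gearing ratio
    gearings = [g for g, _ in products]
    if not (gearings[0] <= target_gearing <= gearings[-1]):
        raise ValueError("target gearing outside the available range")
    for (g_lo, e_lo), (g_hi, e_hi) in zip(products, products[1:]):
        if g_lo <= target_gearing <= g_hi:
            w = 1.0 if g_hi == g_lo else (g_hi - target_gearing) / (g_hi - g_lo)
            expense = w * e_lo + (1 - w) * e_hi
            return (g_lo, g_hi), (w, 1 - w), expense

# Illustrative menu: 1x index fund, 2x ("Ultra") and 3x ("UltraPro") products.
menu = [(1.0, 0.0005), (2.0, 0.0090), (3.0, 0.0095)]
print(two_fund_letf_mix(2.5, menu))
```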
35,965
th
Consider multiple experts with overlapping expertise working on a classification problem under uncertain input. What constitutes a consistent set of opinions? How can we predict the opinions of experts on missing sub-domains? In this paper, we define a framework to analyze this problem, termed "expert graphs." In an expert graph, vertices represent classes and edges represent binary opinions on the topics of their vertices. We derive necessary conditions for expert graph validity and use them to create "synthetic experts" which describe opinions consistent with the observed opinions of other experts. We show this framework to be equivalent to the well-studied linear ordering polytope. We show our conditions are not sufficient for describing all expert graphs on cliques, but are sufficient for cycles.
Expert Graphs: Synthesizing New Expertise via Collaboration
2021-07-15 03:27:16
Bijan Mazaheri, Siddharth Jain, Jehoshua Bruck
http://arxiv.org/abs/2107.07054v1, http://arxiv.org/pdf/2107.07054v1
cs.LG
35,966
th
How can a social planner adaptively incentivize selfish agents who are learning in a strategic environment to induce a socially optimal outcome in the long run? We propose a two-timescale learning dynamics to answer this question in both atomic and non-atomic games. In our learning dynamics, players adopt a class of learning rules to update their strategies at a faster timescale, while a social planner updates the incentive mechanism at a slower timescale. In particular, the update of the incentive mechanism is based on each player's externality, which is evaluated as the difference between the player's marginal cost and the society's marginal cost in each time step. We show that any fixed point of our learning dynamics corresponds to the optimal incentive mechanism such that the corresponding Nash equilibrium also achieves social optimality. We also provide sufficient conditions for the learning dynamics to converge to a fixed point so that the adaptive incentive mechanism eventually induces a socially optimal outcome. Finally, we demonstrate that the sufficient conditions for convergence are satisfied in a variety of games, including (i) atomic networked quadratic aggregative games, (ii) atomic Cournot competition, and (iii) non-atomic network routing games.
Inducing Social Optimality in Games via Adaptive Incentive Design
2022-04-12 06:36:42
Chinmay Maheshwari, Kshitij Kulkarni, Manxi Wu, Shankar Sastry
http://arxiv.org/abs/2204.05507v1, http://arxiv.org/pdf/2204.05507v1
cs.GT
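A toy, heavily simplified sketch in the spirit of the dynamics described in the abstract above: players run fast gradient play on their incentivized costs while a planner slowly updates per-player incentives toward each player's externality (the gap between society's marginal cost and the player's own marginal cost). The quadratic aggregative game and all step sizes below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 2.0, n)          # curvature of each player's own cost
b = rng.uniform(2.0, 4.0, n)          # each player's standalone benefit
x = np.zeros(n)                       # player actions (fast timescale)
p = np.zeros(n)                       # planner's per-player incentives (slow timescale)
eta_fast, eta_slow = 0.05, 0.005

def own_marginal_cost(x):
    # Own cost of player i: 0.5*a_i*x_i^2 - b_i*x_i + x_i * sum_j x_j.
    S = x.sum()
    return a * x - b + S + x

def externality(x):
    # Society's marginal cost minus the player's own marginal cost.
    S = x.sum()
    return S - x

for _ in range(20_000):
    x = np.maximum(0.0, x - eta_fast * (own_marginal_cost(x) + p))   # fast gradient play
    p = p + eta_slow * (externality(x) - p)                          # slow incentive update

print("actions:", np.round(x, 3))
print("incentives vs externalities:", np.round(p, 3), np.round(externality(x), 3))
```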
35,967
th
Alice (owner) has knowledge of the underlying quality of her items measured in grades. Given the noisy grades provided by an independent party, can Bob (appraiser) obtain accurate estimates of the ground-truth grades of the items by asking Alice a question about the grades? We address this when the payoff to Alice is additive convex utility over all her items. We establish that if Alice has to truthfully answer the question so that her payoff is maximized, the question must be formulated as pairwise comparisons between her items. Next, we prove that if Alice is required to provide a ranking of her items, which is the most fine-grained question via pairwise comparisons, she would be truthful. By incorporating the ground-truth ranking, we show that Bob can obtain an estimator with the optimal squared error in certain regimes based on any possible way of truthful information elicitation. Moreover, the estimated grades are substantially more accurate than the raw grades when the number of items is large and the raw grades are very noisy. Finally, we conclude the paper with several extensions and some refinements for practical considerations.
A Truthful Owner-Assisted Scoring Mechanism
2022-06-14 17:35:53
Weijie J. Su
http://arxiv.org/abs/2206.08149v1, http://arxiv.org/pdf/2206.08149v1
cs.LG
35,968
th
We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. To construct the procedure, we deal also with the related setups of online linear regression and weak calibration. Finally, we show that smooth calibration yields uncoupled finite-memory dynamics in n-person games, "smooth calibrated learning," in which the players play approximate Nash equilibria in almost all periods (by contrast, calibrated learning, which uses regular calibration, yields only that the time-averages of play are approximate correlated equilibria).
Smooth Calibration, Leaky Forecasts, Finite Recall, and Nash Dynamics
2022-10-13 19:34:55
Dean P. Foster, Sergiu Hart
http://dx.doi.org/10.1016/j.geb.2017.12.022, http://arxiv.org/abs/2210.07152v1, http://arxiv.org/pdf/2210.07152v1
econ.TH
35,969
th
Calibration means that forecasts and average realized frequencies are close. We develop the concept of forecast hedging, which consists of choosing the forecasts so as to guarantee that the expected track record can only improve. This yields all the calibration results by the same simple basic argument while differentiating between them by the forecast-hedging tools used: deterministic and fixed point based versus stochastic and minimax based. Additional contributions are an improved definition of continuous calibration, ensuing game dynamics that yield Nash equilibria in the long run, and a new calibrated forecasting procedure for binary events that is simpler than all known such procedures.
Forecast Hedging and Calibration
2022-10-13 19:48:25
Dean P. Foster, Sergiu Hart
http://dx.doi.org/10.1086/716559, http://arxiv.org/abs/2210.07169v1, http://arxiv.org/pdf/2210.07169v1
econ.TH
35,970
th
In 2023, the International Conference on Machine Learning (ICML) required authors with multiple submissions to rank their submissions based on perceived quality. In this paper, we aim to employ these author-specified rankings to enhance peer review in machine learning and artificial intelligence conferences by extending the Isotonic Mechanism to exponential family distributions. This mechanism generates adjusted scores that closely align with the original scores while adhering to author-specified rankings. Despite its applicability to a broad spectrum of exponential family distributions, implementing this mechanism does not require knowledge of the specific distribution form. We demonstrate that an author is incentivized to provide accurate rankings when her utility takes the form of a convex additive function of the adjusted review scores. For a certain subclass of exponential family distributions, we prove that the author reports truthfully only if the question involves only pairwise comparisons between her submissions, thus indicating the optimality of ranking in truthful information elicitation. Moreover, we show that the adjusted scores improve dramatically the estimation accuracy compared to the original scores and achieve nearly minimax optimality when the ground-truth scores have bounded total variation. We conclude the paper by presenting experiments conducted on the ICML 2023 ranking data, which show significant estimation gain using the Isotonic Mechanism.
The Isotonic Mechanism for Exponential Family Estimation
2023-04-21 20:59:08
Yuling Yan, Weijie J. Su, Jianqing Fan
http://arxiv.org/abs/2304.11160v3, http://arxiv.org/pdf/2304.11160v3
math.ST
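As a minimal sketch of the adjustment step described in the abstract above, restricted to the Gaussian/squared-error special case and using scikit-learn's isotonic regression (the exponential-family generalization in the paper is not reproduced), raw review scores are projected onto the author-specified ranking. The scores and ranking below are invented for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def isotonic_adjust(raw_scores, author_ranking):
    """Project raw review scores onto the author-specified ranking.

    raw_scores: average review score per submission.
    author_ranking: submission indices ordered best-first by the author.
    Returns adjusted scores (in the original order) that are non-increasing
    along the ranking and closest to the raw scores in squared error."""
    raw_scores = np.asarray(raw_scores, dtype=float)
    ordered = raw_scores[author_ranking]                      # scores in best-first order
    # Fit a non-increasing isotonic regression against the rank position.
    fitted = IsotonicRegression(increasing=False).fit_transform(
        np.arange(len(ordered)), ordered)
    adjusted = np.empty_like(raw_scores)
    adjusted[np.asarray(author_ranking)] = fitted
    return adjusted

# Illustrative example: 4 submissions; the author ranks submission 2 best, then 0, 3, 1.
raw = np.array([5.5, 6.0, 4.8, 3.9])
ranking = [2, 0, 3, 1]
print(isotonic_adjust(raw, ranking))
```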