Therefore, (3.1) obviously holds if we use the estimator $\hat{\theta}_{0,l}$. We consider the representation in (3.1) because we rely on more general results derived in Appendix A to analyze the theoretical properties of our new estimation procedure.
where the envelope $F_1 := F_{1,1}^{(1)} F_{1,1}^{(2)} \vee F_{1,2}$ of $\mathcal{F}_1$ satisfies
Lastly, the target function $f_1(\cdot)$ can be estimated by
$$f_1(\cdot) \approx \theta_0^T g(\cdot) = \sum_{l=1}^{d_1} \theta_{0,l}\, g_l(\cdot).$$
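To make this approximation concrete, here is a minimal sketch of estimating the coefficient vector by least squares on a basis expansion; the basis functions $g_l$, the simulated data, and the function name `fit_basis_coefficients` are illustrative assumptions, not the paper's actual estimation procedure.

```python
import numpy as np

def fit_basis_coefficients(x, y, basis_funcs):
    """Least-squares estimate of theta in f_1(x) ~ sum_l theta_l * g_l(x).

    x: (n,) sample points; y: (n,) noisy evaluations of the target function;
    basis_funcs: list of callables g_l. Returns the estimated theta vector.
    """
    G = np.column_stack([g(x) for g in basis_funcs])   # n x d_1 design matrix
    theta_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    return theta_hat

# Toy example with an (illustrative) polynomial basis g_l.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=500)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=500)
basis = [np.ones_like, lambda t: t, lambda t: t**2, lambda t: t**3]
print(fit_basis_coefficients(x, y, basis).round(3))
```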
B
Such an extension should be straightforward: the flexibility of the interval-wise testing approach, and its subsequent extensions to a dependent data setting [39] and to functional surfaces/volumes [see e.g. 15], allows us to solve many inferential problems in geostatistical modelling, such as the ones discussed in the introduction, where we may be interested in performing domain-selective significance tests on model coefficients. Moreover, since our FCSIs in this space-time case would be space-time objects, such a methodology would prove fundamental for domain-selective testing in this proposed extension of our GSA framework.
Methodologies for GSA that are able to deal with functional outputs are present in the literature: [14] propose non-time-varying sensitivity indices for models with functional outputs, based on a PCA expansion of the data. This approach is thus not capable of detecting the presence of time variations in impacts, nor does it address the issue of statistical significance of impacts. [11] proposes a similar approach, without specifying a fixed functional basis, and introduces an innovative functional pick-and-freeze method for estimation. [9] instead use a Bayesian framework, based on adaptive splines, to extract indices that are again non-time-varying. In all the cited works on GSA techniques for functional outputs, uncertainty is not explicitly explored. A very sound framework for the GSA of stochastic models with scalar outputs is provided in [2].
Moreover, even though our GSA methodology was conceived for simulation models, one could also use it alongside Machine Learning methods for functional data [37]. Its role in this context would be to provide a simple yet probabilistically sound way to perform significance testing of input parameters.
Some fundamental pieces of knowledge are still missing: given a dynamic phenomenon such as the evolution of $CO_2$ emissions over time, a policymaker is interested in whether the impact of an input factor varies across time, and how. Moreover, given the presence of a model ensemble, with different modelling choices and thus different impacts of identical input factors across different models, a key piece of information to provide to policymakers is whether the evidence provided by the model ensemble is significant, in the sense that it is ‘higher’ than the natural variability of the model ensemble. In this specific setting we do not want just to provide a ‘global’ idea of significance, but we also want to explore its temporal sparsity (e.g. we would like to know whether the impact of a specific input variable is significant in a given timeframe, but fails to be ‘detectable’ in the model ensemble after a given date). Our aim in the present work is thus threefold: we want to introduce a way to express sensitivity that accounts for time-varying impacts, we want to assess the significance of such sensitivities, and we want to be able to explore the presence of temporal sparsity in that significance.
To our knowledge, there are no methods that deal with GSA of stochastic models with functional or multivariate outputs. Moreover, none of the works related to GSA cited in this paragraph deal with finite changes. For these reasons, to provide methodologies able to tackle the applicative questions mentioned above, we propose a novel vision of GSA for functional outputs and finite changes using concepts developed within FDA. Namely, by exploiting the similarity between the proposed Sensitivity Analysis technique for functional-valued outputs and Functional Linear Models [29], we use a cutting-edge non-parametric testing technique for Functional-on-Scalar Linear Models, called Interval-Wise Testing [24], to address the issue of uncertainty in a statistically sound way.
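As a rough illustration of what a time-varying sensitivity measure can look like, the sketch below regresses a functional (time-indexed) simulation output on scalar inputs separately at each time point. This is only a stand-in for intuition and is not the interval-wise testing procedure of [24]; the simulated data, the design, and the function name `pointwise_sensitivities` are assumptions.

```python
import numpy as np

def pointwise_sensitivities(X, Y):
    """Regress a functional output on scalar inputs separately at each time point.

    X: (n_runs, n_inputs) design of input factors.
    Y: (n_runs, n_times) functional (time-indexed) model output.
    Returns an (n_inputs, n_times) array of time-varying coefficients.
    """
    X1 = np.column_stack([np.ones(len(X)), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)      # one OLS fit per time point
    return beta[1:, :]                                 # drop the intercept row

# Toy example: 200 runs, 3 inputs, output on 50 time points; only the first
# input matters, and only in the second half of the horizon.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
t = np.linspace(0, 1, 50)
Y = np.outer(X[:, 0], (t > 0.5).astype(float)) + rng.normal(scale=0.1, size=(200, 50))
print(pointwise_sensitivities(X, Y)[0].round(2))  # near 0 early, near 1 late
```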
B
Lastly, our work only addresses Bayesian learning with correctly specified agents. There is a large literature on non-Bayesian social learning, surveyed by Golub and
This paper has studied a general model of sequential social learning on observational networks. Our main theme has been how learning turns jointly on preferences and information when there are multiple states. We close by commenting on certain aspects of our approach.
Sadler (2016). There has also been recent interest in (mis)learning among misspecified Bayesian agents; see, for example, Frick
belief convergence. Since expanding observations is compatible with the observational network having multiple components, one cannot expect the social belief to converge even in probability. (Footnote 25: Consider an observational network consisting of two disjoint complete subnetworks: every odd agent observes only all odd predecessors, and symmetrically for even agents. Given any specification in which learning would fail on a complete network—such as the canonical binary state/binary action herding example—there is positive probability of the limit belief among odd agents being different from that among even agents.) Furthermore, there can be a positive probability that the social belief is not eventually even in a neighborhood of the set of stationary beliefs, as already noted.
B
While welfare-weighting is not (yet) standard practice, there are recent examples in this vein of researchers constructing indices to reflect the preferences or objectives of stakeholders. Bhatt et al. (2024), for example, calculate an index of crime-related outcomes in which these are weighted by estimates of their social cost, citing an earlier version of this paper to motivate their approach. And researchers working with the NGO GiveDirectly have elicited preferences over outcomes from the recipients of cash transfers to construct the weights over those outcomes in their subsequent analysis. (Footnote 10: Personal communication, Miriam Laker, 27 March 2024.)
Shapiro (2021), who emphasize the role of audience heterogeneity in the process of scientific communication. It also turns out to yield results that are isomorphic to those obtained earlier. We therefore study it first in Section 4.1 before turning to the issue of aggregation in Section 4.2.
To illustrate the quantitative implications of the model, we apply it to our running example, regulatory approval by the FDA. Applying the formulae implied by the model to published data on the cost structure of clinical trials, we calculate adjusted critical values that are neither as liberal as unadjusted testing, nor as conservative as those implied by some of the procedures in current use. We also explore potential applicability to research in economics, where the use of MHT adjustment is on the rise (see Figure 3), using a unique dataset on the costs of projects submitted to the Abdul Latif Jameel Poverty Action Lab (J-PAL), which we assembled for this purpose.
Planner payoff. The two components of the planner’s utility each relate to a particular aspect of the regulatory approval process example. The first captures the desire to avoid implementing harmful treatments, as for example under the “do no harm” principle. The second captures the longer-term value of scientific research, which is typically important for future studies independent of the immediate regulatory decision made; as the international guidelines for clinical trials state, “the rationale and design of confirmatory trials nearly always rests on earlier clinical work carried out in a series of exploratory studies” (Lewis, 1999). As we will see, these components then justify choosing statistical testing procedures that control size and are well-powered. Section 3.4 discusses the relationship between this approach and other ways of selecting among procedures.
This section discusses the scope for applying and implementing the framework’s implications (summarized in decision-tree format in Figure 2). Section 5.1 considers our running example, the regulatory approval process, Section 5.2 explores the applicability of our approach to economic research, and Section 5.3 considers processes of scientific communication more broadly.
D
Given $e \equiv (P,\omega)$, choose an arbitrary priority profile, and let $IA(e)$ be the allocation defined by the following immediate acceptance algorithm:
Step 1: Each agent applies to his/her favorite object. Each object then accepts the applicant with the highest priority permanently and rejects the other applicants. The agents who are accepted by some objects are removed with their assigned objects.
next priority order. Each agent then tentatively accepts the object that he/she likes best among the new
That is, under the NRM rule, agent $1$ is assigned his/her endowment permanently and the other agents’ assignments are decided by whether agent $1$’s top choice object is his/her endowment or not.
Step t: Each remaining agent applies to his/her $t^{th}$ preferred object. Each remaining object then accepts the applicant with the highest priority permanently and rejects the other applicants.
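A minimal sketch of the immediate acceptance algorithm described in the steps above, assuming strict preferences, strict priorities, and unit-capacity objects; the data structures and the function name are illustrative.

```python
def immediate_acceptance(preferences, priorities):
    """Immediate acceptance algorithm for one-to-one assignment.

    preferences: dict mapping each agent to a list of objects, most preferred first.
    priorities : dict mapping each object to a list of agents, highest priority first.
    Each object has unit capacity. Returns a dict agent -> permanently assigned object.
    """
    assignment = {}
    remaining_agents = set(preferences)
    remaining_objects = set(priorities)
    step = 0
    while remaining_agents and remaining_objects:
        # Step t: each remaining agent applies to his/her t-th preferred object.
        applicants = {}
        for agent in list(remaining_agents):
            if step >= len(preferences[agent]):
                remaining_agents.discard(agent)  # preference list exhausted
                continue
            obj = preferences[agent][step]
            if obj in remaining_objects:
                applicants.setdefault(obj, []).append(agent)
        # Each remaining object permanently accepts its highest-priority applicant
        # and rejects the other applicants.
        for obj, agents in applicants.items():
            winner = min(agents, key=priorities[obj].index)
            assignment[winner] = obj
            remaining_agents.discard(winner)
            remaining_objects.discard(obj)
        step += 1
    return assignment

# Example: IA(e) for three agents and three objects.
prefs = {1: ["a", "b", "c"], 2: ["a", "c", "b"], 3: ["b", "a", "c"]}
prios = {"a": [2, 1, 3], "b": [3, 1, 2], "c": [1, 2, 3]}
print(immediate_acceptance(prefs, prios))  # {2: 'a', 3: 'b', 1: 'c'}
```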
A
This paper contributes to the literature on dynamic ordered logit models. We are aware of only one paper that studies a fixed-$T$ version of this model while allowing for fixed effects. The approach in Muris, Raposo, and Vandoros (2023) builds on methods for dynamic binary choice models in Honoré and Kyriazidou (2000) by restricting how past values of the dependent variable enter the model. In particular, in Muris, Raposo, and Vandoros (2023), the lagged dependent variable $Y_{i,t-1}$ enters the model only via $\mathbbm{1}\{Y_{i,t-1}\geq k\}$ for some known $k$. We do not impose such a restriction, and allow the effect of $Y_{i,t-1}$ to vary freely with its level.
More broadly, this paper contributes to the literature on fixed-$T$ identification and estimation in nonlinear panel models with fixed effects (see Honoré 2002, Arellano 2003, and Arellano and Bonhomme 2011 for overviews). The literature contains results for several models adjacent to ours. For example, the static panel ordered logit model with fixed effects was studied by Das and van Soest (1999), Johnson (2004b), Baetschmann, Staub, and Winkelmann (2015), and Muris (2017); results for static and dynamic binomial and multinomial choice models are in Chamberlain (1980), Honoré and Kyriazidou (2000), Magnac (2000), Shi, Shum, and Song (2018), Aguirregabiria, Gu, and Luo (2021),
Other existing work on dynamic panel models for ordered outcomes uses a random effects approach (Contoyannis, Jones, and Rice 2004, Albarran, Carrasco, and Carro 2019) or requires a large number of time periods for consistency (Carro and Traferri 2014, Fernández-Val, Savchenko, and Vella 2017). An earlier version of Aristodemou (2021) contained partial identification results for a dynamic ordered choice model without logistic errors. Our approach places no restrictions on the dependence between fixed effects and regressors, requires only four periods of data for consistency, and delivers point identification and estimates.
We are interested in regression models for ordinal outcomes that allow for lagged dependent variables as well as fixed effects. In the model that we propose, the ordered outcome depends on a fixed effect, a lagged dependent variable, regressors, and a logistic error term. We study identification and estimation of the finite-dimensional parameters in this model when only a small number ($\geq 4$) of time periods is available.
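For concreteness, one way to write a model with these ingredients (the notation below is ours and need not match the paper's exact formulation) is
$$ Y_{it}=k \quad\Longleftrightarrow\quad \lambda_{k-1} < \alpha_i + X_{it}'\beta + \sum_{j}\gamma_j\,\mathbbm{1}\{Y_{i,t-1}=j\} + \varepsilon_{it} \leq \lambda_k, $$
where $\alpha_i$ is the fixed effect, $-\infty=\lambda_0<\lambda_1<\dots<\lambda_K=\infty$ are threshold parameters, the coefficients $\gamma_j$ let the lagged outcome enter freely at each level $j$, and $\varepsilon_{it}$ follows a standard logistic distribution.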
The challenge of accommodating unobserved heterogeneity in nonlinear models is well understood, especially when the researcher also wants to allow for lagged dependent variables. For example, while recent developments (Kitazawa 2021 and Honoré and Weidner 2020) relax these requirements, early work on the dynamic binary logit model with fixed effects either assumed no regressors, or restricted their joint distribution (cf. Chamberlain 1985 and Honoré and Kyriazidou 2000). The challenge of accommodating unobserved heterogeneity in the ordered logit model seems even greater than in the binary model. The reason is that even the static version of the model is not in the exponential family (Hahn 1997). As a result, one cannot directly appeal to a sufficient statistic approach. An alternative approach in the static ordered logit model is to reduce it to a set of binary choice models (cf. Das and van Soest 1999, Johnson 2004b, Baetschmann, Staub, and Winkelmann 2015, Muris 2017, and Botosaru, Muris, and Pendakur 2023). Unfortunately, the dynamic ordered logit model cannot be similarly reduced to a dynamic binary choice model (see Muris, Raposo, and Vandoros 2023). Therefore, a new approach is needed. The contribution of this paper is to develop such an approach.
B
Following [hoyer09anm], we also demonstrate how our testability result can be applied in empirical practice. Specifically, we show that identification of the causal direction is equivalent to a conditional independence test of covariates and error terms given control variables. We make use of conditional independence tests based on kernel mean embeddings, i.e., maps of probability distributions into reproducing kernel Hilbert spaces (RKHS) \parencite[see][for a survey]{muandetetal16kernel}. Intuitively, this corresponds to approximating conditional distributions with unconditional ones by weighting with an appropriate kernel, and evaluating their covariance in an RKHS.
We consider two formal applications of our testability result. First, we explore testing for causal direction based on conditional independence. As already indicated in the related literature, achieving exact size control can be challenging within this framework, and we provide a detailed discussion of this issue below. Second, we conduct causal discovery, where we remain agnostic about the causal direction and compare test statistics for two rivaling models to gain insight into which one represents the true causal structure \parencite[see][]{peters14}.
Endogeneity is a common threat to causal identification in econometric models. Reverse causality is one source of such endogeneity. We build on work by \textcite{hoyer09anm,mooijetal16}, who have shown that the causal direction between two variables $X$ and $Y$ is identifiable in models with additively separable error terms and nonlinear functional forms. We extend their results by allowing for additional control covariates $W$ and heteroskedasticity w.r.t. them and, thus, provide a heteroskedasticity-robust method to test for reverse causality. In addition, we show how this test can be extended to a bivariate causal discovery algorithm by comparing the test statistics of residual and purported cause of two candidate models. We extend known results on causal identification and causal discovery to settings with heteroskedasticity with respect to additional control covariates.
Algorithm 2 shows the detailed steps of the implementation of the bivariate causal discovery, which is motivated by 2. Steps 1 to 4 are the same as in 1. In step 5, we compute the test statistics corresponding to two conditional independence tests: one for model (1) and one for model (2). The relative size of the resulting test statistics is informative about which model is the correct causal model (Step 6).
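The following is a minimal, simplified sketch of the residual-independence logic behind such a bivariate comparison, using an HSIC-type kernel dependence statistic and omitting the control covariates $W$ and the conditional tests used in the paper; all function names, the regression choice, and the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rbf_gram(x, sigma=None):
    """Gaussian kernel Gram matrix with a median-heuristic bandwidth."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        med = np.median(d2[d2 > 0])
        sigma = np.sqrt(0.5 * med) if med > 0 else 1.0
    return np.exp(-d2 / (2 * sigma**2))

def hsic(x, y):
    """Biased HSIC statistic: small values indicate (near) independence."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(H @ rbf_gram(x) @ H @ rbf_gram(y)) / n**2

def anm_score(cause, effect):
    """Fit effect = f(cause) + residual; measure dependence of residual on cause."""
    f = RandomForestRegressor(n_estimators=200, random_state=0)
    f.fit(cause.reshape(-1, 1), effect)
    resid = effect - f.predict(cause.reshape(-1, 1))
    return hsic(cause, resid)

def discover_direction(x, y):
    """Prefer the direction whose residuals look more independent of the cause."""
    return "X -> Y" if anm_score(x, y) < anm_score(y, x) else "Y -> X"

# Toy check: data generated as Y = X^3 + noise; the forward direction should typically win.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = x**3 + rng.normal(scale=0.5, size=300)
print(discover_direction(x, y))
```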
A
In Economy 1, the population increases rapidly after $t=10$ because the constraint on food supply is relieved. After this steep increase, the growth rate declines and almost stops, because households prefer manufacturing goods to raising children. In contrast, the population of Economy 2 grows steadily. Consequently, the population in Economy 2 exceeds that in Economy 1 at $t=24$.
The remainder of this paper is organized as follows: Section 2 introduces the model. Section 3 discusses the analytical properties of the proposed model.
This section analytically investigates the properties of the model, particularly the population dynamics of the Malthusian state and the effect of a sudden increase in land supply.
This section interprets several historical facts addressed in previous studies through the lens of the model.
Compared with these previous studies, this study identifies the exogenous shocks that transform the economy from stagnation to growth based on economic history studies and quantitatively examines the magnitude of the shocks.
C
The characteristic that we describe as overall reciprocity consists of positive weights on the answers to all of the questions in the reciprocity questionnaire. This includes both questions about positive reciprocity (e.g. “If someone does me a favor, I am prepared to return it”), as well as negative reciprocity (“If someone puts me in a difficult position, I will do the same to them”). Estimates of the interaction between this characteristic and the behavioral utility terms suggest that these individuals are more altruistic in the baseline and behave more in line with generalized reciprocity. At the onset of the treatment, they also shift more weight toward direct reciprocity. However, this shift toward direct reciprocity is potentially offset by a decrease in altruism (measured by additional weight placed on the costs of contributing) coupled with a strong decrease in generalized reciprocity. This suggests that individuals who have a high overall reciprocity attribute use new information to discriminate between collaborators as a mechanism for punishment.
Because there are only three trust questions, the first principal component summarizes most of the information from the trust questionnaire. It places positive weight on the question that involves trust and negative weights on two questions that suggest mistrust. Perhaps surprisingly, this measure of trust is associated with a positive interaction on contribution costs in the baseline, which indicates that individuals who score highly on trust are less altruistic and more careful about where they direct effort in the baseline. This agrees with the results of Glaeser et al. (2000), which suggest that such trust questionnaires predict trustworthy behavior but do not necessarily predict trusting behavior. Further in line with these results is a strong positive interaction of the trust characteristic with generalized reciprocity in the baseline. This suggests that these individuals are trustworthy in that they respond to sharing by others by increasing their own contribution. However, they are less likely to share blindly and trust that others will reciprocate. In the treatment, estimates of the effect of trust are less precise but suggest a reversal of this phenomenon; they trust that others will reciprocate when they know that others will be aware of their sharing behavior. This is captured by the negative estimate of the interaction between trust, the treatment indicator, and contribution costs, together with the positive estimate of the coefficient for the interaction between trust, the treatment indicator, and direct reciprocity. This sheds more light on information as a mechanism driving the mixed results regarding trust and sharing behavior in public goods games, observed in previous work (Anderson et al., 2004).
On the other hand, the second component of reciprocity places positive weight on questions involving positive reciprocity and negative weight on questions involving negative reciprocity or punishment. Individuals who align with this characteristic place much lower weight on the actual cost of contributing, suggesting some altruism. While there is some tradeoff in the treatment, the sign of the aggregate interaction term remains negative in the treatment suggesting that these players are still behaving more altruistically than average. Perhaps surprisingly, there is a strong negative coefficient on the interaction between positive reciprocity and generalized reciprocity in the baseline. These together suggest that their increased sharing is not conditional on having received more benefits from their group, possibly representing a tendency to share in anticipation that others will behave reciprocally. This interpretation is reinforced by a large positive effect of the treatment on generalized reciprocity for this group, offset by a small decrease in direct reciprocity. In other words, these individuals reciprocate by sharing with the entire group, and trusting in the reciprocity of others, rather than by using new information as a tool for punishment.
The two principal components of reciprocity—overall reciprocity and positive reciprocity—also explain much of the variation in behavioral patterns. Overall reciprocity, which captures both a taste for positive and negative reciprocity, has a nuanced effect in the two conditions. In the baseline, subjects with a higher overall reciprocity attribute are more altruistic and exhibit a stronger preference for generalized reciprocity. In the treatment condition, they shift more weight towards direct reciprocity, as expected, but also exhibit less altruism and less concern for generalized reciprocity. One interpretation for these differences is that in the treatment condition, the additional information about sharing decisions of others facilitates negative reciprocity (punishment of those who do not share) more than it enhances positive reciprocity.
B
We remark that this example is just for illustration, showcasing the interpretation of the proposed tensor factor model. Again we note that for the TFM-tucker model, one needs to identify a proper representation of the loading space in order to interpret the model. In Chen et al. (2022), varimax rotation was used to find the sparsest loading matrix representation for model interpretation. For TFM-cp, the model is unique, hence interpretation can be made directly. Interpretation is impossible for the vector factor model in such a high-dimensional case.
In this paper, we propose a tensor factor model with a low rank CP structure and develop its corresponding estimation procedures.
The rest of the paper is organized as follows. After a brief introduction of the basic notations and preliminaries of tensor analysis in Section 1.1, we introduce a tensor factor model with CP low-rank structure in Section 2. The estimation procedures of the factors and the loading vectors are presented in Section 3. Section 4 investigates the theoretical properties of the proposed methods. Section 5 develops some alternative algorithms to tensor factor models, which extend existing popular CP methods to the auto-covariance tensors with cPCA as initialization, and provides some simulation studies to demonstrate the numerical performance of all the estimation procedures. Section 6 illustrates the model and its interpretations in real data applications. Section 7 provides a short concluding remark. All technical details and more simulation results are relegated to the supplementary materials.
In this paper, we develop a new estimation procedure, named as High-Order Projection Estimators (HOPE), for TFM-cp in (1).
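For orientation, a CP-type low-rank tensor factor model for an order-$K$ tensor time series is commonly written in the generic form (our notation, which may differ from the paper's equation (1))
$$ \mathcal{X}_t = \sum_{r=1}^{R} f_{t,r}\, a_{1,r}\circ a_{2,r}\circ\cdots\circ a_{K,r} + \mathcal{E}_t, $$
where the $f_{t,r}$ are latent factor series, each $a_{k,r}$ is a loading vector along mode $k$, $\circ$ denotes the outer product, and $\mathcal{E}_t$ is a noise tensor.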
In this paper, we investigate a tensor factor model with a CP type low-rank structure, called TFM-cp. Specifically, let $\mathcal{X}_t$ be an order-$K$
A
Perfect implementation (Izmalkov et al., 2005, 2011) seeks implementations that do not rely on trusted mediators, but rather rely on simple technologies that enable verification of what was learned—like sealed envelopes. The construction allows for particular elicitation technologies that allow, e.g., envelopes-inside-envelopes. Although our general set up can accommodate such technologies, we focus on more minimal assumptions about the technology available to the designer (e.g. the individual elicitation technology), and offer a privacy order rather than an implementation concept.
There are two important precursors to contextual privacy in the theory of decentralized computation: unconditional full privacy and perfect implementation. Contextual privacy under the individual elicitation technology parallels the concept of unconditional full privacy for decentralized protocols (Chor and Kushilevitz, 1989; Brandt and Sandholm, 2005, 2008). Unconditional full privacy requires that the only information revealed through a decentralized protocol is the information contained in the outcome—this notion of privacy is unconditional in that it does not condition on the presence of a mediator or on computational hardness assumptions. It has been applied to an auction domain (Brandt and Sandholm, 2008), and a voting domain (Brandt and Sandholm, 2005), stressing impossibility results. Our definition of contextual privacy brings unconditional full privacy into a framework amenable to economic design and extends it in several ways, highlighting that design for privacy involves tradeoffs regarding whose privacy to protect.
In this paper, we study how mechanism designers can limit the superfluous information they learn. In our set up, when a designer commits to a social choice rule, they also choose a dynamic protocol for eliciting agents’ information (or “types”). These dynamic protocols allow the designer to learn agents’ private information gradually, ruling out possible type profiles until they know enough to compute the outcome of the rule. The key idea we introduce is a contextual privacy violation. A protocol produces a contextual privacy violation for a particular agent if the designer learns a piece of their private information that may be superfluous, i.e. it might not be necessary for computing the outcome. We study protocols that are on the frontier of contextual privacy and implementation—that is, we study a setwise order based on contextual privacy violations, and look for maximal elements in this order. For some choice rules, it will be possible to find protocols that produce no violations for any agent at any type profile—we call these protocols contextually private.
Beyond unconditional full privacy and perfect implementation lies an extensive literature on privacy preserving protocols for auctions and allocation. The literature on cryptographic protocols for auctions, going back to Nurmi and Salomaa (1993) and Franklin and Reiter (1996) is too vast to summarize here—the main point is that there are many cryptographic protocols that do not reveal any private information to a designer. Such protocols allow participants to jointly compute the outcome without relying on any trusted third party while usually relying on computational hardness assumptions. Compared to this literature, contextual privacy makes explicit the social and technological environments in which many designers operate: when arbitrary cryptographic protocols are not available, we need some other privacy desideratum to guide design.
An elicitation technology represents how the designer can learn about agents’ messages, and by inverting their strategies, their private information. One possible elicitation technology is a “trusted third party.” If there is a trusted third party, the designer can delegate information retrieval to this third party, and ask the third party to report back only what is needed to compute the outcome. So, under this trusted third party elicitation technology, all choice rules can trivially eliminate all contextual privacy violations. Cryptographic techniques like secure multi-party computation and zero-knowledge proofs are elicitation technologies that similarly trivialize contextual privacy. (Footnote 3: For a survey of cryptographic protocols for sealed-bid auctions, see Alvarez and Nojoumian (2020).)
C
The above argument can apply only when $y^j$ is differentiable at $p^*$. In fact, our Assumption P is too weak and can only establish the continuity of $y^j$. Hence, we use an approximation by a mollifier, a device frequently used in Fourier analysis. As is well known, when a convex function is approximated by a convolution with a mollifier, the approximating function is a smooth convex function. Applying this approximation to the profit function $\pi^j(p)$, we obtain an ‘approximated’ profit function, and using this function, we construct a smooth approximation of the excess demand function whose derivative is negative definite on the space of all vectors normal to $p^*$. Applying the above arguments, we obtain the desired result.
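For readers unfamiliar with mollification, the standard construction is (notation ours): take $\rho\in C_c^{\infty}(\mathbb{R}^L)$ with $\rho\geq 0$ and $\int\rho=1$, set $\rho_\varepsilon(x)=\varepsilon^{-L}\rho(x/\varepsilon)$, and define
$$ \pi^j_\varepsilon(p) = (\pi^j * \rho_\varepsilon)(p) = \int \pi^j(p-q)\,\rho_\varepsilon(q)\,dq, $$
which is smooth, convex (as a nonnegative average of the convex functions $p\mapsto\pi^j(p-q)$), and converges to $\pi^j$ locally uniformly as $\varepsilon\to 0$.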
Most studies on the uniqueness of equilibrium do not use primitive assumptions on the economy itself, but rather make assumptions on what is derived from the economy. For example, Arrow et al. (1959) found that if the excess demand function satisfies gross substitution, then the equilibrium price is unique up to normalization. Mas-Colell (1991) summarized several classical results in this area. He argued that gross substitution of the excess demand function is no longer a sufficient condition for the uniqueness of the equilibrium price when production is introduced. Instead, his paper focused on the weak axiom of revealed preference for the excess demand function, and pointed out that the set of equilibrium prices is convex when this condition holds. If the economy is regular, then the set of normalized equilibrium prices becomes discrete, but any discrete convex set is a singleton. Thus, the uniqueness of the equilibrium price is obtained in this case. On the other hand, if the weak axiom of revealed preference is not satisfied, an economy with multiple equilibrium prices can easily be created by introducing a production technology with constant returns to scale. In this sense, Mas-Colell concluded that the weak axiom of revealed preference is approximately “necessary and sufficient” for the equilibrium price to be unique. This argument was discussed again in Chapter 17 of Mas-Colell et al. (1995).
However, in order to perform this approximation properly, it must first be shown that the set of normalized equilibrium prices is discrete. Local stability is crucial in demonstrating this fact. If every equilibrium price is locally stable, the set of normalized equilibrium prices is discrete. Therefore, we first show that every equilibrium price is locally stable, and then use the above logic to derive the result. This is why local stability is necessary for the derivation of our result.
So, why are all equilibrium prices locally stable in a quasi-linear economy? The answer is obtained from the theory of no-trade equilibria. Balasko (1978, Theorem 1) showed that in a pure exchange economy, any no-trade equilibrium price is locally stable. This result was in fact substantially shown in Kihlstrom et al. (1976, Lemma 1). Namely, they showed that if the initial endowments constitute the equilibrium allocation, then (11) holds for the corresponding equilibrium price. If the economy is not quasi-linear, (11) may not hold at some equilibrium price because the income effect that arises from the gap between the initial endowments and the equilibrium allocation has a non-negligible influence. In a quasi-linear economy, however, the income effect affects only the numeraire good, and when we aggregate the excess demand functions of the consumers, the deviation from (11) is equal to the value of the excess demand function divided by $p_L$ (see Lemma 6 and (18) in Step 2 of the proof of Theorem 1). Thus, this effect is canceled out when the price is an equilibrium price. As a result, the property that holds at a no-trade equilibrium price is recovered at any equilibrium price.
The purpose of this paper is to extend this result to a quasi-linear economy with more than two commodities. That is, the aim of this study is to determine whether the above result holds when considering a general equilibrium model in which the utility remains quasi-linear and the dimension of the consumption space may be greater than two. The results are as follows: first, the equilibrium price is unique up to normalization in an economy where all consumers have quasi-linear utility functions. Second, this equilibrium price is locally stable with respect to the tâtonnement process (Theorem 1). As expected from partial equilibrium theory, if the number of commodities is two, the equilibrium price is globally stable (Proposition 3). However, if the number of commodities is greater than two, then the global stability is not derived in this paper. This is related to the inherent difficulty of quasi-linear economies: see our discussion in subsection 3.1.
B
Therefore, the partial sums of the infinite series $\sum_{n=1}^{\infty} D_n$ form a martingale bounded in $L^2$, and therefore the series converges almost surely.
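The step from $L^2$-boundedness to almost sure convergence is the standard $L^2$ martingale convergence theorem: writing $M_N=\sum_{n=1}^{N}D_n$,
$$ \sup_N \mathbb{E}\!\left[M_N^2\right]<\infty \;\Longrightarrow\; M_N \text{ converges almost surely (and in } L^2\text{)}. $$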
If, e.g., $A$ is incorrectly chosen as the deviator, a similar computation to earlier yields that the partial sum of the series (5) up to $N$ is $\Omega(\log\log N)$, which occurs with a probability approaching $0$ as $N\to\infty$.
so we briefly outline the changes. For sufficiently large $N$, the probability of failure at Step 1 is at most $\epsilon/3$. Similarly, the probability of failure at Step 2 is also at most $\epsilon/3$ (using a second moment bound and Markov’s inequality). If both Steps 1 and 2 are passed, then
If we only have a finite number of samples, say $N$, we can guarantee a probability of failure $\epsilon \to 0$ as $N \to \infty$ by modifying the test as follows.
For the same reason as in the original argument, a player will be chosen in Step 3 with probability approaching $1$ as $N\to\infty$ (independently of $\epsilon$), and adding the sums yields (up to a constant)
$$\sum_{n=1}^{N}\frac{s_{n}^{2}-s_{n-1}^{2}}{n\log n}\approx\sum_{n=1}^{N}\frac{s_{n}^{2}}{n^{2}\log n}=o(\log\log N).$$
C
[27] Pareto, V.: Manuale di Economia Politica con una Introduzione alla Scienza Sociale. Societa Editrice Libraria, Milano (1906)
Finally, although our paper only considers the classical consumer theory, there are several new consumer theories treating nonlinear or stochastic budget inequalities. See, for example, Shiozawa (2016) for the former, and Allen et al. (2023) for the latter. Our study does not provide a solution to the estimation problem in those theories; this is left as a future task.
[31] Shiozawa, K.: Revealed preference test and shortest path problem; graph theoretic structure of the rationalization test. J. Math. Econ. 67, 38-48 (2016)
[13] Hosoya, Y.: The relationship between revealed preference and the Slutsky matrix. J. Math. Econ. 70, 127-146 (2017)
[12] Hosoya, Y.: A Theory for estimating consumer’s preference from demand. Adv. Math. Econ. 18, 33-55 (2015)
B
If $l_X(z)\leq l_{X^{\prime}}(z)$ for some vector of shares $z$, a larger proportion of the population commands the same share of resources in allocation $X^{\prime}$ than in allocation $X$. Lorenz dominance of allocation $X$ over allocation $X^{\prime}$ (in the sense of definition 4) implies that the relation $l_X(z)\leq l_{X^{\prime}}(z)$ holds for each resource share vector $z$ (see proposition 9 in the appendix).
(1) The $\alpha$-Lorenz curves $\mathcal{C}^{\alpha}_{X}$ are the level curves of a bivariate cdf, hence they are downward sloping, non-decreasing in $\alpha$, and they do not cross. In addition, (2) the $\alpha$-Lorenz curves $\mathcal{C}^{\alpha}_{X}$ are convex if
To visualize Lorenz dominance, we define an Inverse Lorenz Function at a given vector of resource shares as the fraction of the population that cumulatively holds those shares. It is characterized by the cumulative distribution function of the image of a uniform random vector by the Lorenz map. Hence, it is a cumulative distribution function by construction, like the univariate inverse Lorenz curve. In two dimensions, the $\alpha$-level sets of this cumulative distribution function, which we call $\alpha$-Lorenz curves, are non-crossing, downward-sloping curves that shift to the south-west when inequality increases, as defined by the Lorenz ordering. For the cases where allocations are not ranked by the Lorenz inequality dominance ordering, we propose a family of multivariate S-Gini coefficients based on our vector Lorenz map, with the flexibility to entertain different tastes for inequality in different dimensions. Finally, we propose an illustration with an analysis of income-wealth inequality in the United States between 1989 and 2022.
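For intuition, the univariate objects that the vector Lorenz map and the $\alpha$-Lorenz curves generalize can be computed in a few lines; the sketch below is only the familiar one-dimensional Lorenz curve and Gini coefficient, not the multivariate construction or the S-Gini family proposed here.

```python
import numpy as np

def lorenz_curve(x):
    """Empirical univariate Lorenz curve of a nonnegative sample x.

    Returns a grid p in [0, 1] and L(p), the share of the total held
    by the poorest fraction p of the population.
    """
    x = np.sort(np.asarray(x, dtype=float))
    cum = np.cumsum(x)
    p = np.arange(1, len(x) + 1) / len(x)
    L = cum / cum[-1]
    return np.insert(p, 0, 0.0), np.insert(L, 0, 0.0)

def gini(x):
    """Gini coefficient: twice the area between the diagonal and the Lorenz curve."""
    p, L = lorenz_curve(x)
    return 1.0 - 2.0 * np.trapz(L, p)

# Example: a more unequal sample has a lower-lying Lorenz curve and a larger Gini.
rng = np.random.default_rng(0)
print(gini(rng.lognormal(sigma=0.5, size=10_000)),
      gini(rng.lognormal(sigma=1.5, size=10_000)))
```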
The $\alpha$-Lorenz curves provide a visualization of Lorenz dominance. We can compare the inequality of different allocations based on the shape and relative positions of their respective $\alpha$-Lorenz curves.
In case of bivariate allocations, the latter can be easily visualized on $[0,1]^2$ through the relative positions of the level sets of the Inverse Lorenz Function, which we call $\alpha$-Lorenz curves, denoted
D
The last statement of Theorem 2 gives conditions for the population quantile mapping $q(P(\beta,\cdot))$ to be a contraction with Lipschitz constant less than $\bar{\kappa}\in(0,1]$. In particular, $q(P(\beta,\cdot))$ is a contraction with Lipschitz constant $\bar{\kappa}$ if all agents $i$ in the population have expected scores $\omega_i(s;\beta)$ that are contractions in $s$ with Lipschitz constant less than $\bar{\kappa}$. Importantly, these conditions are sufficient, but not necessary. To see this, we note that the derivative of $q(P(\beta,s))$ with respect to $s$ is a convex combination of the derivatives of $\omega_i(s;\beta)$ with respect to $s$. Since a univariate differentiable function is a contraction with Lipschitz constant less than $\bar{\kappa}$ if and only if its derivative has magnitude less than $\bar{\kappa}$, our conditions imply that $q(P(\beta,\cdot))$ must have derivative with magnitude less than $\bar{\kappa}$ and thus be a contraction with Lipschitz constant less than $\bar{\kappa}$. Nevertheless, it is possible for $q(P(\beta,\cdot))$ to be a contraction even if $\omega_i(s;\beta)$ is not a contraction for some agents $i$.
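To illustrate the role of the contraction condition, the toy sketch below runs fixed-point iteration on a hypothetical stand-in for $q(P(\beta,\cdot))$: a convex combination of affine agent-level maps, each with slope of magnitude below one. All numbers and names are made up for illustration.

```python
import numpy as np

def fixed_point_iterate(f, s0, tol=1e-10, max_iter=1000):
    """Iterate s_{t+1} = f(s_t); converges whenever f is a contraction."""
    s = s0
    for _ in range(max_iter):
        s_next = f(s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s

# Hypothetical stand-in for the population quantile map q(P(beta, .)):
# a convex combination of agent-level maps omega_i(s), each with |slope| < 1.
slopes = np.array([0.3, -0.5, 0.7])
intercepts = np.array([1.0, 0.2, -0.4])
weights = np.array([0.5, 0.3, 0.2])

def q_map(s):
    return float(weights @ (intercepts + slopes * s))

# The derivative of q_map is weights @ slopes, a convex combination of the
# individual slopes, so its magnitude stays below max|slope| < 1: a contraction.
s_star = fixed_point_iterate(q_map, s0=0.0)
print(s_star, weights @ slopes)
```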
Notably, the bound in the concentration inequality does not depend on the particular choice of $s$. We use this lemma to characterize the behavior of the finite system for sufficiently large iterates $t$ and number of agents $n$. Theorem 4 shows that under the same conditions that enable fixed-point iteration in the mean-field model to converge to the mean-field equilibrium threshold $s(\beta)$ (3.1), sufficiently large iterates of the stochastic fixed-point iteration in the finite model (4.1) will lie in a small neighborhood about $s(\beta)$ with high probability. We can view these iterates as stochastic equilibria of the finite system.
Understanding equilibrium behavior of our model in the finite regime is of interest because our ultimate goal is to learn optimal equilibrium policies in finite samples. In this section, we instantiate the model from Section 2 in the regime where a finite number of agents are considered for the treatment. A difficulty of the finite regime is that deterministic equilibria do not exist. Instead, we give conditions under which stochastic equilibria arise and show that, in large samples, these stochastic equilibria sharply approximate the mean-field limit derived above.
In Section 3, we give conditions on our model that guarantee existence and uniqueness of equilibria in the mean-field regime, the limiting regime where at each time step, an infinite number of agents are considered for the treatment. Furthermore, we show that under additional conditions, the mean-field equilibrium arises via fixed-point iteration. In Section 4, we translate these results to the finite regime, where a finite number of agents, sampled i.i.d. at each time step, are considered for treatment. We show that as the number of agents grows large, the system converges to the equilibrium of the mean-field model in a stochastic version of fixed-point iteration.
Recall that the decision maker’s objective, as outlined in Section 2, is to find a selection criterion $\beta$ that maximizes the equilibrium policy value $V_{\text{eq}}(\beta)$. This is a sensible goal in settings where an equilibrium exists and is unique for each selection criterion $\beta$ in consideration. In this section, we characterize the equilibrium in the mean-field regime.
B
In principle, we could also consider other assignment mechanisms such as simple random sampling. We decided to focus on stratified block randomization because it is prevalent in practice and our results show that it dominates other mechanisms for which $\tau(s)\neq 0$ in terms of asymptotic efficiency.
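A minimal sketch of stratified block randomization given stratum labels, with a treated fraction pi within each stratum; the interface and names are our own assumptions, not the paper's implementation.

```python
import numpy as np

def stratified_block_randomization(strata, pi=0.5, seed=None):
    """Assign treatment within each stratum so that a fraction pi is treated.

    strata: array of stratum labels, one entry per unit (or cluster).
    Returns a 0/1 treatment indicator aligned with `strata`.
    """
    rng = np.random.default_rng(seed)
    strata = np.asarray(strata)
    assignment = np.zeros(len(strata), dtype=int)
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        n_treat = int(np.floor(pi * len(idx)))  # rounds down for odd stratum sizes
        treated = rng.choice(idx, size=n_treat, replace=False)
        assignment[treated] = 1
    return assignment

# Example: 10 clusters in 2 strata, half of each stratum treated.
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(stratified_block_randomization(labels, pi=0.5, seed=0))
```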
Finally, treatment assignment $A_g$ follows a covariate-adaptive randomization (CAR) mechanism based on stratified block randomization with $\pi=\frac{1}{2}$ within each stratum. Concretely, we stratify the observations as follows:
Imbens, 2017; Su and Ding, 2021), to our knowledge we are the first to establish results for these estimators when treatment assignment is performed using a stratified covariate-adaptive randomization procedure. As in Bugni
et al., 2023) that differ in the way they aggregate, or average, the treatment effect across units. They differ, in particular, according to whether the units of interest are the clusters themselves or the individuals within the cluster. The first of these parameters takes the clusters themselves as the units of interest and identifies an equally-weighted cluster-level average treatment effect. The second of these parameters takes the individuals within the clusters as the units of interest and identifies a size-weighted cluster-level average treatment effect. When individual-level average treatment effects vary with cluster size (i.e., cluster size is non-ignorable) and cluster sizes are heterogeneous, these two parameters are generally different, though, as discussed in Remark 2.3, they coincide in some instances. Importantly, we show that the estimand associated with the standard difference-in-means estimator is a sample-weighted cluster-level average treatment effect, which cannot generally be interpreted as an average treatment effect for either the clusters themselves or the individuals within the clusters. We show, however, in Section 2.2, that this estimand can equal the size-weighted or the equally-weighted cluster-level average treatment effect for some very specific sampling designs. We argue that a clear description of whether the clusters themselves or the individuals within the clusters are of interest should therefore be at the forefront of empirical practice, yet we find that such a description is often absent. Indeed, we surveyed all articles involving a cluster randomized experiment published in the American Economic Journal: Applied Economics from 2018 to 2022. We document our findings in Appendix A.3.
The model in (27), as well as the two CAR designs, follow closely the original designs for covariate-adaptive randomization with individual-level data considered in Bugni
D
The simulation-based analysis in the previous section demonstrates the importance of respecting inter-period dependencies and basing the decision on replenishment order quantities on probabilistic information instead of expected values. However, in practice, the underlying distributions need to be estimated from historical data, typically making use of features (covariates) to arrive at time-varying predictive distributions. Thus, we now use the setting and data from a European e-grocery retailer to illustrate the analysis process in a situation where we need to integrate both parameter estimation and optimisation. In the following, we first give an overview of the data set available to us, followed by the case-specific tuning of the lookahead policy introduced in Section 3.5. We compare the results under our proposed policy to those obtained when applying the decision rule as currently implemented by the retailer.
On the other hand, while there are opportunities resulting from the control the retailer exerts over the fulfilment process, picking and delivery increase the time between the instance a replenishment order for an SKU is placed and the final availability to the customer. This longer delivery time reduces the forecasting accuracy of crucial variables, such as the demand distribution for the period under consideration. In particular, features used for the forecast of this distribution, such as the known demand, are less informative more days in advance. Using data from the e-grocery retailer under consideration in this paper, Figure 1 displays the mean absolute percentage forecast error as a function of the lead time when applying a linear regression for all SKUs within the categories fruits and vegetables in the demand period January 2019 to December 2019. We observe that the mean absolute percentage error strongly increases for longer lead times, thus implying a decrease in forecast precision.
Figure 1: Mean absolute percentage error (MAPE) as a function of the delivery time of the e-grocery retailer for all SKUs within the categories fruits and vegetables in the demand period January 2019 to December 2019.
The data set on the attended home delivery service provided by the e-grocery retailer covers demand periods of six different local fulfilment centres from January 2019 to December 2019, i.e. before the beginning of the Covid-19 pandemic. One observation here equals one demand period t𝑡titalic_t, i.e. one day of delivery. We consider four SKUs within the category fruits and vegetables, namely mushrooms, grapes, organic bananas, and lettuce. For illustration, Figure 5 displays the demand for the SKU mushrooms in 2019 for one selected fulfilment centre. We find recurring peaks on Mondays, but do not observe any notable trend or seasonality. The data set includes features to be used for the demand forecast as well as the (uncensored) realised demand in this period. For a more detailed description, we refer to Ulrich et al., (2021).
For each source of uncertainty and each SKU, we use the previous six months of data to estimate the associated probability distributions and incorporate them into the lookahead policy for an evaluation period of one month. For example, we train on data from January to June 2019 to forecast demand, spoilage, and supply shortages in July 2019. Due to the limited number of demand periods during six months, we aggregate historical data on spoilage and supply shortage over the fulfilment centres to ensure stable estimations.
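The rolling estimation scheme described above (train on the previous six months, evaluate on the following month) can be organized as in the sketch below; the column names and data layout are assumptions for illustration.

```python
import pandas as pd

def rolling_train_eval_splits(df, date_col="date", train_months=6):
    """Yield (train, eval) pairs: six months of history, then the next month."""
    month = df[date_col].dt.to_period("M")
    months = month.sort_values().unique()
    for i in range(train_months, len(months)):
        train = df[month.isin(months[i - train_months:i])]
        evaluation = df[month == months[i]]
        yield train, evaluation

# Example usage with a hypothetical demand table:
# df = pd.read_csv("demand.csv", parse_dates=["date"])
# for train, evaluation in rolling_train_eval_splits(df):
#     ...fit demand, spoilage, and shortage distributions on `train`,
#     ...then evaluate the lookahead policy on `evaluation`.
```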
C
The power for detecting $p$-hacking crucially depends on whether the researchers use a thresholding or a minimum approach to $p$-hacking, the econometric method, the fraction of $p$-hackers, $\tau$, and the distribution of $h$. When researchers $p$-hack using a threshold approach, the $p$-curves are discontinuous at the threshold, may violate the upper bounds, and may be non-monotonic. Thus, tests exploiting these testable restrictions may have power when the fraction of $p$-hackers is large enough.
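To see where the discontinuity at the threshold comes from, here is a stylized simulation of threshold $p$-hacking; it is not the paper's data-generating process, and the normal test statistic, the choice of $k$ draws, and the reporting rule are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def simulated_pcurve(n_studies, h=1.0, tau=0.5, k=3, seed=0):
    """Stylized threshold p-hacking: a fraction tau of researchers draw up to k
    z-statistics and stop at the first two-sided p-value below 0.05; everyone
    else reports a single draw. Returns the reported p-values."""
    rng = np.random.default_rng(seed)
    reported = []
    for _ in range(n_studies):
        tries = k if rng.random() < tau else 1
        for _ in range(tries):
            p = 2 * norm.sf(abs(rng.normal(loc=h)))
            if p < 0.05:
                break
        reported.append(p)
    return np.array(reported)

# Mass piles up just below 0.05, producing the discontinuity discussed above.
p = simulated_pcurve(100_000)
print(np.histogram(p, bins=[0.0, 0.04, 0.05, 0.06, 0.10])[0])
```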
The impact of publication bias on power depends on the testable restrictions that the tests exploit. Both types of publication bias can substantially increase the power of the CSUB and the CS2B test, which exploit upper bounds. This is expected since both forms of publication bias favor small $p$-values, which leads to steeper $p$-curves that are more likely to violate the upper bounds, as discussed in Section 2.3. The difference in power with and without publication bias is particularly stark under the minimum approach to $p$-hacking: publication bias can lead to nontrivial power even when the CSUB and the CS2B test have very low power for detecting $p$-hacking alone.
Finally, the results in Appendix E show that the larger $K$ — the more degrees of freedom the researchers have when $p$-hacking — the higher the power of the CSUB and CS2B test.
Under the minimum approach, the power curves of the CSUB and CS2B tests are very similar, suggesting that the power of the CS2B test comes mainly from using upper bounds. This finding demonstrates the importance of exploiting upper bounds in addition to monotonicity and continuity restrictions in practice. Figure 11 further shows that the power of the CSUB and the CS2B test may not be monotonic over $h\in\{0,1,2\}$. On the one hand, for large $h$, there are more $p$-values close to zero, where the upper bounds are more difficult to violate. On the other hand, the effective sample size increases with $h$, leading to more power.
The CS2B test, which exploits monotonicity restrictions and bounds, has the highest power overall. However, this test may exhibit some small size distortions when the effective sample size is small (e.g., lag length selection with $h=0$). Among the tests that exploit monotonicity of the entire $p$-curve, the CS1 test typically exhibits higher power than the LCM test. The LCM test can exhibit non-monotonic power curves because the test statistic converges to zero in probability for strictly decreasing $p$-curves (Beare and Moon, 2015).
D
We find a positive and statistically significant effect of external debt on GHG emissions when we take into account the potential endogeneity problems. A 1 pp. rise in external debt causes, on average, a 0.5% increase in GHG emissions.
In exploring a possible mechanism of action, we find that external debt is negatively related to an indicator of policies associated with environmental sustainability. This may suggest that when external debt increases, governments are less able to enforce environmental regulations because their main priority is to increase the tax base or because they are captured by the private sector and prevented from tightening such regulations, and therefore could explain the positive association between external debt and environmental degradation.
In exploring a possible mechanism of action, we find that external debt is negatively related to an indicator of policies associated with environmental sustainability. This may suggest that when external debt increases, governments are less able to enforce environmental regulations because their main priority is to increase the tax base to pay increasing debt services or because they are captured by the private sector and prevented from tightening such regulations, and therefore could explain the positive association between external debt and environmental degradation.
In exploring a possible mechanism of action, we find that external debt is negatively related to an indicator of policies associated with environmental sustainability. This may suggest that when external debt increases, governments are less able to enforce environmental regulations because their main priority is to increase the tax base or because they are captured by the private sector and prevented from tightening such regulations, and therefore could explain the positive association between external debt and environmental degradation.
As discussed above, there is a plausible main channel through which external debt could affect GHG emissions. External debt-driven economic growth, e.g., due to investment, could increase energy consumption and, thus, environmental pollution. However, in our baseline estimates we find a positive effect of external debt on GHG emissions even controlling for GDP growth. Therefore, external debt must affect emissions through another mechanism. One possible mechanism could be that, when external debt increases, governments are less able to enforce environmental regulations because their main priority is to increase the tax base to pay increasing debt services or because they are captured by the private sector and are prevented from tightening such regulations. Similarly, Woolfenden, (2014) argues that debt is preventing the phase-out of fossil fuels in Global South countries. The pressure to repay debt forces them to continue to invest in fossil fuel projects to repay loans from richer countries and financial institutions.
A
Appendix Table A.4 compares the performance of our baseline SyNBEATS estimator to these variants. Unsurprisingly, SyNBEATS performs the best, but the performance degradation from excluding horizontal information is dramatically larger than from excluding vertical information. This suggests that much of the performance gains in SyNBEATS might be traced back to its ability to efficiently learn the time series structure of the treated unit’s outcomes. Without vertical information, SyNBEATS performs similar to or slightly better than TWFE or SC, whereas it performs substantially worse than competing estimators for the longer-term predictions. These results suggest that incorporating the post-treatment information from the control units becomes increasingly important as the prediction horizon grows and the accuracy of the forecast degrades.
To investigate the importance of the model architecture upon which SyNBEATS relies, Appendix Table A.5 compares SyNBEATS to alternative methods for imputing counterfactual treated unit outcomes based on vertical and horizontal information. SyNBEATS substantially outperforms “off-the-shelf” neural network and random forest models. A possible explanation is that SyNBEATS’ residual structure allows for efficient mining of the signal without much data, in a similar manner to boosted trees. In a highly overparametrized regime where the number of model parameters far exceeds training data available, SyNBEATS’ residual-based architecture allows the model to make full use of the limited data available and converge quickly in a setting where other flexible estimation methods typically suffer. In contrast, the performance gains are present, but less dramatic, when SyNBEATS is compared to a simple linear model. We thus conclude that SyNBEATS’ strong observed performance is due to both its model architecture and its ability to use this architecture to exploit information from horizontal and vertical sources efficiently.
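To make the residual-block intuition concrete, here is a heavily simplified PyTorch sketch in the spirit of N-BEATS; it is not the authors’ SyNBEATS implementation, and the dimensions and block count are illustrative. Each block explains part of the lookback window (the backcast) and contributes a partial forecast, with the unexplained residual passed to the next block.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One doubly-residual block: maps the lookback window to a backcast
    (what it explains) and a forecast (its contribution to the prediction)."""
    def __init__(self, lookback, horizon, width=64):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(lookback, width), nn.ReLU(),
                                    nn.Linear(width, width), nn.ReLU())
        self.backcast = nn.Linear(width, lookback)
        self.forecast = nn.Linear(width, horizon)

    def forward(self, x):
        h = self.hidden(x)
        return self.backcast(h), self.forecast(h)

class ResidualForecaster(nn.Module):
    """Simplified N-BEATS-style stack: each block sees the residual left over
    by the previous blocks; block forecasts are summed."""
    def __init__(self, lookback, horizon, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(Block(lookback, horizon) for _ in range(n_blocks))

    def forward(self, x):
        residual, forecast = x, 0.0
        for block in self.blocks:
            backcast, block_forecast = block(residual)
            residual = residual - backcast
            forecast = forecast + block_forecast
        return forecast

# Illustrative dimensions: predict one step ahead from a 12-period lookback window.
model = ResidualForecaster(lookback=12, horizon=1)
y_hat = model(torch.randn(8, 12))  # batch of 8 lookback windows
```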
In this section, we investigate the source of SyNBEATS’ strong observed performance relative to alternative estimators. SyNBEATS differs from other estimators in two important ways: (1) its use of both horizontal and vertical information to inform its imputation, and (2) its use of the N-BEATS residual block architecture. But, unlike many common estimators, SyNBEATS is not guaranteed to be consistent; it could be that its performance gains in terms of RMSE stem from providing concentrated, but potentially biased estimates. We explore each of these possibilities in turn.
Appendix Table A.4 compares the performance of our baseline SyNBEATS estimator to these variants. Unsurprisingly, SyNBEATS performs the best, but the performance degradation from excluding horizontal information is dramatically larger than from excluding vertical information. This suggests that much of the performance gains in SyNBEATS might be traced back to its ability to efficiently learn the time series structure of the treated unit’s outcomes. Without vertical information, SyNBEATS performs similar to or slightly better than TWFE or SC, whereas it performs substantially worse than competing estimators for the longer-term predictions. These results suggest that incorporating the post-treatment information from the control units becomes increasingly important as the prediction horizon grows and the accuracy of the forecast degrades.
Although the N-BEATS algorithm has been shown to excel at a range of forecasting tasks, an important concern is whether its performance will be as strong when applied to the relatively small panel data sets typically employed in social science research. With limited data, simpler methods like synthetic controls (SC) or two-way fixed effects (TWFE) may yield more reliable causal estimates. To assess the suitability of SyNBEATS for causal inference with panel data, we compare it to existing alternatives across two canonical panel data settings. Specifically, we contrast performance in data that has been used to estimate the effect of a cigarette sales tax in California (Abadie et al., 2010) and of German reunification on the West German economy (Abadie et al., 2015). In both of these settings, we find that SyNBEATS outperforms canonical methods such as SC and TWFE estimation. In addition, we compare the performance of these models in estimating the impact of simulated events on abnormal returns in publicly traded firms (Baker and Gelbach, 2020). In this setting, where historical values would not be expected to provide much information about future outcomes, SyNBEATS only marginally improves performance relative to the other estimators. We also compare SyNBEATS to two recently proposed causal inference methods for panel data settings: matrix completion (MC) (Athey et al., 2021) and synthetic difference-in-differences (SDID) (Arkhangelsky et al., 2021). In the three settings we consider, we find that SyNBEATS generally achieves comparable or better performance compared to synthetic difference-in-differences, and significantly outperforms matrix completion. We further investigate the factors that shape the relative performance of SDID and SyNBEATS through a range of simulations. Finally, we unpack SyNBEATS’ strong comparative performance, and find it stems from both model architecture and SyNBEATS’ efficient use of time-series data in informing its predictions.
A
$\Delta_{i}^{b}(w)=\frac{1}{b}\sum_{t=1}^{b}\mathbb{E}_{\mathcal{L}_{w}^{(i-1)l+t}}\left[Y_{(i-1)l+t}\right]$
b𝑏bitalic_b time periods of the last treatment switch. We then show that, using this design, we can estimate the global treatment effect for non-burn-in periods
We assume that the centered second moment of the block-averaged potential outcomes in the burn-in periods (b𝑏bitalic_b)
Finally, we assume that all cross products of between the block-averaged potential outcomes in one block
Table 2 presents the bias, variance, mean squared error, and coverage achieved by those estimators. As expected, we notice that the use of burn-in periods considerably reduces the bias of the treatment effect estimation. Although the variance of the estimators with burn-in periods increases, the decrease in bias still results in a large decrease in the overall mean squared error. Furthermore, including burn-in periods leads to more accurate inferential results, with confidence intervals having coverage close to the nominal level. Again, we observe that the performance of the estimators with burn-in periods remains relatively stable regardless of the chosen length of the burn-in period.
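A minimal sketch of the burn-in idea, under simplifying assumptions: outcomes are averaged only over the non-burn-in periods of each block, and treated and control blocks are then compared with a plain difference in means. The variable names and the simulated example are illustrative and do not reproduce the paper’s exact estimator or variance formula.

```python
import numpy as np

def burn_in_estimate(y, w, block_len, burn_in):
    """Difference-in-means over non-burn-in periods of a switchback design.

    y: outcomes per period; w: 0/1 treatment per period (constant within a block);
    block_len: periods per block; burn_in: first periods of each block dropped.
    Illustrative sketch only, not the paper's exact estimator.
    """
    y, w = np.asarray(y, float), np.asarray(w)
    n_blocks = len(y) // block_len
    treated, control = [], []
    for i in range(n_blocks):
        start = i * block_len
        keep = slice(start + burn_in, start + block_len)  # drop burn-in periods
        block_mean = y[keep].mean()
        (treated if w[start] == 1 else control).append(block_mean)
    return np.mean(treated) - np.mean(control)

# Simple simulated switchback with 20 blocks of length 10; the true effect is 1.
rng = np.random.default_rng(1)
w = np.repeat(rng.integers(0, 2, 20), 10)
y = rng.normal(size=200) + np.where(w == 1, 1.0, 0.0)
print(burn_in_estimate(y, w, block_len=10, burn_in=3))
```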
B
The impulse response of the FFR peaks at horizon 1 and then steadily declines to zero, which is in line with Figure 1 in Bernanke et al. (2005).
we see a more pronounced price puzzle with the HDLP. The response is positive and significant for 30 months, peaking at horizon 20. The FAVAR impulse response is largely in line with Bernanke et al. (2005), with a small positive effect at early horizons, followed by a negative, though mainly insignificant, response after horizon 10.
For the HDLP, the response stays at a higher level for a longer period than for the FAVAR. The response of IP is also in line with Bernanke et al. (2005), with the largest drop around horizon 20, before eventually returning to zero. Notably, the response obtained with the HDLP is considerably smaller than the one for the FAVAR. Finally for the response of CPI,
We compare the HDLP impulse responses obtained from the desparsified lasso to the ones obtained from a 3-factor FAVAR as used in Bernanke et al. (2005). Details about the FAVAR estimation are provided in Appendix C.3. Figure 3 shows the impulse responses of FFR, IP, and CPI to a shock in the FFR of a size such that the FFR has unit response at horizon 0.
The impulse response of the FFR peaks at horizon 1 and then steadily declines to zero, which is in line with Figure 1 in Bernanke et al. (2005).
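For intuition, the sketch below estimates impulse responses with plain OLS local projections: for each horizon, the outcome is regressed on the shock and a few lagged controls, and the shock coefficient is collected. The paper’s HDLP instead uses a desparsified lasso in a high-dimensional setting, so this is only a simplified stand-in with made-up data.

```python
import numpy as np

def local_projection_irf(y, shock, controls, horizons=24, lags=4):
    """OLS local projections: for each horizon h, regress y_{t+h} on shock_t
    and lagged controls; the shock coefficient traces the impulse response.
    Plain OLS sketch; the HDLP in the paper uses a desparsified lasso instead."""
    irf, T = [], len(y)
    for h in range(horizons + 1):
        rows_y, rows_x = [], []
        for t in range(lags, T - h):
            lagged = controls[t - lags:t].ravel()            # lagged controls only
            rows_x.append(np.r_[1.0, shock[t], lagged])
            rows_y.append(y[t + h])
        X, Y = np.array(rows_x), np.array(rows_y)
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        irf.append(beta[1])                                  # coefficient on the shock
    return np.array(irf)

# Simulated example: y responds to the shock with geometrically decaying weights.
rng = np.random.default_rng(2)
shock = rng.normal(size=400)
y = np.convolve(shock, 0.8 ** np.arange(10))[:400] + 0.1 * rng.normal(size=400)
controls = np.column_stack([y, shock])
print(local_projection_irf(y, shock, controls, horizons=5).round(2))
```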
B
The first run of the experiment, with each run indicated by the time-stamp in Table 4, yielded different responses, whereby the numerical component of the answer with the pronoun “he” was $15, but when the pronoun was changed to “she,” the response switched to $12. The histogram distance is also the highest of all observed prior to that point. I therefore repeated the experiment at multiple subsequent times, noticing a repeatable outcome for the pronoun “she” but a sudden change to the answer for the pronoun “he” starting at 6:45PM, which began to match the outcome of $12. Despite the temperature parameter being fixed at zero, subsequent repeated tests yielded different outcomes ranging from $12 to $16 for the pronoun “she.” I speculate that this phenomenon may be due to the DaVinci interface having post-processing stages that check for sensitive attributes and apply remedies.
Table 5: Sensitive Attributes: Race. Statistics of bot responses to named prompt with sample race-stereotypical names. Although the numerical response of the bot is identical, there are measurable differences in the distribution of ranked responses. The last two tests are for mere curiosity.
I find mild variation among responses for different perturbations but strong effects associated with changing gendered pronouns (Table 4): changing the pronoun from “he” to “she” caused the perceived fair wage to change from $15 to $12. However, identical tests conducted at a later time showed a decrease in the response for “he” from $15 to $12, matching the results with the pronoun “she.” I hypothesize that post-processing checks are put in place to equalize outcomes based on cues associated with protected attributes. I also test for different prompts individualized with race-stereotypical names and find small inconsistencies (Table 5).
In the race-based experiment, I changed the prompt to be personalized to the generic name John and subsequently replaced it with names randomly selected among the most common race-stereotypical ones: DeShawn, Shanice, Jada, and Harrison. The results are shown in Table 5.
Table 4: Effect of Sensitive Attributes: Gender. I test GPT-3 by changing the original pronoun “They” in the prompt to “He” and “She.” Gendered pronouns changed the response, dropping from a consistent answer of $15 to $12 when using the pronoun “she.” However, in a later identical test (see the time stamps), the response to the pronoun “he” returned by GPT-3 changed to match the “she” response despite identical prompt sequences. This suggests that additional post-processing steps are implemented in the DaVinci interface beyond sampling from the softmax probabilities.
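A sketch of the perturbation protocol described above, under stated assumptions: query_model is a stub standing in for the completion endpoint (the experiments query GPT-3 DaVinci at temperature zero), the prompt template is illustrative rather than the paper’s exact wording, and the distance is a simple total-variation distance between response histograms, which may differ from the histogram distance reported in the tables.

```python
from collections import Counter

def query_model(prompt):
    """Stub standing in for the completion endpoint used in the experiments
    (GPT-3 DaVinci at temperature 0); replace with a real API call."""
    return "$15"  # placeholder answer so the sketch runs end to end

def response_histogram(prompt, n=20):
    """Empirical distribution of responses over repeated queries."""
    counts = Counter(query_model(prompt) for _ in range(n))
    return {answer: c / n for answer, c in counts.items()}

def total_variation(p, q):
    """Simple histogram distance between two response distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Illustrative prompt template, not the paper's exact wording.
template = "{pronoun} works 40 hours a week. What is a fair hourly wage?"
hist_he = response_histogram(template.format(pronoun="He"))
hist_she = response_histogram(template.format(pronoun="She"))
print(total_variation(hist_he, hist_she))
```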
C
One could use a similar argument to defend calling with probability at the upper bound of the interval—$\min\left\{\frac{n}{x(1+n)},1\right\}$. If the opponent somehow knew that betting $n$ was part of an optimal strategy but did not know that checking was, then perhaps we should follow an equilibrium of the game where the opponent is restricted to only betting $x$ or $n$, in which case our calling frequency should focus on dissuading the opponent from betting $x$ with a winning hand instead of $n$.
The first argument seems much more natural than the second, as it seems much more reasonable that a human is aware they should check sometimes with weak hands, but may have trouble computing that $n$ is the optimal size and guess that it is $x$. However, both arguments could be appropriate depending on assumptions about the reasoning process of the opponent. The entire point of Nash equilibrium as a prescriptive solution concept is that we do not have any additional information about the players’ reasoning process, so we will opt to assume that all players are fully rational. If any additional information is available—such as historical data (either from our specific opponents’ play or from a larger population of players), observations of play from the current match, a prior distribution, or any other model of the reasoning mechanism of the opponents—then we should clearly utilize this information and not simply follow a Nash equilibrium. Without any such additional information, it does not seem clear whether we should call with the lower bound probability, upper bound probability, or a value in the middle of the interval. The point of the equilibrium refinements we have considered is exactly to help us select between equilibria in a theoretically principled way in the absence of any additional information that could be used to model the specific opponents.
According to our above analysis, the unique Nash equilibrium strategy for player 1 is to bet 2 with probability 1 with a winning hand, to bet 2 with probability $\frac{2}{3}$ with a losing hand, and to check with probability $\frac{1}{3}$ with a losing hand. The Nash equilibrium strategies for player 2 are to call a bet of 2 with probability $\frac{1}{3}$, and to call a bet of 1 with probability in the interval $\left[\frac{1}{2},\frac{2}{3}\right]$. As it turns out, the unique trembling-hand perfect equilibrium strategy for player 2 is to call vs. a bet of 1 with probability $\frac{2}{3}$.\footnote{Observe that this game explicitly shows that Theorem 1 does not hold in general for extensive-form games, since all of the Nash equilibria in this game satisfy the alternative formulation of trembling-hand perfect equilibrium. To see this, consider the sequence of strategies for player 1 that bet 1 with probability $\epsilon$ with a winning hand and with probability $\frac{\epsilon}{2}$ with a losing hand. This sequence will converge to the unique Nash equilibrium strategy for player 1 as $\epsilon\rightarrow 0$, and furthermore player 2 is indifferent between calling and folding vs. a bet of 1 against all of these strategies, so all of player 2’s Nash equilibrium strategies are best responses. So the equivalent formulation of trembling-hand perfect equilibrium is only valid for simultaneous strategic-form games and does not apply to extensive-form games.} Since this is a one-step extensive-form imperfect-information game, this is also the unique quasi-perfect equilibrium. And since player 2’s strategy is fully mixed, this is also the unique one-sided quasi-perfect equilibrium. However, the unique observable perfect equilibrium strategy for player 2 is to call with probability $\frac{5}{9}$. Interestingly, the OPE corresponds to a different strategy for this game than all the other refinements we have considered, and none of them correspond to the “natural” argument for calling with probability $\frac{1}{2}$ based on an assumption about the typical reasoning of human opponents. The OPE value of $\frac{5}{9}$ corresponds to the solution assuming only that player 1 has bet 1 but that otherwise all players are playing as rationally as possible. Note also that the OPE does not simply correspond to the average of the two interval boundaries, which would be $\frac{7}{12}$.
a Nash equilibrium. Even if additional information is available about the opponents, e.g., from historical data or observations of play, we would often still opt to start playing a Nash equilibrium strategy until we are confident in our ability to successfully exploit opponents by deviating [11, 14]. It is well known that several conceptual and computational limitations exist for Nash equilibrium. For multiplayer and two-player non-zero-sum games, it is PPAD-hard to compute or approximate one Nash equilibrium [6, 7, 8, 25], different Nash equilibria may give different values to the players, and following a Nash equilibrium strategy provides no performance guarantee. Even for two-player zero-sum games, in which these issues do not arise, there can still exist multiple Nash equilibria that we must select from. Therefore several solution concepts that refine Nash equilibrium in various ways have been proposed to help select one that is more preferable in some way. Most of the common equilibrium refinements are based on the idea of ensuring robustness against certain arbitrarily small “trembles” in players’ execution of a given strategy. Variants of these Nash equilibrium refinements have been devised for simultaneous strategic-form games as well as sequential games of perfect and imperfect information. In this paper we will be primarily interested in sequential games of imperfect information, which are more complex than the other game classes and have received significant interest recently in artificial intelligence due to their ability to model many important scenarios. To simplify analysis we will primarily be studying a subclass of these games in which there are two players, only one player has private information, and both players take a single action; however, our results apply broadly to extensive-form imperfect-information games. We will also be primarily focused on two-player zero-sum games, though some analysis also applies to two-player non-zero-sum and multiplayer games. We will show that existing Nash equilibrium refinement concepts have limitations in sequential imperfect-information games, and propose the new concept of observable perfect equilibrium that addresses these limitations.
So we have shown that player 2 has infinitely many Nash equilibrium strategies that differ in their frequencies of calling vs. “suboptimal” bet sizes of player 1. Which of these strategies should we play when we encounter an opponent who bets a suboptimal size? One argument for calling with probability at the lower bound of the interval—$\frac{1}{1+x}$—is as follows (note that the previously-computed equilibrium strategy uses this value [2]). If the opponent bets $x$ as opposed to the optimal size of $n$ that he should bet in equilibrium, then a reasonable deduction is that he isn’t even aware that $n$ would have been the optimal size, and believes that $x$ is optimal. Therefore, it would make sense to play a strategy that is an equilibrium in the game where the opponent is restricted to only betting $x$ (or to betting 0, i.e., checking). Doing so would correspond to calling a bet of $x$ with probability $\frac{1}{1+x}$. The other equilibria pay more heed to the concern that the opponent could exploit us by deviating to bet $x$ instead of $n$; but we need not be as concerned about this possibility, since a rational opponent who knew to bet $n$ would not bet $x$.
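The 1/(1+x) calling frequency follows from a standard indifference argument: the caller must make a bluff of size x exactly break even. A quick symbolic check, assuming the pot is normalized to 1 (an inference from the 1/(1+x) expression, not stated explicitly here):

```python
from sympy import symbols, solve

x, c = symbols("x c", positive=True)
# A bluff of size x into a pot normalized to 1: the bettor wins the pot when the
# opponent folds (probability 1 - c) and loses the bet of x when called
# (probability c). Checking a losing hand is worth 0, so indifference requires:
ev_bluff = (1 - c) * 1 - c * x
print(solve(ev_bluff, c))                # [1/(x + 1)]  -> call with probability 1/(1+x)
print(solve(ev_bluff, c)[0].subs(x, 1))  # 1/2 for a pot-sized bet, the interval's lower bound
```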
A
The proof follows from the observation that there is no profitable coalitional deviation involving two opposed-biased senders. Likewise, the receiver cannot gain from a coalitional deviation because the equilibrium is already efficient. Therefore, the equilibrium in Proposition 4 is strong (Aumann, 1959) and coalition-proof (Bernheim et al., 1987).
Findings in the previous section show that there is a unique communication protocol that is efficient, minimal, and resilient to collusion. This protocol, called public advocacy, requires the receiver to consult sequentially and publicly two senders with conflicting interests. This section discusses the robustness of these findings, and highlights further differences with respect to related work.
The rest of the paper is organized as follows. Section 2 reviews the related literature, and Section 3 presents the model. The main results are in Section 4. Section 5 discusses the model’s assumptions and the robustness of the results. Finally, Section 6 concludes.
The second part of this paper focuses on the last type of minimal arrangement left to analyze: public advocacy, that is, the sequential and public consultation of senders with conflicting interests over decision-making. The main result shows that public advocacy is efficient and robust to collusion. Importantly, it is the only minimal communication protocol to have these desirable properties. A characterization of the efficient equilibrium is provided, showing the mechanism through which the receiver achieves efficiency: the report delivered by the first speaker sets the burden of proof borne by the second speaker, who has to prove its case “beyond a reasonable doubt.” The endogenously determined burden of proof ensures that both senders consistently report truthfully. As a result, the receiver learns their private information and makes fully informed decisions. No resources are wasted in the attempt to persuade the receiver. All players obtain the payoff they would get if there were no information asymmetries in the first place.
The main result has potentially significant implications for the understanding of organizational design. It shows that only one minimal protocol can achieve efficiency under the threat of senders’ collusion. This protocol prescribes the sequential and public consultation of two informed agents with conflicting interests. The proposed arrangement has a plain structure and does not require commitment power, as both ex ante and in the interim the organization adheres to the protocol. Importantly, such an arrangement always yields an efficient outcome for any configuration permitted by the model described in Section 3. This finding provides a rationale for using public advocacy structures.
A
The generalized FAS is thus $FAS=[0.5,1]$ and does not contain $\beta$.
that are themselves endogenous explanatory variables with $\gamma_{\ell}\alpha_{\ell}\neq 0$.
of $Z_{1}$ and $Z_{2}$, only identifies $\beta$ when $Z_{2}$ is
where $Z_{2}$ violates the exclusion assumption and $Z_{1}$ and
Because $Z_{2}$ is here an endogenous explanatory variable, and because
D
Based on the earlier discussion, these cost estimates should be interpreted as the remaining costs the firm must incur between application submission and FDA approval.
For instance, we can use discontinuation announcements made after Phase II (but before Phase III) clinical trials to identify the cost of Phase III clinical trials.
Likewise, we can use discontinuation announcements after Phase I clinical trials to identify the costs of Phase II.
To identify the costs, we use discontinuation announcements made just before the FDA application—that is, we use the announcements made after Phase III clinical trials.
For instance, we can use discontinuation announcements made after discovery (but before Phase I clinical trials) to identify the costs of Phase I clinical trials.
A
We also study an easier version of the Outcome-Effect Question: we ask how one applicant can affect a single other applicant’s match (Section A.2).
We find both $\mathsf{TTC}$ and $\mathsf{DA}$ have low complexity according to this measure.
Our results under this complexity measure give a separation between $\mathsf{DA}$ and $\mathsf{TTC}$:
This result gives perhaps the most application-oriented distinction between $\mathsf{DA}$ and $\mathsf{TTC}$ in our paper. With a single complexity measure, it gives a precise sense in which priorities relate to the outcome matching in a more complex manner in $\mathsf{TTC}$ than in $\mathsf{DA}$, corroborating and clarifying past intuitions from both practitioners [BPS05] and theorists [LL21].
While our results in Section 3 separate the simple mechanism $\mathsf{SD}$ from $\mathsf{TTC}$ and $\mathsf{DA}$, they do not distinguish between the complexity of $\mathsf{TTC}$ and $\mathsf{DA}$ themselves.
A
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments
Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform
With $p=0.7$ and $q$ uniform over $[0.5,0.7]$, we have verified
Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
D
But in order to understand the processes of (de)-industrialization, it is crucial to develop theories that explore the interaction among multiple spatial linkages.
about the circular causality between migration and knowledge flows. In the present work, production of knowledge affects the firms’ capacity to innovate, which in turn allows the production of higher quality manufactured varieties in a region. The chance of successful innovations depends on the spatial distribution of mobile agents in the economy. Therefore, it is assumed that regional knowledge levels transfer imperfectly between regions, depending on the related variety (Frenken et al., 2007), i.e., the relative importance of interaction between agents within
We aim to fill this gap by explaining how the intra-regional and inter-regional interaction between researchers impacts knowledge creation and affects the spatial distribution of agents.
returns in manufacturing interact to shape the spatial economy. Knowledge levels translate into firms’ capacity to innovate. The chance of successful innovations depends on the spatial distribution of mobile agents in the economy, i.e. on the intra- and inter-regional
the same region rather than between different regions – which depends on several factors such as cognitive proximity, cultural factors, diversity of skills and abilities, among others. We assume further that the increasing complexity of each variety is offset by the available regional quality levels (cf. Section 3.3) generated from knowledge spillovers. Our modeling strategy is such that indirect utility differentials, which govern the migration of mobile agents between regions, are determined solely by trade linkages and by the spatial dimension of regional interaction (cf. Section 3.4). We thus avoid the explicit use of dynamics for the innovation process. This allows for great analytical tractability and lets us focus on spatial outcomes as a result of pecuniary factors and the economic geography of knowledge spillovers (Bond-Smith, 2021).
B
We characterize the extreme points of monotone function intervals and apply this result to several economic problems. We show that any extreme point of a monotone function interval must either coincide with one of the monotone function interval’s bounds, or be constant on an interval in its domain, where at least one end of the interval reaches one of the bounds. Using this result, we characterize the set of distributions of posterior quantiles, which coincide with a monotone function interval. We apply this insight to topics in political economy, Bayesian persuasion, and the psychology of judgment. Furthermore, monotone function intervals provide a common structure to security design. We unify and generalize seminal results in that literature when either adverse selection or moral hazard afflicts the environment.
In the first class of applications, we use Theorem 1 and Choquet’s theorem to characterize the set of distributions of posterior quantiles. Consider a one-dimensional state and a signal (i.e., a Blackwell experiment). Each signal realization induces a posterior belief. For every posterior belief, one can compute the posterior mean. Strassen’s theorem (Strassen 1965) implies that the distribution of these posterior means is a mean-preserving contraction of the prior. Conversely, every mean-preserving contraction of the prior is the distribution of posterior means under some signal. Instead of posterior means, one can derive many other statistics of a posterior. The characterization of the extreme points of monotone function intervals leads to an analog of Strassen’s theorem, which characterizes the set of distributions of posterior quantiles (Theorem 2 and Theorem 3). The set of distributions of posterior quantiles coincides with an interval of CDFs bounded by a natural upper and lower truncation of the prior.
Theorem 1 alongside Choquet’s theorem leads to the characterization of the set of distributions of posterior quantiles. This characterization is an analog of the celebrated characterization of the set of distributions of posterior means that follows from Strassen’s theorem (Strassen 1965). Quantiles are important in settings where only the ordinal values or relative rankings of the relevant variables are meaningful, rather than the cardinal values or numeric differences (e.g., voting, grading or rating schemes, measures of potential losses such as the value-at-risk), or in settings where moments are not well-defined (e.g., finance or insurance). In this regard, the characterization of the set of distributions of posterior quantiles is useful for identifying possible outcomes from a signal (e.g., posterior value-at-risk that arises from a signal), as well as optimal policies (e.g., optimal voter signals in an election) in these settings.
It is worthwhile to acknowledge the paper’s limitations. Regarding the distributions of posterior quantiles, the analysis is restricted to a one-dimensional state space. Moreover, while the characterization parallels the well-known characterization of distributions of posterior means, it provides little intuition for how distributions of other statistics (say, the posterior $k$-th moment) may behave. In particular, while the characterization of the set of distributions of posterior quantiles allows one to compare Bayesian persuasion problems when the receiver has either an absolute loss function or a quadratic loss function, optimal signals under other loss functions remain largely under-explored.
The first application of the extreme point characterization to the distributions of posterior quantiles is related to belief-based characterizations of signals, which date back to the seminal contributions of Blackwell (1953) and Strassen (1965). Blackwell’s and Strassen’s characterizations also lead to the characterization of the set of distributions of posterior means. This paper’s characterization of the set of distributions of posterior quantiles (Theorem 2 and Theorem 3) can be regarded as an analog. In a recent paper, Kolotilin and Wolitzky (2024) provide an alternative proof of Theorem 2, which does not require the use of extreme points.
C
The core components of the analysis, i.e. the extraction of CO2 emissions data for Hungarian ETS firms and the initialization of the studied decarbonization strategies are available at: https://github.com/jo-stangl/reducing_employment_and_economic_output_loss_in_rapid_decarbonization_scenario
To empirically test our framework, we approximate hypothetical decarbonization efforts with the removal of firms from the Hungarian production network. A firm that is removed from the production network no longer supplies its customers nor does it place demand to its (former) suppliers in the subsequent time step. It also stops emitting CO2. This hypothetical scenario allows us to quantify the worst-case outcomes in terms of job and economic output loss of a strict command-and-control approach towards decarbonization. In our simulation, a decarbonization strategy is realized as follows. We first rank firms according to four different characteristics: CO2 emissions, number of employees, systemic importance, and CO2 emissions per systemic importance. Then, for each of these four strategies (shown in Fig. 3) firms are cumulatively removed from the production network to assess the effects of the given heuristic. The first data point in Fig. 3 represents the highest ranked firm being removed from the production network. The second data point represents the highest and the second highest ranked firm, according to the respective heuristic, and so forth until all ETS firms are removed from the network. Each set of closed firms reduces the total CO2 output by the combined annual CO2 emissions of the respective firms. The closure of firms initializes a shock in the production network which results in the loss of jobs and economic output. These effects are calculated using the ESRI shock propagation algorithm [28], once in the output-weighted and once in the employment-weighted version. The removal of all 119 Hungarian ETS firms results in 31.7% of job and 38.2% of economic output loss. This is the same for all decarbonization strategies, but the order in which firms are removed from the production network determines the fractions of expected job and output loss on the way to this final value. The shock propagation is deterministic, which means that the same set of closed firms always leads to the same outcomes in terms of CO2 reduction, job and output losses. The time horizon of our analysis is one year, as we consider the annual emissions of companies and a shock propagation on a production network that is assumed to remain constant. The estimated job and output losses can therefore be considered worst-case estimates that will likely be smaller when applied to the economy in the real world. Employees who lost their jobs would try to find a new employer. Some jobs might in fact be easily transferred between firms or even sectors, while highly specialized jobs might be harder to replace, see for example [41, 42]. We do not consider these effects in the present framework, but project immediate total potential job loss as a consequence of a decarbonization policy imposed by a hypothetical social planner. In addition, firms that lost a supplier or a buyer would try to establish new supply relations. In the present modeling framework this is only captured heuristically by assuming that firms with low market shares within their respective NACE4 industry sector are more easily replaceable and firms with high market shares are more difficult to replace. Explicitly considering rewiring and the reallocation of jobs during the shock propagation remains future work. In theory, all combinations of removals of the 119 ETS firms would need to be tested to find the truly optimal strategy with respect to maximum CO2 reduction and minimal expected job and output loss.
Since this would result in a combinatorial explosion of possibilities, our goal here is to find a satisfying heuristic that allows for acceptable levels of expected job and output loss for a given CO2 reduction target. In total, we test eleven different heuristics for their potential to rapidly reduce CO2 emissions while securing high levels of employment and economic output. The outcomes for the four main decarbonization strategies are displayed in Fig. 3 and discussed in the subsequent section. The remaining strategies are shown and discussed in the SI section S2.
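A minimal sketch of the ranking-and-cumulative-removal loop, assuming a firm-level table with emissions, employment, and systemic-risk columns; the job- and output-loss numbers in the paper come from the ESRI shock-propagation algorithm, which is only stubbed here.

```python
import pandas as pd

def decarbonization_curve(firms, rank_by, loss_fn):
    """Rank ETS firms by `rank_by` (descending) and remove them cumulatively,
    recording CO2 saved and the loss returned by `loss_fn`, a stand-in for the
    employment- or output-weighted ESRI shock propagation (not implemented here)."""
    ranked = firms.sort_values(rank_by, ascending=False)
    removed, curve = [], []
    for firm_id, _ in ranked.iterrows():
        removed.append(firm_id)
        curve.append({
            "n_removed": len(removed),
            "co2_saved": firms.loc[removed, "co2"].sum(),
            "loss": loss_fn(removed),
        })
    return pd.DataFrame(curve)

# Illustrative data and a placeholder loss function.
firms = pd.DataFrame({"co2": [90, 60, 30], "employees": [200, 800, 50],
                      "esri": [0.05, 0.20, 0.01]},
                     index=["firm_a", "firm_b", "firm_c"])
firms["co2_per_esri"] = firms["co2"] / firms["esri"]
print(decarbonization_curve(firms, "co2_per_esri", lambda ids: 0.0))
```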
The employment-weighted economic systemic risk index (EW-ESRI) is a measure of potential job loss in case of a production stop of a single company or a set of companies in the entire economy. As explained in the Methods section in the main text, it captures the indirect job losses of firms in the production network whose production becomes constrained due to downstream or upstream shocks caused by the initial set of failing companies. Empirically, we are able to do this re-weighting of the ESRI for the Hungarian production network since a dataset on the number of employees of a large subset of firms is available to us through the Hungarian Central Bank. This dataset covers 2,333,975 jobs in total, which is 65.6% of the 3,557,700 employees in the year 2019 according to official statistics [36]. In order to understand the coverage of our firm-level employment data set, we aggregate jobs by NACE level 1 category of their respective employing firms. The coverage of different economic sectors by our employment data varies. In Fig. 9 we compare the two relative distributions of employees per Nace level 1 sector of both the official statistics and our aggregated data. The sectors in which Hungarian firms operate that are covered by the EU emission trading system (ETS) are colored, and the remaining sectors are held in gray. As can be seen in Fig. 9, our firm-level employment data recovers essential features of the official employment distribution, such as the relative importance between sectors. Also, the majority of jobs are located in the manufacturing sector, even though this sector exhibits a higher share of 30.53% of employees in our data set. In general, our data set exhibits the best coverage in the producing and in the service sectors and shows the worst coverage in the sectors higher up in the NACE classification scheme, like the public or education sector.
This work was supported in part by the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology as part of the funding project GZ 2021-0.664.668 (S.T.), the Austrian Science Fund FWF under P 33751 (S.T.), the Austrian Science Promotion Agency, FFG project under 39071248 (S.T.) and the OeNB Hochschuljubiläumsfund P18696 (S.T.). We thank Janos Kertesz for helpful discussions.
From the beginning of 2015 to the second quarter of 2018, only transactions exceeding 1 million HUF of tax content for the sum of the transactions between two firms in a given reporting period (monthly, quarterly, annually) were recorded. In the period from the third quarter of 2018 to the second quarter of 2020, the threshold was lowered to 100,000 Hungarian Forint (HUF), but the threshold now referred to individual transactions. This brought many more firms and supply relations into view, while some firms dropped out of the dataset when their individual transactions stayed below this new threshold. As of the third quarter of 2020, there is no longer a threshold and all inter-firm invoices must be reported. For this study we focus on the year 2019 for two reasons: first, at the time of conducting our analysis, 2019 was the most recent year for which CO2 emissions data has been available through the EU Emission Trading System (ETS). Second, 2019 was the last year before the COVID-19 pandemic and the subsequent economic crisis, which disrupted many supply relations between firms and arguably makes the network snapshots of these subsequent years unrepresentative as initial states of the Hungarian economy.
C
Despite being able to solve the illustrative instance presented in Section 4, the state-of-the-art mathematical optimization solvers we tested were unable to solve this more realistic instance. We experimented with different complementary slackness reformulation options and primal-dual equality reformulation available in the BilevelJuMP.jl package (Garcia et al., 2022) along with reductions in the instance size. However, trying to solve a simplified instance with Gurobi v.9.5.0 (Gurobi Optimization, 2020) or CPLEX v.12.10 (IBM, 2021) within 48 hours did not lead to a solution with integrality gaps lower than 100 %. An analogous issue has been faced by Virasjoki et al. (2020) where the authors were not able to solve their large-scale mixed-integer quadratically constrained quadratic programming problems with any of the state-of-the-art solvers available in GAMS (GAMS, 2023).
For each combination of the input parameters, we analyse the total welfare (the value of the upper-level problem objective function), the share of VRE in the total generation mix and the total generation level, which we refer to as output factors.
In this paper, we study the impact of the TSO infrastructure expansion decisions in combination with carbon taxes and renewable-driven investment incentives on the optimal generation mix. To examine the impact of renewables-driven policies we propose a novel bi-level modelling assessment to plan optimal transmission infrastructure expansion. At the lower level, we consider a perfectly competitive energy market comprising GenCos who decide optimal generation levels and their own infrastructure expansion strategy. The upper level consists of a TSO who proactively anticipates the aforementioned decisions and decides the optimal transmission capacity expansion plan. To supplement the TSO decisions with other renewable-driven policies, we introduced carbon taxes and renewable capacity investment incentives in the model. Additionally, we accounted for variations in GenCos’ and TSO’s willingness to expand the infrastructure by introducing an upper limit on the generation (GEB) and transmission capacity expansion (TEB) costs. Therefore, as the input parameters for the proposed bi-level model, we considered different values of TEB, GEB, incentives and carbon tax. This paper examined the proposed modelling approach by applying it to a simple, three-node illustrative case study and a more realistic energy system representing Nordic and Baltic countries. The output factors explored in the analysis are the optimal total welfare, the share of VRE in the optimal generation mix and the total amount of energy generated.
Therefore, following the solution approach proposed in (Virasjoki et al., 2020), we applied an iterative procedure in which we discretise and exhaustively enumerate and fix all possible upper-level decisions and solve the lower-level problem. Then, we determine ex post the decisions that yield optimal solutions. The discrete values we considered for the capacity expansion of the transmission lines are {0 MW, 3000 MW, 6000 MW, 9000 MW}. Taking into account the existence of 10 transmission lines in the system, the aforementioned enumerative set leads to 1,048,576 lower-level problems that must be solved for each combination of the input parameters.
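The enumeration step can be written directly with itertools: four candidate expansion levels on each of the ten lines gives 4^10 = 1,048,576 lower-level problems per combination of input parameters. The lower-level market-clearing solve is stubbed below, since it depends on the full model.

```python
from itertools import product

expansion_levels = [0, 3000, 6000, 9000]   # MW, per transmission line
n_lines = 10

def solve_lower_level(expansion_plan):
    """Placeholder for solving the perfectly competitive market-clearing problem
    given fixed transmission expansions; would return the resulting welfare."""
    return 0.0

best_plan, best_welfare = None, float("-inf")
count = 0
for plan in product(expansion_levels, repeat=n_lines):
    welfare = solve_lower_level(plan)
    if welfare > best_welfare:
        best_plan, best_welfare = plan, welfare
    count += 1

print(count)   # 4**10 = 1,048,576 candidate expansion plans
```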
The proposed model assumes the TSO to take a leading position and anticipate the generation capacity investment decisions influenced by its transmission system expansion. This assumption leads to the bi-level structure of the proposed model. Such a modelling approach is widely used in energy market planning. As an example, Zhang et al. (2016) exploited a bi-level scheme to consider integrated generation-transmission expansion at the upper level and modified unit-commitment model with demand response at the lower level. Virasjoki et al. (2020) considered a bi-level structure when formulating the model for optimal energy storage capacity sizing and use planning. In this paper, we reformulate the model proposed in (Virasjoki et al., 2020) to consider welfare maximising TSO at the upper level, making decisions in the transmission lines instead of energy storage. An analogous strategy has been considered by Siddiqui et al. (2019) during the investigation of the indirect influence of the TSO’s decisions as a part of an emissions mitigation strategy aligned with different levels of carbon charges in a deregulated industry. Aimed at the analytical implications, their paper neglects VRE and demand-associated uncertainty, as well as the heterogeneity of the GenCos, while assuming unlimited generation capacity. These shortcomings are addressed in the current paper by means of introducing VRE intermittency and allowing GenCos to invest in diversified power generation technologies. Furthermore, we account for various investment budget portfolios for TSO and GenCos to investigate how GenCos’ investment capital availability influences the total VRE share in the optimal generation mix.
C
Let $f:\mathcal{R}\rightarrow\Omega$ be SP. Then, for each vector of $\Omega$-restricted peaks $\emph{p}\in\Omega^{a}$, $|\Omega(\emph{p})|\leq 2$.
Definition 4 imposes some conditions on the left-decisive sets. Let $R\in\mathcal{R}$. The first condition requires that the left-decisive sets of a binary decision function $g_{p(R)}$ have to be subsets of $(N(R)\cap A)\cup D$. This condition implies that, once the $\Omega$-restricted peaks $p(R)$ are known, the decision between $\underline{\omega}(p(R))$ and $\overline{\omega}(p(R))$ only depends on the opinion of the agents with single-dipped preferences and those agents with single-peaked preferences whose $\Omega$-restricted peaks at $R$ are located between the two preselected alternatives.
Vorsatz (2018) also follow a two-step procedure. In their model, the location of the peak/dip of each agent is known, so the first step of their rules asks which agents have single-peaked preferences. As a result of the first step, both the type of preference of each agent and the location of the peaks and dips are known. In the domain analyzed here, the type of preference of each agent is public information and in the first step we ask agents with single-peaked preferences about their peaks. As a result, the type of preference of each agent and the location of all peaks are known. Note that even though the social planner in our domain has less information after the first step (since she does not know the location of the dips), at most two alternatives are preselected in both settings. If two alternatives are preselected, the second step of Alcalde-Unzu and
Our characterization establishes that the first step of the strategy-proof rules on our domain is similar to the strategy-proof rules on the single-peaked preference domain, and the second step is similar to the strategy-proof rules on the single-dipped preference domain. As a consequence, previous results in the literature are generalized. To see this, remember that in the first step of the two-step procedure, only the agents with single-peaked preferences are asked about their peaks and that either a single alternative or a pair of contiguous alternatives is preselected. Proposition 3 shows that if we define an intuitive order on the possible sets of preselected alternatives, a generalized median voter function has to be applied in the first step. Moreover, in the second step of the two-step procedure, a binary decision problem is faced and we find that the choice between the two preselected locations is made in the same way as in the strategy-proof rules on the single-dipped preference domain (Proposition 4). Thus, if the set of agents with single-dipped preferences is empty, only the first step applies and only single alternatives appear as the outcome of the first step. Consequently, the two-step procedure reduces to the standard generalized median voter rules of Moulin (1980) and Barberà and
In the first step, the agents with single-peaked preferences indicate their $\Omega$-restricted peaks.
D
Economically motivated short-run restrictions have been an integral part of identifying SVAR models in numerous applications since Sims, (1980).
Note that identification requires at least $(n-1)n/2$ restrictions, and incorrect restrictions lead to inconsistent estimates.
Consider the scenario where the shocks are sufficiently non-Gaussian, allowing the first-step estimator to consistently estimate all elements of $B_{0}$. If a restriction is correct, the consistency of the first-step estimator ensures that the first term of the adaptive weight for that restriction converges to infinity. Consequently, it becomes costly to deviate from correct restrictions, and the estimator shrinks towards those restrictions in line with the data. However, if a restriction is not correct, the first-step estimator will not converge to the restriction, and the first term of the adaptive weight for an incorrect restriction will not diverge to infinity. As a result, the estimator can deviate from incorrect restrictions.
One crucial distinction from traditional approaches, where restrictions are treated as binding constraints, is that the ridge penalty mitigates the adverse effects of incorrect restrictions, especially as the sample size increases. In small samples, the statistical identification approach may not provide robust evidence against invalid restrictions, causing the estimator to shrink towards them. However, with more data, the approach can detect that shrinking towards the incorrect restriction leads to dependent shocks, resulting in smaller tuning parameters determined by cross-validation. This reduces the impact of incorrect restriction with increasing sample size. Further details on the selected tuning parameters are provided in the Appendix.
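As an illustration of the mechanism described above, the sketch below constructs an adaptive penalty weight for a zero restriction on an element of B: the weight blows up when the first-step estimate of that element is close to zero, so deviating from a restriction the data support becomes costly, while the weight stays bounded for a restriction the first-step estimator contradicts. This is a generic sketch, not the paper’s exact estimator or tuning procedure.

```python
import numpy as np

def adaptive_ridge_penalty(B, B_first_step, restrictions, lam, gamma=1.0):
    """Illustrative adaptive ridge penalty for zero restrictions on B.

    restrictions: list of (i, j) entries restricted to zero.
    The weight 1/|b_hat_ij|**gamma diverges when the (consistent) first-step
    estimate supports the restriction and stays bounded otherwise.
    Sketch only; the paper's estimator and cross-validated tuning differ.
    """
    penalty = 0.0
    for i, j in restrictions:
        weight = 1.0 / (np.abs(B_first_step[i, j]) ** gamma + 1e-12)
        penalty += lam * weight * B[i, j] ** 2
    return penalty

# Example: penalize deviations from a zero restriction on B[0, 1].
B_hat_first = np.array([[1.0, 0.02], [0.5, 1.0]])   # hypothetical first-step estimate
B_candidate = np.array([[1.0, 0.30], [0.5, 1.0]])
print(adaptive_ridge_penalty(B_candidate, B_hat_first, [(0, 1)], lam=0.1))
```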
Despite their popularity in applied work, the disadvantages of restriction-based identification methods are well known: incorrect restrictions lead to biased estimates.
D
At the same time, our model allows us to capture two crucial features of real-world digital advertising: the platform’s ability to match consumers with their preferred firms and the value created through personalized pricing, which offers discounts to lower-value consumers who would not purchase at the monopoly price.
The model also accommodates a broader interpretation where each firm offers a range of products varying in quality and price. The platform’s information enables firms to guide each consumer to a different quality-price pair within their product line, a process known as product steering. This process combines value creation and extraction, similar to our single-product model. As the variation in product quality diminishes (i.e., the products of each firm become more alike), product steering becomes akin to personalized pricing.999See Bar-Isaac and
Digital platforms offer a variety of advertisement formats, such as sponsored links, images, or videos. The content, often a product-price pair selected from the advertiser’s portfolio, can differ across media channels and is tailored to individual consumers. Advertisers face three key decisions: identifying the target users, selecting the appropriate advertisement for each user, and determining the bid for each user’s attention. The best strategy depends on the platform’s nature and the advertiser’s product line. For instance, a brand with multiple product lines would adopt a different approach than a single-product firm, adjusting its campaign according to the type of platform, e.g., search engines, social networks, or third-party publishers.
Finally, in our model, personalized pricing is exclusive to the platform and only applies to the firm that wins the sponsored slot. This is due to the consumers’ unit demand and the significant difference in the information each firm possesses on- and off-platform. However, in real-world scenarios, multiple forms of price discrimination, such as market segmentation and nonlinear pricing, can occur both on and off the platform. In that sense, our model accentuates the differences between these two sales channels.
implications of managed campaigns for equilibrium product quality, relative to our paper’s exploration of showrooming and its impact on pricing strategies in the off-platform markets.
A
Table 1 gives an overview of the fit. We note that in all our analysis we separately estimate the weights and calculate the flow matrix for the years 2001–2021.
The statistics in table 1 show that these approximate corrections do in fact improve the fit of the model significantly. Interestingly the fit does not only improve with the mining data but also with the use data, which indicates that the correction has also an indirect effect on the column sums of the flow matrix.
Table 1: Fit of the flow matrix with observed PR mining and P use. The table shows the average R2superscript𝑅2R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT value (for the years 2001–2021) for the fit of F𝐹Fitalic_F with M𝑀Mitalic_M and U𝑈Uitalic_U separately as well as combined, as well as the standard deviation. Further we show the average D𝐷Ditalic_D as well as its standard deviation. Note that FM⁢2superscript𝐹𝑀2F^{M2}italic_F start_POSTSUPERSCRIPT italic_M 2 end_POSTSUPERSCRIPT to FM⁢6superscript𝐹𝑀6F^{M6}italic_F start_POSTSUPERSCRIPT italic_M 6 end_POSTSUPERSCRIPT have been calculated based on weights optimized for each model.
While the original trade data already provides a useful approximation of the P flows per se, it is worth noting that the ranking of the countries in terms of exports and imports differs from the ranking in terms of the material flow. This can already be observed from the aggregated data presented at the bottom of Figure 2. For example, we observe that Belgium appears among the P exporters, but not in the panel summarizing P out-flow. Similarly, the USA does not appear in the panel describing in-flow, since from the perspective of material flow, only little P used in the USA originates from outside the country. Also, the massive changes with respect to the Chinese contribution to global phosphate flows become much clearer by looking at the material flow of P (see also knight_china for a comparison).
While using net exports already produces a good fit of the flow matrix with the data, we observe that just by optimizing the weights we can improve the average $R^2$ from 94% to 97% (first vs. second column).
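To make the weight-optimization step concrete, here is a minimal, self-contained sketch of fitting a trade-based flow matrix to observed mining and use totals and reporting the resulting $R^2$. The variable names, the exponential weight parametrisation, and the synthetic data are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): fitting a trade-based flow matrix F to
# observed phosphate-rock mining (M, row totals) and P use (U, column totals).
# All variable names and the weight parametrisation are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20                               # number of countries (illustrative)
trade = rng.random((n, n))           # raw bilateral trade shares (placeholder data)
M = rng.random(n) * 100              # observed mining per country
U = rng.random(n) * 100              # observed use per country

def r2(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def flow_matrix(weights):
    # One simple way to build F: scale each exporter's row of the trade pattern
    # by a non-negative weight (the quantity optimized in the paper is analogous).
    return np.exp(weights)[:, None] * trade

def loss(weights):
    F = flow_matrix(weights)
    # Fit F's row sums to mining and column sums to use, as in Table 1.
    return -(r2(M, F.sum(axis=1)) + r2(U, F.sum(axis=0)))

res = minimize(loss, x0=np.zeros(n), method="L-BFGS-B")
F_opt = flow_matrix(res.x)
print("R2 (mining):", round(r2(M, F_opt.sum(axis=1)), 3))
print("R2 (use):   ", round(r2(U, F_opt.sum(axis=0)), 3))
```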
D
In this paper, the equilibrium concept we focus on is Nash equilibrium. In a Nash equilibrium, no agent improves by deviating from its initially chosen strategy, assuming the other agents keep their strategies unchanged.
It is well known, for matching markets, that there is no stable rule for which truth-telling is a dominant strategy for all agents (see Dubins and Freedman, 1981; Roth, 1982, 1985; Sotomayor, 1996, 2012; Martínez et al., 2004; Manasero and
Oviedo, 2022, among others). That is, given the true preferences and a stable rule, at least one agent might benefit from misrepresenting her preferences regardless of what the rest of the agents state. Thus, stable matchings cannot be reached through dominant “truth-telling equilibria". The stability of equilibrium solutions under true preferences is expected to be affected when agents behave strategically.
Next, we show that any stable matching rule implements the individually rational correspondence in Nash equilibrium.6 This result generalizes the result first presented by Alcalde (1996) for the marriage market and then extended by Sotomayor (2012) for the many-to-one matching market with responsive preferences.
Wilson, 1971; Roth, 1984, 1985; Martínez et al., 2000; Alkan, 2002; Kojima, 2012, among others). The version of this theorem for a many-to-many matching market where all agents have substitutable choice functions satisfying LAD, that also applies in our setting, is presented in Alkan (2002) and states that each agent is matched with the same number of partners in every stable matching. Therefore $\mu_W(P^\star)=\mu_F(P^\star)=\mu$, i.e., $\mu$ is the unique stable matching under $(P^\star,C)$.
A
In the analysis, we use the following two definitions of homeownership. First, we consider an individual as a homeowner if either she ever had a mortgage or she is recorded as a homeowner according to Experian’s imputation. With this comprehensive homeownership definition, about 70% of individuals in our sample are homeowners (vs 40% if we consider only individuals with a positive open mortgage amount). Second, in an alternative definition of homeownership, we consider the origination of new mortgages. In this second case, we define a mortgage origination as a situation in which either the number of open mortgage trades in year $t$ is bigger than the number of open mortgage trades in year $t-1$ or the number of months since the most recent mortgage trade has been opened is lower than 12. Clearly, this definition would only capture the flow, and perhaps more importantly would miss cash purchases and wouldn’t distinguish between a new mortgage and a remortgage.
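As an illustration of the second definition, the following sketch flags mortgage originations in a small synthetic panel. The column names and data are assumptions about the credit-bureau layout, not the actual Experian variables.

```python
# Minimal sketch (illustrative, not the authors' code) of the second homeownership
# definition: flag a mortgage origination in year t if the count of open mortgage
# trades rose relative to t-1, or the most recent mortgage trade is <12 months old.
import pandas as pd

panel = pd.DataFrame({
    "person_id":            [1, 1, 1, 2, 2, 2],
    "year":                 [2009, 2010, 2011, 2009, 2010, 2011],
    "open_mortgage_trades": [0, 1, 1, 2, 2, 3],
    "months_since_recent_mortgage": [None, 5, 17, 40, 52, 3],
})

panel = panel.sort_values(["person_id", "year"])
prev = panel.groupby("person_id")["open_mortgage_trades"].shift(1)
panel["origination"] = (
    (panel["open_mortgage_trades"] > prev)
    | (panel["months_since_recent_mortgage"] < 12)
)
print(panel[["person_id", "year", "origination"]])
```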
No individual in our sample has a credit limit equal to zero (this is almost mechanical, as a credit line is needed to appear in the credit bureau). The same holds true for the total credit limit on open revolving trades (defined as the total credit limit on all open revolving trades with a credit limit larger than zero), which has an average of about 34,000 USD. Finally, the total balance on revolving trades has an average of about 12,000 USD in the whole dataset, and about 90% of the individuals in the data have a total balance on revolving trades greater than zero. Among those, the average recorded is about 16,000 USD.
In Appendices D, E, and G we focus on individuals who were hit by a soft default in 2010. Hence, in that case we drop from the sample all those who were hit by at least one harsh default between 2004 and 2010 (extremes included) and all those who were hit by a soft default between 2004 and 2009 (extremes included).
Next, we study the impact of a soft default on the probability that the total credit limit is lower than 10,000 USD (Figure 3, Panel (ii)), which is about the 10th percentile of credit limits in 2010, and on the revolving credit balance (Figure 3, Panel (iii)).
Table 1: Summary statistics of our main variables, 2010, balanced panel. Individuals who experienced a harsh default before or in the same year as a soft default in the sample period (i.e. from 2004 onwards) have been dropped. Credit scores lower than 300 (special codes) have been trimmed. Similarly, the top 1% of total credit limit, total balance on revolving trades, and total revolving limit have been trimmed.
D
$\sum_{j}\left(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{k})\,r_{jk}\right)$,
to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates.10 While it is often recommended that equal rankings be allowed under
Block Approval: Voters vote for any number of candidates.27 We use the same sincere strategy as for single-winner Approval Voting.
Minimax: Vote sincerely.22 While a viability-aware strategy was included for Minimax in Wolk et al. (2023),
Approval: Vote for all candidates with $u_j \geq EV$.
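A minimal sketch of the sincere Approval strategy described above, under the assumption that $EV$ is computed as the voter's mean utility over the candidates (the cited simulation code may define $EV$ differently):

```python
# Minimal sketch (an assumption about how the sincere strategy above could be coded,
# not the simulation code from the study): a voter approves every candidate whose
# utility is at least the voter's expected value EV over the field.
import numpy as np

def approval_ballot(utilities: np.ndarray) -> np.ndarray:
    """Return a 0/1 approval ballot: approve candidates with u_j >= EV."""
    ev = utilities.mean()          # EV taken here as the mean utility (assumption)
    return (utilities >= ev).astype(int)

utilities = np.array([0.9, 0.4, 0.55, 0.1])   # one voter's utilities for 4 candidates
print(approval_ballot(utilities))             # -> [1 0 1 0]
```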
C
Let $\mathcal{S}^{\epsilon}(\xi)$ denote the set of solutions whose objective values are at most a fraction $\epsilon$ worse than the optimal objective value $z^{*}(\xi)$, i.e., $\bm{v}^{\intercal}\bm{x}^{j}+\bm{w}^{\intercal}\bm{y}^{j}\geq(1-\epsilon)z^{*}(\xi)$ for all solutions $(\bm{x}^{j},\bm{y}^{j})\in\mathcal{S}^{\epsilon}(\xi)$. For the partitioning algorithm (Proposition 1) and for the iterative implementation of RSD (Algorithm 1), finding a distribution over the optimal and near-optimal solutions simply implies checking whether a solution belongs to $\mathcal{S}^{\epsilon}(\xi)$ instead of to $\mathcal{S}(\xi)$. When imposing that a solution belongs to $\mathcal{S}^{\epsilon}(\xi)$ in the other solution methods, we can simply replace the constraint that the objective value is equal to $z^{*}(\xi)$ by $\bm{v}^{\intercal}\bm{x}^{j}+\bm{w}^{\intercal}\bm{y}^{j}\geq(1-\epsilon)z^{*}(\xi)$. From an axiomatic point of view, the discussion in Section 6 remains unchanged.
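The following sketch illustrates the $\epsilon$-relaxation on a toy linear program: the problem is solved once to obtain $z^{*}(\xi)$, and the equality constraint on the objective is replaced by the inequality $\bm{v}^{\intercal}\bm{x}\geq(1-\epsilon)z^{*}(\xi)$ before optimizing a secondary criterion. The data and the secondary criterion are illustrative assumptions, not the paper's instances.

```python
# Minimal sketch (illustrative assumptions throughout) of the epsilon-relaxation:
# first compute the optimal value z*(xi), then require any candidate solution to
# satisfy v'x >= (1 - eps) * z*(xi) instead of v'x == z*(xi).
import numpy as np
from scipy.optimize import linprog

v = np.array([3.0, 2.0])                 # objective coefficients (placeholder)
A_ub = np.array([[1.0, 1.0]])            # one resource constraint: x1 + x2 <= 4
b_ub = np.array([4.0])
bounds = [(0, 3), (0, 3)]

# Step 1: optimal value z* (linprog minimizes, so negate v).
opt = linprog(-v, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
z_star = -opt.fun

# Step 2: near-optimal set S^eps -- here we pick, among near-optimal solutions,
# the one maximizing a secondary criterion w'x (e.g. one agent's utility).
eps = 0.05
w = np.array([0.0, 1.0])
A_ub2 = np.vstack([A_ub, -v])            # -v'x <= -(1-eps) z*  <=>  v'x >= (1-eps) z*
b_ub2 = np.append(b_ub, -(1 - eps) * z_star)
near = linprog(-w, A_ub=A_ub2, b_ub=b_ub2, bounds=bounds)
print("z* =", z_star, "near-optimal x =", near.x, "objective =", v @ near.x)
```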
In this section, we study how to extend the proposed methods to problems where agents have cardinal preferences over the solutions, i.e., they associate a real value to each of the optimal solutions.
Table 2 displays the required computational effort to find a partitioning of the agents, and to obtain each of the distributions. The computation time for finding an implementation of RSD for dichotomous preferences is, as expected, close to that of finding a single optimal solution, excluding the time to find a partitioning of the agents into $\mathcal{Y}$, $\mathcal{M}$, and $\mathcal{N}$. This is particularly true for kidney exchange, as we can apply the first implementation of RSD, which perturbs the objective function, whereas we apply the iterative implementation (Algorithm 1) for minimizing total tardiness. Combining this observation with the performance of RSD in Figure 1, this presents RSD-variants as a pragmatic method to control the selection probabilities of the optimal solutions. Furthermore, we observe that, overall, the computation times for the leximin distribution scale better than for the Nash distribution for the evaluated instances, with the Nash rule having a particularly high variance in solution times (see Appendix B).
In this section, we investigate the performance of the distribution rules that were discussed in Section 5. We study the proposed distribution rules for two distinct problems: kidney exchange, where the agents have dichotomous preferences, and the single-machine scheduling problem to minimize total tardiness, where the agents have cardinal preferences. We evaluate how the proposed distributions compare to the optimal Nash product and to the optimal minimum selection probability, and we compare the required computation times to obtain them.
For both problems, the performance of the leximin and the Nash distributions are close to the optimum on the criterion that they do not optimize. The performance of the other distribution rules, however, differs for the two studied applications.
C
Since it is based on actual trades, realized volatility (RV) is the ultimate measure of market volatility, although the latter is more often associated with implied volatility, most commonly measured by the VIX index [cboevix, cboevixhistoric] – the so-called market “fear index” – which tries to predict the RV of the S&P 500 index for the following month. Its model-independent evaluation [demeterfi1999guide] is based on options contracts, which are meant to predict future stock price fluctuations [whitepaper2003cboe]. The question of how well VIX predicts future realized volatility has been of great interest to researchers [christensen1998relation, vodenska2013understanding, kownatzki2016howgood, russon2017nonlinear]. Recent results [dashti2019implied, dashti2021realized] show that VIX is only marginally better than past RV in predicting future RV. In particular, it underestimates future low volatility and, most importantly, future high volatility. In fact, while both RV and VIX exhibit scale-free power-law tails, the distribution of the ratio of RV to VIX also has a power-law tail with a relatively small power exponent [dashti2019implied, dashti2021realized], meaning that VIX is incapable of predicting large surges in volatility.
While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution [pisarenko2012robust, janczura2012black], here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distributions [liu2023rethinking, mcdonald1995generalization]. As explained in the paragraph that follows (7), the central feature of mGB is that, after exhibiting a long power-law dependence, it eventually terminates at a finite value of the variable. GB2, on the other hand, has a power-law tail that extends mGB’s power-law dependence to infinity.
We fit the CCDF of the full RV distribution – for the entire time span discussed in Sec. 2 – using mGB (7) and GB2 (11). The fits are shown on the log-log scale in Figs. 4 – 13, together with the linear fit (LF) of the tails with $RV>40$. LF excludes the end points that visually may be nDK candidates, as prescribed in [pisarenko2012robust]. (In order to mimic LF, we also excluded those points in GB2 fits, which has minimal effect on the GB2 fits, including the slope and KS statistic.) To make the progression of the fits as a function of $n$ clearer, we included results for $n=5$ and $n=17$, in addition to the $n=1,7,21$ that we used in Sec. 2. Confidence intervals (CI) were evaluated per [janczura2012black], via inversion of the binomial distribution. $p$-values were evaluated in the framework of the U-test, which is discussed in [pisarenko2012robust] and is based on order statistics:
The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, with the increase of $n$ we observe the development of “potential” DK with statistically significant deviations upward from the straight line. This trend terminates with the data points returning to the straight line and then abruptly plunging into nDK territory.
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, large gains and losses habitually occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is well known, the largest upheavals in the stock market happened on, and close to, Black Monday (a precursor to the Savings and Loan crisis), the Tech Bubble, the Financial Crisis, and the COVID Pandemic. Plotted on a log-log scale, power-law tails of a distribution appear as a straight line. If the largest RV fall on the straight line, they can be classified as Black Swans (BS). If, however, they show statistically significant deviations upward or downward from this straight line, they can be classified as Dragon Kings (DK) [sornette2009, sornette2012dragon] or negative Dragon Kings (nDK), respectively [pisarenko2012robust].
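As a concrete illustration of the linear tail fit (LF) described above, the sketch below regresses the log of the empirical CCDF on log RV above a threshold, excluding the largest points that are candidate (negative) Dragon Kings. The synthetic data, threshold, and number of excluded points are assumptions.

```python
# Minimal sketch (assumptions: synthetic data, simple OLS) of the linear tail fit:
# on a log-log scale a power-law tail of the CCDF appears as a straight line, so we
# regress log CCDF on log RV for RV above a threshold, excluding the largest points
# that are candidate (negative) Dragon Kings.
import numpy as np

rng = np.random.default_rng(1)
rv = np.sort(rng.pareto(3.0, 5000) * 10 + 5)          # placeholder RV sample
ccdf = 1.0 - np.arange(1, rv.size + 1) / rv.size      # empirical CCDF

threshold, n_exclude = 40.0, 5                         # tail start; endpoints left out
mask = rv > threshold
x, y = np.log(rv[mask][:-n_exclude]), np.log(ccdf[mask][:-n_exclude])

slope, intercept = np.polyfit(x, y, 1)                 # straight-line (power-law) fit
print(f"tail exponent estimate: {-slope:.2f}")
```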
D
Zeithammer (2019) shows that, in a symmetric auction for a single object, bid functions are monotonic. As a consequence, the revenue equivalence theorem (Myerson, 1981) applies (intuitively, under different pricing rules, bidding behavior is also different, in a way that exactly offsets differences in the way prices are computed), and introducing soft floors in second-price auctions does not affect either the final allocation or the advertising service’s revenues. He then demonstrates by way of examples that, with asymmetric bid distributions, revenues in a second-price auction with a soft floor can be either higher or lower than in a standard second-price auction.
Zeithammer (2019) shows that, in a symmetric auction for a single object, bid functions are monotonic. As a consequence, the revenue equivalence theorem (Myerson, 1981) applies (intuitively, under different pricing rules, bidding behavior is also different, in a way that exactly offsets differences in the way prices are computed), and introducing soft floors in second-price auctions does not affect either the final allocation or the advertising service’s revenues. He then demonstrates by way of examples that, with asymmetric bid distributions, revenues in a second-price auction with a soft floor can be either higher or lower than in a standard second-price auction.
§4.3 considers asymmetric, single-query auctions. We complement the results of Zeithammer (2019) by considering specifications of bidder value distributions that are not amenable to closed-form equilibrium solutions. We show that, in a variety of cases of real-world relevance, soft floors yield lower revenues than second-price auctions with a suitably chosen (not necessarily optimal) reserve price.
As noted in §1, our contribution here is twofold. First, we demonstrate through simulations that, with multi-query targeting, different auction formats—and in particular auctions with a soft floor—can yield different revenues, even if bidder types are drawn from the same distribution. Second, restricting attention to single-query auctions, we consider a collection of asymmetric bid distributions that complement those analyzed by Zeithammer (2019), and do not allow for analytical equilibrium solutions. For such environments, we demonstrate how a second-price auction with a suitably chosen reserve price yields higher revenue than a soft-floor auction.
Our first key finding is that, as anticipated, the three auction formats yield different expected revenues. Soft-floor reserve prices can affect revenues; thus, our simulations provide some support for this common industry practice. Under Hedge, revenues are higher in soft-floor reserve price auctions than in first-price auctions, but second-price auctions perform best. With EXP3-IX, second-price auctions do not fare as well; the highest revenues come from first-price auctions, with soft-floor pricing behind.
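For intuition on these revenue comparisons, here is a minimal simulation sketch of a second-price auction with and without a reserve price under truthful bidding and an asymmetric pair of value distributions. The distributions and the reserve level are illustrative assumptions, and the sketch does not implement the soft-floor format or the Hedge/EXP3-IX learning dynamics.

```python
# Minimal sketch (not the paper's simulation): comparing expected revenue of a
# plain second-price auction with a second-price auction with reserve price r,
# under truthful bidding and an asymmetric pair of value distributions (assumed).
import numpy as np

rng = np.random.default_rng(2)
T = 200_000
v1 = rng.uniform(0, 1, T)           # bidder 1: U[0,1]
v2 = rng.beta(2, 5, T)              # bidder 2: Beta(2,5) -- asymmetric, assumption

def revenue_second_price(vals, reserve=0.0):
    vals = np.sort(vals, axis=0)[::-1]          # highest bid first, per auction
    highest, second = vals[0], vals[1]
    sold = highest >= reserve
    # Winner pays max(second-highest bid, reserve) whenever the item is sold.
    return np.where(sold, np.maximum(second, reserve), 0.0).mean()

bids = np.vstack([v1, v2])
print("second price, no reserve :", round(revenue_second_price(bids), 4))
print("second price, reserve=0.4:", round(revenue_second_price(bids, 0.4), 4))
```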
C
To intuitively understand why sharing contracts are well-suited to the research environment, note that sharing contracts create an encouragement effect by altering the degree of strategic complementarity or substitutability. In particular, for an environment similar to Keller et al. (2005) in which free-riding drives inefficiency, the strategic complementarity induced by a sharing contract can manufacture an offsetting encouragement effect. However, as a byproduct, this implies that the contracts considered in this paper can only alter the degree of strategic complementarity or substitutability across all agents uniformly.
There are $N$ agents $i\in\{1,2,\dots,N\}$ investigating a potential research breakthrough. The research idea is good or bad, which is drawn by Nature prior to the start of the game and unobserved by the agents. Formally, the quality of the research idea is the state of the world, $\omega\in\Omega:=\{\text{good},\text{bad}\}$. Nature draws the state of the world to be good with probability $p(0)$, which is the initial prior belief shared by the agents on the state of the world. Time is continuous, $t\in[0,\infty)$, and at every instant of time, each agent is endowed with a unit measure of a resource (effort) that it allocates over two projects, the status quo technology or the research process.
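For reference, here is a hedged sketch of the belief dynamics that are standard in conclusive good-news experimentation models of this kind (cf. Keller et al., 2005); the symbols $K_t$ and $\lambda$ are assumptions, since the passage does not state the law of motion explicitly.

```latex
% Hedged sketch (standard in exponential-bandit good-news models, cf. Keller,
% Rady and Cripps, 2005); K_t denotes aggregate research effort and \lambda the
% breakthrough arrival rate in the good state -- both symbols are assumptions.
% Absent a breakthrough, the common belief p(t) that the idea is good drifts down:
\[
  \dot{p}(t) \;=\; -\,\lambda\, K_t\, p(t)\bigl(1 - p(t)\bigr),
  \qquad K_t \;=\; \sum_{i=1}^{N} k_{i,t},
\]
% and p jumps to 1 at the first (conclusive) breakthrough.
```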
While the formal analysis was constrained to a specific model, this theoretical work offers important insights for thinking about research. First, the condition for efficiency when there are breakthrough payoff externalities is that breakthroughs must have a neutral impact on the losers. As much of the contest literature has focused on thinking about how to award winners, the analysis in this paper suggests that the key to understanding whether the amount of research conducted in such an environment is socially efficient is to consider how the losers weigh the arrival of the breakthrough against the status quo. Second, the existence of simple contracts that restore efficiency suggests a method for sharing rewards for joint projects. The main insight is that the guarantee (or what agents are promised independent of their effort choices) must match their status quo opportunity cost of research effort. Indeed, these sharing contracts restore efficiency in a self-enforcing way; provided a contract that awards winners and losers in the right way, it becomes unnecessary to observe or contract on the actions of the other agents. On the other hand, if it is impractical or infeasible to identify the winner/losers, it is also sufficient for contracts to condition on effort shares at the time of breakthrough.
As a consequence, the insights of this paper do not necessarily hold in environments in which the nature of inefficiency is heterogeneous between agents. For example, with asymmetric returns to research effort, the first-best solution takes a more complex form where some agents stop experimenting at different beliefs than others. The type of sharing contracts considered in this paper fail because the “winner” or discovery bonus can only be calibrated to the agent with the highest returns to effort and cannot ensure efficient behavior of the other agents; in those environments, stronger contracting instruments are necessary to restore efficiency. However, the results of the paper do extend to allow for some heterogeneity; namely, if agents have heterogeneity in the measure of resources available to invest in research (but identical returns to effort), the main insights still hold. While the analysis in this paper focuses on the conclusive good-news model of experimentation, the techniques do not rely on any specific features of the good-news model beyond the Markov assumptions.1 Preliminary calculations suggest that the insights also extend to other strategic experimentation environments, like the bad-news model of Keller & Rady (2015). This paper focuses on conclusive good-news primarily because arrivals of stochastic breakthroughs plausibly model the process of conducting research.
I show that one piece of information sufficient to restore efficiency is the identity of the researcher that makes the breakthrough (termed the “winner”). To show this, I first consider a full-information environment with heterogeneity in payoffs between the discoverer and non-discoverers. I find that absent contracting, equilibria are inefficient generically, except in a knife-edge case. This case requires payoff parameters to align in a specific way; the payoff externalities must be such that the continuation value of failing to make a discovery (“losing”) equals the flow opportunity cost of research. Intuitively, if the losers benefit too much from a discovery, strategic agents have an incentive to free-ride on the efforts of others; they inefficiently reduce research and give up on research projects too easily. On the other hand, if the losers suffer in the event of a discovery, strategic agents overexert effort on failing research endeavors because they are afraid of another agent making the discovery. The efficiency condition thus depends on the losers’ payoffs and the opportunity cost of research, but it notably does not depend on what the winner receives.
C
The points $Z_i$ are called knots.
not a natural cubic spline with knots at $Z_i$. Let $g$ be the
means that, unless $\widetilde{g}$ itself is a natural cubic spline
on $[a,b]$ is a cubic spline if two conditions are satisfied: on each of the
A cubic spline on $[a,b]$ is said to be a natural cubic spline if its second and
D
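To illustrate the natural cubic spline construction referenced in the preceding passage, the following sketch interpolates placeholder data through knots $Z_i$ using SciPy's natural boundary conditions (zero second derivatives at the endpoints, matching the "natural" conditions described above); the data values are assumptions.

```python
# Minimal sketch: a natural cubic spline interpolant through knots Z_i, built with
# SciPy (bc_type="natural" imposes zero second derivatives at the boundary knots).
import numpy as np
from scipy.interpolate import CubicSpline

Z = np.array([0.0, 1.0, 2.5, 4.0, 5.0])       # knots Z_i
g_at_Z = np.array([1.0, 2.2, 0.7, 1.5, 1.1])  # values of g at the knots (placeholder)

g = CubicSpline(Z, g_at_Z, bc_type="natural")
print(g(3.0), g(Z[2]))                         # evaluate between and at knots
print(g(Z[0], 2), g(Z[-1], 2))                 # second derivatives ~0 at the ends
```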
In short, the main conclusion from this type classification in the simultaneous public goods games literature is that conditional co-operation is the predominant pattern, free-riding is frequent, while unconditional co-operation is very rare. In a sequential discrete public goods experiment involving position uncertainty or position certainty with a partial lack of information on past contributions, we demonstrate that the majority of subjects exhibit altruistic or conditional cooperating behaviour. Approximately 25% of subjects behave as G&M-type individuals, while free-riding is found to be very rare, unless the position in the sequence is known.
25% of the subjects behave according to the theoretical predictions of Gallice and Monzón (2019). Allowing for the presence of alternative behavioural types among the remaining subjects, we find that the majority are classified as conditional co-operators, some are altruists, and very few behave in a free-riding way.
In short, the main conclusion from this type classification in the simultaneous public goods games literature is that conditional co-operation is the predominant pattern, free-riding is frequent, while unconditional co-operation is very rare. In a sequential discrete public goods experiment involving position uncertainty or position certainty with a partial lack of information on past contributions, we demonstrate that the majority of subjects exhibit altruistic or conditional cooperating behaviour. Approximately 25% of subjects behave as G&M-type individuals, while free-riding is found to be very rare, unless the position in the sequence is known.
The majority of the subjects behave in an altruistic or conditional co-operating way, around 25% of the subjects as G&M type, and free-riding is very rare.
2-4). Additionally, we investigate whether subjects align with the predictions of the G&M model (G&M type). We find that around 25% of the subjects behave according to the G&M model, the vast majority behaves in a conditional co-operating or altruistic way, and a non-significant proportion free rides. From a mechanism design point of view, we find that introducing uncertainty regarding the position, along with a constrained sample of previous actions (i.e. observing only what the immediately preceding player did), maximises the public good provision.
C
Similarly, individuals may choose to delegate in order to save on the information-gathering costs behind the observed choices, as in Sinclair (1990).
For example, investors may gain substantially if they delegate at low cost to high quality experts, but can be harmed if they chase the past performance of recently lucky but low quality experts, or if they choose an expert who maintains an excessively risky portfolio.
This small positive cost will ensure that a neo-classically rational subject would not delegate their decision in the trivial investment task.
Such investors may be rational in the usual neoclassical sense, but recognize their own cognitive limitations. If true expertise is available at moderate cost, it is rational for them to hire it.
reducing decision costs may be rational and beneficial for an investor, and (to the extent that it is consistent with investor well-being) increasing risk tolerance may also be beneficial.
C
Our work is related to the recent literature on sensitivity analysis for IPW estimators, which relates to our first application. A sensitivity analysis is an approach to partial identification that begins from assumptions that point-identify the causal estimand of interest and then considers increasing relaxations of those assumptions (Molinari, 2020). Our analysis is an extension of Dorn and Guo (2023)’s sharp characterization of bounds under Tan (2006)’s marginal sensitivity model. Tan (2022) and Frauen et al. (2023) previously extended this characterization to families that bound the Radon-Nikodym derivative of interest. We generalize these results to also include unbounded Radon-Nikodym derivatives, so that we can include a compact characterization of bounds under Masten and Poirier (2018)’s conditional c-dependence model as a special case. There is rich work in this literature under other sensitivity assumptions like $f$-divergences and Total Variation distance. These other assumptions also fit within our framework, because our target distribution constructions are independent of the $L_{\infty}$ sensitivity assumptions that we analyze.
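For reference, here is a hedged sketch of how the two sensitivity models mentioned above are commonly stated in the cited literature; this is not the paper's own notation, and the exact conditioning variables may differ in the original sources.

```latex
% Hedged sketch of the two sensitivity models referenced above, stated for a
% binary treatment T, covariates X, and unobservable U; the exact definitions in
% the cited papers may differ in conditioning variables and notation.
% Let e(x) = P(T = 1 | X = x) and e(x,u) = P(T = 1 | X = x, U = u).
\begin{align*}
  &\text{Marginal sensitivity model (Tan, 2006):}
    &\Lambda^{-1} \;\le\; \frac{e(x,u)\,\{1 - e(x)\}}{e(x)\,\{1 - e(x,u)\}} \;\le\; \Lambda
    \quad\text{for all } (x,u),\\[2pt]
  &\text{Conditional $c$-dependence (Masten and Poirier, 2018):}
    &\bigl|\, e(x,u) - e(x) \,\bigr| \;\le\; c
    \quad\text{for all } (x,u).
\end{align*}
```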
Our other two applications relate to existing work on sensitivity analysis. Our proposed sensitivity analysis for sharp regression discontinuity (RD) applies when data on the running variable fails tests for manipulation (McCrary, 2008; Otsu et al., 2013; Bugni and Canay, 2021). Our proposal nests both an exogeneity-type assumption and Gerard et al. (2020)’s bounds as special cases. There is other work in the RD context on partial identification bounds under manipulation (Rosenman et al., 2019; Ishihara and Sawada, 2020), but to our knowledge, our proposal is the first sensitivity analysis for manipulation. There are sensitivity analyses for exclusion failure with instrumental variables (Ramsahai, 2012; Van Kippersluis and Rietveld, 2018; Masten and Poirier, 2021; Freidling and Zhao, 2022), but to our knowledge our proposal is the first sensitivity analysis whose underlying assumptions are invariant to invertible transformations of variables.
Our work is related to the recent literature on sensitivity analysis for IPW estimators, which relates to our first application. A sensitivity analysis is an approach to partial identification that begins from assumptions that point-identify the causal estimand of interest and then considers increasing relaxations of those assumptions (Molinari, 2020). Our analysis is an extension of Dorn and Guo (2023)’s sharp characterization of bounds under Tan (2006)’s marginal sensitivity model. Tan (2022) and Frauen et al. (2023) previously extended this characterization to families that bound the Radon-Nikodym derivative of interest. We generalize these results to also include unbounded Radon-Nikodym derivatives, so that we can include a compact characterization of bounds under Masten and Poirier (2018)’s conditional c-dependence model as a special case. There is rich work in this literature under other sensitivity assumptions like $f$-divergences and Total Variation distance. These other assumptions also fit within our framework, because our target distribution constructions are independent of the $L_{\infty}$ sensitivity assumptions that we analyze.
This paper proposes a novel sensitivity analysis framework for identification failures for linear estimators. By placing bounds on the distributional distance between the observed distribution and a target distribution that identifies the causal parameter of interest, we obtain sharp and tractable analytic bounds. This framework generalizes existing sensitivity models in RD and IPW and motivates a new sensitivity model for IV exclusion failures. We provide new results on sharp and valid sensitivity analysis that allow even unbounded likelihood ratios. We illustrate how our framework and partial identification results contribute to three important applications, including new procedures for sensitivity analysis for the CATE under RD with manipulation and for instrumental variables with exclusion.
As McCrary (2008) notes, the RD assumption of no manipulation is testable. When McCrary’s test fails, Gerard et al. (2020) propose a worst-case bound on the conditional average treatment effect for non-manipulators: a Conditional Local Average Treatment Effect (CLATE). We show that a stronger restriction on manipulation choice allows us to obtain meaningful bounds on the more standard conditional average treatment effect (CATE), which to the best of our knowledge has been an open issue in the literature. Our framework provides a sensitivity analysis by nesting an unconfoundedness-type assumption and the Gerard et al. (2020) assumption as extreme cases.
A
Next, we analyze the maximality of the domain of preferences (including the domain of single-peaked preferences) for which a rule satisfying own-peak-onliness, efficiency, the equal division guarantee, and NOM exists. For the properties of efficiency, strategy-proofness, and symmetry, the single-plateaued domain is maximal (Ching and
Neme, 2001). In Theorem 3, we show that the single-plateaued domain is maximal for our properties as well. Therefore, even though replacing strategy-proofness with NOM greatly expands the family of admissible rules, the maximal domain of preferences involved remains basically unaltered.
Next, we analyze the maximality of the domain of preferences (including the domain of single-peaked preferences) for which a rule satisfying own-peak-onliness, efficiency, the equal division guarantee, and NOM exists. For the properties of efficiency, strategy-proofness, and symmetry, the single-plateaued domain is maximal (Ching and
Serizawa (1998) show that the single-plateaued domain is maximal for efficiency, symmetry, and strategy-proofness. As we have seen, when we weaken strategy-proofness to NOM (and explicitly invoke own-peak-onliness), the class of rules that also meet the equal division guarantee enlarges considerably. One may suspect that the maximal domain of preferences for these new properties enlarges as well. However, as we will see in this section, the single-plateaued domain is still maximal for these properties.15 Ching and
Serizawa (1998) show that the single-plateaued domain is the unique maximal domain for their properties. We show that this domain is one maximal domain for our properties. There could exist “pathological” maximal domains containing just a portion of the single-plateaued domain. Nevertheless, all of them have to be contained in the domain of convex preferences.
A