Dataset columns:
id: int64 (values from 28.8k to 36k)
category: string (3 classes)
text: string (lengths 44 to 3.03k)
title: string (lengths 10 to 236)
published: string (length 19)
author: string (lengths 6 to 943)
link: string (lengths 66 to 127)
primary_category: string (62 classes)
29,067
em
We present a class of one-to-one matching models with perfectly transferable utility. We discuss identification and inference in these separable models, and we show how their comparative statics are readily analyzed.
The Econometrics and Some Properties of Separable Matching Models
2021-02-04 14:55:10
Alfred Galichon, Bernard Salanié
http://dx.doi.org/10.1257/aer.p20171113, http://arxiv.org/abs/2102.02564v1, http://arxiv.org/pdf/2102.02564v1
econ.EM
29,068
em
The notion of hypothetical bias (HB) constitutes, arguably, the most fundamental issue in relation to the use of hypothetical survey methods. Whether or to what extent choices of survey participants and subsequent inferred estimates translate to real-world settings continues to be debated. While HB has been extensively studied in the broader context of contingent valuation, it is much less understood in relation to choice experiments (CE). This paper reviews the empirical evidence for HB in CE in various fields of applied economics and presents an integrative framework for how HB relates to external validity. Results suggest mixed evidence on the prevalence, extent and direction of HB as well as considerable context and measurement dependency. While HB is found to be an undeniable issue when conducting CEs, the empirical evidence on HB does not render CEs unable to represent real-world preferences. While health-related choice experiments often find negligible degrees of HB, experiments in consumer behaviour and transport domains suggest that significant degrees of HB are ubiquitous. Assessments of bias in environmental valuation studies provide mixed evidence. Also, across these disciplines many studies display HB in their total willingness to pay estimates and opt-in rates but not in their hypothetical marginal rates of substitution (subject to scale correction). Further, recent findings in psychology and brain imaging studies suggest neurocognitive mechanisms underlying HB that may explain some of the discrepancies and unexpected findings in the mainstream CE literature. The review also observes how the variety of operational definitions of HB prohibits consistent measurement of HB in CE. The paper further identifies major sources of HB and possible moderating factors. Finally, it explains how HB represents one component of the wider concept of external validity.
Hypothetical bias in stated choice experiments: Part I. Integrative synthesis of empirical evidence and conceptualisation of external validity
2021-02-05 03:45:50
Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar
http://dx.doi.org/10.1016/j.jocm.2021.100309, http://arxiv.org/abs/2102.02940v1, http://arxiv.org/pdf/2102.02940v1
econ.EM
29,069
em
This paper reviews methods of hypothetical bias (HB) mitigation in choice experiments (CEs). It presents a bibliometric analysis and summary of empirical evidence of their effectiveness. The paper follows the review of empirical evidence on the existence of HB presented in Part I of this study. While the number of CE studies has rapidly increased since 2010, the critical issue of HB has been studied in only a small fraction of CE studies. The present review includes both ex-ante and ex-post bias mitigation methods. Ex-ante bias mitigation methods include cheap talk, real talk, consequentiality scripts, solemn oath scripts, opt-out reminders, budget reminders, honesty priming, induced truth telling, indirect questioning, time to think and pivot designs. Ex-post methods include follow-up certainty calibration scales, respondent perceived consequentiality scales, and revealed-preference-assisted estimation. It is observed that the use of mitigation methods markedly varies across different sectors of applied economics. The existing empirical evidence points to their overall effectiveness in reducing HB, although there is some variation. The paper further discusses how each mitigation method can counter a certain subset of HB sources. Considering the prevalence of HB in CEs and the effectiveness of bias mitigation methods, it is recommended that implementation of at least one bias mitigation method (or a suitable combination where possible) becomes standard practice in conducting CEs. Mitigation method(s) suited to the particular application should be implemented to ensure that inferences and subsequent policy decisions are as free of HB as possible.
Hypothetical bias in stated choice experiments: Part II. Macro-scale analysis of literature and effectiveness of bias mitigation methods
2021-02-05 03:53:21
Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar
http://dx.doi.org/10.1016/j.jocm.2021.100322, http://arxiv.org/abs/2102.02945v1, http://arxiv.org/pdf/2102.02945v1
econ.EM
29,070
em
We provide a geometric formulation of the problem of identification of the matching surplus function and we show how the estimation problem can be solved by the introduction of a generalized entropy function over the set of matchings.
Identification of Matching Complementarities: A Geometric Viewpoint
2021-02-07 21:31:54
Alfred Galichon
http://dx.doi.org/10.1108/S0731-9053(2013)0000032005, http://arxiv.org/abs/2102.03875v1, http://arxiv.org/pdf/2102.03875v1
econ.EM
29,071
em
This paper studies inference in a randomized controlled trial (RCT) with covariate-adaptive randomization (CAR) and imperfect compliance with a binary treatment. In this context, we study inference on the local average treatment effect (LATE). As in Bugni et al. (2018,2019), CAR refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. In contrast to these papers, however, we allow participants of the RCT to endogenously decide to comply or not with the assigned treatment status. We study the properties of an estimator of the LATE derived from a "fully saturated" IV linear regression, i.e., a linear regression of the outcome on all indicators for all strata and their interaction with the treatment decision, with the latter instrumented with the treatment assignment. We show that the proposed LATE estimator is asymptotically normal, and we characterize its asymptotic variance in terms of primitives of the problem. We provide consistent estimators of the standard errors and asymptotically exact hypothesis tests. In the special case when the target proportion of units assigned to each treatment does not vary across strata, we can also consider two other estimators of the LATE, including the one based on the "strata fixed effects" IV linear regression, i.e., a linear regression of the outcome on indicators for all strata and the treatment decision, with the latter instrumented with the treatment assignment. Our characterization of the asymptotic variance of the LATE estimators allows us to understand the influence of the parameters of the RCT. We use this to propose strategies to minimize their asymptotic variance in a hypothetical RCT based on data from a pilot study. We illustrate the practical relevance of these results using a simulation study and an empirical application based on Dupas et al. (2018).
Inference under Covariate-Adaptive Randomization with Imperfect Compliance
2021-02-08 01:36:26
Federico A. Bugni, Mengsi Gao
http://arxiv.org/abs/2102.03937v3, http://arxiv.org/pdf/2102.03937v3
econ.EM
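For readers wanting the estimating equation in symbols, the "fully saturated" IV regression described in the abstract above can be sketched as follows (notation assumed here rather than taken from the paper: strata $s \in \mathcal{S}$, treatment decision $D_i$, assignment $A_i$):
$$ Y_i = \sum_{s\in\mathcal{S}} \beta_s\, \mathbb{1}\{S_i=s\} + \sum_{s\in\mathcal{S}} \gamma_s\, \mathbb{1}\{S_i=s\}\, D_i + u_i, $$
with each interaction $\mathbb{1}\{S_i=s\}\,D_i$ instrumented by $\mathbb{1}\{S_i=s\}\,A_i$; the "strata fixed effects" variant replaces the stratum-specific interactions with a single $D_i$ instrumented by $A_i$.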
29,079
em
Using results from convex analysis, we investigate a novel approach to identification and estimation of discrete choice models which we call the Mass Transport Approach (MTA). We show that the conditional choice probabilities and the choice-specific payoffs in these models are related in the sense of conjugate duality, and that the identification problem is a mass transport problem. Based on this, we propose a new two-step estimator for these models; interestingly, the first step of our estimator involves solving a linear program which is identical to the classic assignment (two-sided matching) game of Shapley and Shubik (1971). The application of convex-analytic tools to dynamic discrete choice models, and the connection with two-sided matching models, is new in the literature.
Duality in dynamic discrete-choice models
2021-02-08 18:50:03
Khai Xiang Chiong, Alfred Galichon, Matt Shum
http://dx.doi.org/10.3982/QE436, http://arxiv.org/abs/2102.06076v2, http://arxiv.org/pdf/2102.06076v2
econ.EM
29,072
em
In a landmark contribution to the structural vector autoregressions (SVARs) literature, Rubio-Ramirez, Waggoner, and Zha (2010, `Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference,' Review of Economic Studies) show a necessary and sufficient condition for equality restrictions to globally identify the structural parameters of an SVAR. The simplest form of the necessary and sufficient condition shown in Theorem 7 of Rubio-Ramirez et al. (2010) checks the number of zero restrictions and the ranks of particular matrices without requiring knowledge of the true value of the structural or reduced-form parameters. However, this note shows by counterexample that this condition is not sufficient for global identification. Analytical investigation of the counterexample clarifies why their sufficiency claim breaks down. The problem with the rank condition is that it allows for the possibility that restrictions are redundant, in the sense that one or more restrictions may be implied by other restrictions, in which case the implied restriction contains no identifying information. We derive a modified necessary and sufficient condition for SVAR global identification and clarify how it can be assessed in practice.
A note on global identification in structural vector autoregressions
2021-02-08 11:14:27
Emanuele Bacchiocchi, Toru Kitagawa
http://arxiv.org/abs/2102.04048v2, http://arxiv.org/pdf/2102.04048v2
econ.EM
29,073
em
We propose an easily implementable test of the validity of a set of theoretical restrictions on the relationship between economic variables, which do not necessarily identify the data generating process. The restrictions can be derived from any model of interactions, allowing censoring and multiple equilibria. When the restrictions are parameterized, the test can be inverted to yield confidence regions for partially identified parameters, thereby complementing other proposals, primarily Chernozhukov et al. [Chernozhukov, V., Hong, H., Tamer, E., 2007. Estimation and confidence regions for parameter sets in econometric models. Econometrica 75, 1243-1285].
A test of non-identifying restrictions and confidence regions for partially identified parameters
2021-02-08 15:01:13
Alfred Galichon, Marc Henry
http://dx.doi.org/10.1016/j.jeconom.2009.01.010, http://arxiv.org/abs/2102.04151v1, http://arxiv.org/pdf/2102.04151v1
econ.EM
29,074
em
A general framework is given to analyze the falsifiability of economic models based on a sample of their observable components. It is shown that, when the restrictions implied by the economic theory are insufficient to identify the unknown quantities of the structure, the duality of optimal transportation with zero-one cost function delivers interpretable and operational formulations of the hypothesis of specification correctness from which tests can be constructed to falsify the model.
Optimal transportation and the falsifiability of incompletely specified economic models
2021-02-08 15:25:46
Ivar Ekeland, Alfred Galichon, Marc Henry
http://dx.doi.org/10.1007/s00199-008-0432-y, http://arxiv.org/abs/2102.04162v2, http://arxiv.org/pdf/2102.04162v2
econ.EM
29,075
em
Despite their popularity, machine learning predictions are sensitive to potential unobserved predictors. This paper proposes a general algorithm that assesses how the omission of an unobserved variable with high explanatory power could affect the predictions of the model. Moreover, the algorithm extends the usage of machine learning from pointwise predictions to inference and sensitivity analysis. In the application, we show how the framework can be applied to data with inherent uncertainty, such as students' scores in a standardized assessment on financial literacy. First, using Bayesian Additive Regression Trees (BART), we predict students' financial literacy scores (FLS) for a subgroup of students with missing FLS. Then, we assess the sensitivity of predictions by comparing the predictions and performance of models with and without a highly explanatory synthetic predictor. We find no significant difference in the predictions and performances of the augmented (i.e., the model with the synthetic predictor) and original model. This evidence sheds light on the stability of the predictive model used in the application. The proposed methodology can be used, above and beyond our motivating empirical example, in a wide range of machine learning applications in social and health sciences.
Assessing Sensitivity of Machine Learning Predictions. A Novel Toolbox with an Application to Financial Literacy
2021-02-08 20:42:10
Falco J. Bargagli Stoffi, Kenneth De Beckker, Joana E. Maldonado, Kristof De Witte
http://arxiv.org/abs/2102.04382v1, http://arxiv.org/pdf/2102.04382v1
econ.EM
29,076
em
We propose a methodology for constructing confidence regions with partially identified models of general form. The region is obtained by inverting a test of internal consistency of the econometric structure. We develop a dilation bootstrap methodology to deal with sampling uncertainty without reference to the hypothesized economic structure. It requires bootstrapping the quantile process for univariate data and a novel generalization of the latter to higher dimensions. Once the dilation is chosen to control the confidence level, the unknown true distribution of the observed data can be replaced by the known empirical distribution and confidence regions can then be obtained as in Galichon and Henry (2011) and Beresteanu, Molchanov and Molinari (2011).
Dilation bootstrap
2021-02-08 17:13:37
Alfred Galichon, Marc Henry
http://dx.doi.org/10.1016/j.jeconom.2013.07.001, http://arxiv.org/abs/2102.04457v1, http://arxiv.org/pdf/2102.04457v1
econ.EM
29,077
em
This article proposes a generalized notion of extreme multivariate dependence between two random vectors which relies on the extremality of the cross-covariance matrix between these two vectors. Using a partial ordering on the cross-covariance matrices, we also generalize the notion of positive upper dependence. We then propose a means to quantify the strength of the dependence between two given multivariate series and to increase this strength while preserving the marginal distributions. This allows for the design of stress-tests of the dependence between two sets of financial variables, which can be useful in portfolio management or derivatives pricing.
Extreme dependence for multivariate data
2021-02-08 17:57:13
Damien Bosc, Alfred Galichon
http://dx.doi.org/10.1080/14697688.2014.886777, http://arxiv.org/abs/2102.04461v1, http://arxiv.org/pdf/2102.04461v1
econ.EM
29,078
em
Responding to the U.S. opioid crisis requires a holistic approach supported by evidence from linking and analyzing multiple data sources. This paper discusses how 20 available resources can be combined to answer pressing public health questions related to the crisis. It presents a network view based on U.S. geographical units and other standard concepts, crosswalked to communicate the coverage and interlinkage of these resources. These opioid-related datasets can be grouped by four themes: (1) drug prescriptions, (2) opioid related harms, (3) opioid treatment workforce, jobs, and training, and (4) drug policy. An interactive network visualization was created and is freely available online; it lets users explore key metadata, relevant scholarly works, and data interlinkages in support of informed decision making through data analysis.
Interactive Network Visualization of Opioid Crisis Related Data- Policy, Pharmaceutical, Training, and More
2021-02-10 20:51:48
Olga Scrivner, Elizabeth McAvoy, Thuy Nguyen, Tenzin Choeden, Kosali Simon, Katy Börner
http://arxiv.org/abs/2102.05596v1, http://arxiv.org/pdf/2102.05596v1
econ.EM
29,081
em
We consider structural vector autoregressions subject to 'narrative restrictions', which are inequality restrictions on functions of the structural shocks in specific periods. These restrictions raise novel problems related to identification and inference, and there is currently no frequentist procedure for conducting inference in these models. We propose a solution that is valid from both Bayesian and frequentist perspectives by: 1) formalizing the identification problem under narrative restrictions; 2) correcting a feature of the existing (single-prior) Bayesian approach that can distort inference; 3) proposing a robust (multiple-prior) Bayesian approach that is useful for assessing and eliminating the posterior sensitivity that arises in these models due to the likelihood having flat regions; and 4) showing that the robust Bayesian approach has asymptotic frequentist validity. We illustrate our methods by estimating the effects of US monetary policy under a variety of narrative restrictions.
Identification and Inference Under Narrative Restrictions
2021-02-12 14:38:55
Raffaella Giacomini, Toru Kitagawa, Matthew Read
http://arxiv.org/abs/2102.06456v1, http://arxiv.org/pdf/2102.06456v1
econ.EM
29,082
em
Weak instruments present a major setback to empirical work. This paper introduces an estimator that admits weak, uncorrelated, or mean-independent instruments that are non-independent of endogenous covariates. Relative to conventional instrumental variable methods, the proposed estimator weakens the relevance condition considerably without imposing a stronger exclusion restriction. Identification mainly rests on (1) a weak conditional median exclusion restriction imposed on pairwise differences in disturbances and (2) non-independence between covariates and instruments. Under mild conditions, the estimator is consistent and asymptotically normal. Monte Carlo experiments showcase the excellent performance of the estimator, and two empirical examples illustrate its practical utility.
A Distance Covariance-based Estimator
2021-02-14 00:55:09
Emmanuel Selorm Tsyawo, Abdul-Nasah Soale
http://arxiv.org/abs/2102.07008v1, http://arxiv.org/pdf/2102.07008v1
econ.EM
29,083
em
This paper contributes to the literature on hedonic models in two ways. First, it makes use of Queyranne's reformulation of a hedonic model in the discrete case as a network flow problem in order to provide a proof of existence and integrality of a hedonic equilibrium and efficient computation of hedonic prices. Second, elaborating on entropic methods developed in Galichon and Salanié (2014), this paper proposes a new identification strategy for hedonic models in a single market. This methodology allows one to introduce heterogeneities in both consumers' and producers' attributes and to recover producers' profits and consumers' utilities based on the observation of production and consumption patterns and the set of hedonic prices.
Entropy methods for identifying hedonic models
2021-02-15 14:49:21
Arnaud Dupuy, Alfred Galichon, Marc Henry
http://dx.doi.org/10.1007/s11579-014-0125-1, http://arxiv.org/abs/2102.07491v1, http://arxiv.org/pdf/2102.07491v1
econ.EM
29,084
em
Unlike other techniques of causality inference, the use of valid instrumental variables can deal with unobserved sources of variable errors, variable omissions, and sampling bias, and still arrive at consistent estimates of average treatment effects. The only problem is to find the valid instruments. Using Pearl's (2009) definition of valid instrumental variables, a formal condition for validity can be stated for variables in generalized linear causal models. The condition can be applied in two different ways: as a tool for constructing valid instruments, or as a foundation for testing whether an instrument is valid. When perfectly valid instruments are not found, the squared bias of the IV-estimator induced by an imperfectly valid instrument -- estimated with bootstrapping -- can be added to its empirical variance in a mean-square-error-like reliability measure.
Constructing valid instrumental variables in generalized linear causal models from directed acyclic graphs
2021-02-16 13:09:15
Øyvind Hoveid
http://arxiv.org/abs/2102.08056v1, http://arxiv.org/pdf/2102.08056v1
econ.EM
29,085
em
We propose a general framework for the specification testing of continuous treatment effect models. We assume a general residual function, which includes the average and quantile treatment effect models as special cases. The null models are identified under the unconfoundedness condition and contain a nonparametric weighting function. We propose a test statistic for the null model in which the weighting function is estimated by solving an expanding set of moment equations. We establish the asymptotic distributions of our test statistic under the null hypothesis and under fixed and local alternatives. The proposed test statistic is shown to be more efficient than that constructed from the true weighting function and can detect local alternatives that deviate from the null models at the rate of $O(N^{-1/2})$. A simulation method is provided to approximate the null distribution of the test statistic. Monte Carlo simulations show that our test exhibits a satisfactory finite-sample performance, and an application shows its practical value.
A Unified Framework for Specification Tests of Continuous Treatment Effect Models
2021-02-16 13:18:52
Wei Huang, Oliver Linton, Zheng Zhang
http://arxiv.org/abs/2102.08063v2, http://arxiv.org/pdf/2102.08063v2
econ.EM
29,086
em
In light of the increasing interest in transforming fixed-route public transit (FRT) services into on-demand transit (ODT) services, there is a strong need for a comprehensive evaluation of the effects of this shift on users. Such an analysis can help municipalities and service providers to design and operate more convenient, attractive, and sustainable transit solutions. To understand user preferences, we developed three hybrid choice models: integrated choice and latent variable (ICLV), latent class (LC), and latent class integrated choice and latent variable (LC-ICLV) models. We used these models to analyze public transit users' preferences in Belleville, Ontario, Canada. The hybrid choice models were estimated using a rich dataset that combined the actual level-of-service attributes obtained from Belleville's ODT service and self-reported usage behaviour obtained from a revealed preference survey of ODT users. The latent class models divided the users into two groups with different travel behaviour and preferences. The results showed that captive users' preference for the ODT service was significantly affected by the number of unassigned trips, in-vehicle time, and main travel mode before the ODT service started. On the other hand, non-captive users' service preference was significantly affected by the Time Sensitivity and Online Service Satisfaction latent variables, as well as the performance of the ODT service and trip purpose. These findings highlight the importance of improving the reliability and performance of the ODT service and outline directions for reducing operational costs by updating the required fleet size and assigning more vehicles to work-related trips.
On-Demand Transit User Preference Analysis using Hybrid Choice Models
2021-02-16 19:27:50
Nael Alsaleh, Bilal Farooq, Yixue Zhang, Steven Farber
http://arxiv.org/abs/2102.08256v2, http://arxiv.org/pdf/2102.08256v2
econ.EM
29,087
em
This article discusses tests for nonlinear cointegration in the presence of variance breaks. We build on cointegration test approaches under heteroskedasticity (Cavaliere and Taylor, 2006, Journal of Time Series Analysis) and for nonlinearity (Choi and Saikkonen, 2010, Econometric Theory) to propose a bootstrap test and prove its consistency. A Monte Carlo study shows the approach to have good finite sample properties. We provide an empirical application to the environmental Kuznets curve (EKC), finding that the cointegration test provides little evidence for the EKC hypothesis. Additionally, we examine the nonlinear relation between US money and the interest rate, finding that our test does not reject the null of a smooth transition cointegrating relation.
Testing for Nonlinear Cointegration under Heteroskedasticity
2021-02-17 18:14:19
Christoph Hanck, Till Massing
http://arxiv.org/abs/2102.08809v2, http://arxiv.org/pdf/2102.08809v2
econ.EM
29,088
em
This paper provides a user's guide to the general theory of approximate randomization tests developed in Canay, Romano, and Shaikh (2017) when specialized to linear regressions with clustered data. An important feature of the methodology is that it applies to settings in which the number of clusters is small -- even as small as five. We provide a step-by-step algorithmic description of how to implement the test and construct confidence intervals for the parameter of interest. In doing so, we additionally present three novel results concerning the methodology: we show that the method admits an equivalent implementation based on weighted scores; we show the test and confidence intervals are invariant to whether the test statistic is studentized or not; and we prove convexity of the confidence intervals for scalar parameters. We also articulate the main requirements underlying the test, emphasizing in particular common pitfalls that researchers may encounter. Finally, we illustrate the use of the methodology with two applications that further illuminate these points. The companion R and Stata packages facilitate the implementation of the methodology and the replication of the empirical exercises.
On the implementation of Approximate Randomization Tests in Linear Models with a Small Number of Clusters
2021-02-18 01:32:52
Yong Cai, Ivan A. Canay, Deborah Kim, Azeem M. Shaikh
http://arxiv.org/abs/2102.09058v4, http://arxiv.org/pdf/2102.09058v4
econ.EM
29,089
em
We propose a novel structural estimation framework in which we train a surrogate of an economic model with deep neural networks. Our methodology alleviates the curse of dimensionality and speeds up the evaluation and parameter estimation by orders of magnitude, which significantly enhances one's ability to conduct analyses that require frequent parameter re-estimation. As an empirical application, we compare two popular option pricing models (the Heston and the Bates model with double-exponential jumps) against a non-parametric random forest model. We document that: a) the Bates model produces better out-of-sample pricing on average, but both structural models fail to outperform random forest for large areas of the volatility surface; b) random forest is more competitive at short horizons (e.g., 1-day), for short-dated options (with less than 7 days to maturity), and on days with poor liquidity; c) both structural models outperform random forest in out-of-sample delta hedging; d) the Heston model's relative performance has deteriorated significantly after the 2008 financial crisis.
Deep Structural Estimation: With an Application to Option Pricing
2021-02-18 11:15:47
Hui Chen, Antoine Didisheim, Simon Scheidegger
http://arxiv.org/abs/2102.09209v1, http://arxiv.org/pdf/2102.09209v1
econ.EM
29,090
em
We propose a method for constructing confidence intervals that account for many forms of spatial correlation. The interval has the familiar `estimator plus or minus a standard error times a critical value' form, but we propose new methods for constructing the standard error and the critical value. The standard error is constructed using population principal components from a given `worst-case' spatial covariance model. The critical value is chosen to ensure coverage in a benchmark parametric model for the spatial correlations. The method is shown to control coverage in large samples whenever the spatial correlation is weak, i.e., with average pairwise correlations that vanish as the sample size gets large. We also provide results on correct coverage in a restricted but nonparametric class of strong spatial correlations, as well as on the efficiency of the method. In a design calibrated to match economic activity in U.S. states the method outperforms previous suggestions for spatially robust inference about the population mean.
Spatial Correlation Robust Inference
2021-02-18 17:04:43
Ulrich K. Müller, Mark W. Watson
http://arxiv.org/abs/2102.09353v1, http://arxiv.org/pdf/2102.09353v1
econ.EM
29,091
em
This paper aims to provide reliable estimates for the COVID-19 contact rate of a Susceptible-Infected-Recovered (SIR) model. From observable data on confirmed, recovered, and deceased cases, a noisy measurement for the contact rate can be constructed. To filter out measurement errors and seasonality, a novel unobserved components (UC) model is set up. It specifies the log contact rate as a latent, fractionally integrated process of unknown integration order. The fractional specification reflects key characteristics of aggregate social behavior such as strong persistence and gradual adjustments to new information. A computationally simple modification of the Kalman filter is introduced and is termed the fractional filter. It allows one to estimate UC models with richer long-run dynamics, and provides a closed-form expression for the prediction error of UC models. Based on the latter, a conditional-sum-of-squares (CSS) estimator for the model parameters is set up that is shown to be consistent and asymptotically normally distributed. The resulting contact rate estimates for several countries are well in line with the chronology of the pandemic, and allow us to identify different contact regimes generated by policy interventions. As the fractional filter is shown to provide precise contact rate estimates at the end of the sample, it bears great potential for monitoring the pandemic in real time.
Monitoring the pandemic: A fractional filter for the COVID-19 contact rate
2021-02-19 20:55:45
Tobias Hartl
http://arxiv.org/abs/2102.10067v1, http://arxiv.org/pdf/2102.10067v1
econ.EM
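As context for the "noisy measurement" mentioned in the abstract above, a generic discrete-time SIR back-out of the contact rate (an illustration under assumed notation, not necessarily the paper's exact construction) is
$$ \Delta C_t \approx \beta_t \frac{S_t I_t}{N} \quad\Rightarrow\quad \hat\beta_t = \frac{N\,\Delta C_t}{S_t I_t}, $$
where $N$ is the population size, $\Delta C_t$ the new confirmed cases, and $S_t$, $I_t$ the susceptible and active-infection counts reconstructed from confirmed, recovered, and deceased cases; $\log\hat\beta_t$ is then the observable fed into the unobserved components model.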
29,092
em
A novel approach to price indices, leading to an innovative solution in both a multi-period and a multilateral framework, is presented. The index turns out to be the generalized least squares solution of a regression model linking values and quantities of the commodities. The index reference basket, which is the union of the intersections of the baskets of all pairs of countries/periods, has broader coverage than that of extant indices. The properties of the index are investigated and updating formulas established. Applications to both real and simulated data provide evidence of the index's better performance in comparison with extant alternatives.
A Novel Multi-Period and Multilateral Price Index
2021-02-21 09:44:18
Consuelo Rubina Nava, Maria Grazia Zoia
http://arxiv.org/abs/2102.10528v1, http://arxiv.org/pdf/2102.10528v1
econ.EM
29,094
em
Here, we have analysed a GARCH(1,1) model with the aim of fitting higher order moments for different companies' stock prices. When we assume a Gaussian conditional distribution, we fail to capture any empirical data when fitting the first three even moments of financial time series. We show instead that a double Gaussian conditional probability distribution better captures the higher order moments of the data. To demonstrate this point, we construct regions (phase diagrams), in the fourth and sixth order standardised moment space, where a GARCH(1,1) model can be used to fit these moments and compare them with the corresponding moments from empirical data for different sectors of the economy. We found that the ability of the GARCH model with a double Gaussian conditional distribution to fit higher order moments is dictated by the time window our data spans. We can only fit data collected within specific time window lengths and only with certain parameters of the conditional double Gaussian distribution. In order to incorporate the non-stationarity of financial series, we assume that the parameters of the GARCH model have time dependence.
Non-stationary GARCH modelling for fitting higher order moments of financial series within moving time windows
2021-02-23 14:05:23
Luke De Clerk, Sergey Savel'ev
http://arxiv.org/abs/2102.11627v4, http://arxiv.org/pdf/2102.11627v4
econ.EM
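For reference, the GARCH(1,1) recursion discussed in the abstract above is, in standard notation,
$$ r_t = \sigma_t \varepsilon_t, \qquad \sigma_t^2 = \omega + \alpha\, r_{t-1}^2 + \beta\, \sigma_{t-1}^2, $$
with $\varepsilon_t \sim N(0,1)$ in the single-Gaussian case; the "double Gaussian" variant draws $\varepsilon_t$ from a two-component mixture $p\,N(0,\sigma_1^2) + (1-p)\,N(0,\sigma_2^2)$ (normalizing the mixture to unit variance is an assumption made here for illustration, not a statement of the paper's exact parameterization).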
29,095
em
We propose a computationally feasible way of deriving the identified features of models with multiple equilibria in pure or mixed strategies. It is shown that in the case of Shapley regular normal form games, the identified set is characterized by the inclusion of the true data distribution within the core of a Choquet capacity, which is interpreted as the generalized likelihood of the model. In turn, this inclusion is characterized by a finite set of inequalities and efficient and easily implementable combinatorial methods are described to check them. In all normal form games, the identified set is characterized in terms of the value of a submodular or convex optimization program. Efficient algorithms are then given and compared to check inclusion of a parameter in this identified set. The latter are illustrated with family bargaining games and oligopoly entry games.
Set Identification in Models with Multiple Equilibria
2021-02-24 15:20:11
Alfred Galichon, Marc Henry
http://dx.doi.org/10.1093/restud/rdr008, http://arxiv.org/abs/2102.12249v1, http://arxiv.org/pdf/2102.12249v1
econ.EM
29,096
em
We provide a test for the specification of a structural model without identifying assumptions. We show the equivalence of several natural formulations of correct specification, which we take as our null hypothesis. From a natural empirical version of the latter, we derive a Kolmogorov-Smirnov statistic for Choquet capacity functionals, which we use to construct our test. We derive the limiting distribution of our test statistic under the null, and show that our test is consistent against certain classes of alternatives. When the model is given in parametric form, the test can be inverted to yield confidence regions for the identified parameter set. The approach can be applied to the estimation of models with sample selection, censored observables and to games with multiple equilibria.
Inference in Incomplete Models
2021-02-24 15:39:52
Alfred Galichon, Marc Henry
http://arxiv.org/abs/2102.12257v1, http://arxiv.org/pdf/2102.12257v1
econ.EM
29,097
em
This paper estimates the break point for large-dimensional factor models with a single structural break in factor loadings at a common unknown date. First, we propose a quasi-maximum likelihood (QML) estimator of the change point based on the second moments of factors, which are estimated by principal component analysis. We show that the QML estimator performs consistently when the covariance matrix of the pre- or post-break factor loading, or both, is singular. When the loading matrix undergoes a rotational type of change while the number of factors remains constant over time, the QML estimator incurs a stochastically bounded estimation error. In this case, we establish an asymptotic distribution of the QML estimator. The simulation results validate the feasibility of this estimator when used in finite samples. In addition, we demonstrate empirical applications of the proposed method by applying it to estimate the break points in a U.S. macroeconomic dataset and a stock return dataset.
Quasi-maximum likelihood estimation of break point in high-dimensional factor models
2021-02-25 06:43:18
Jiangtao Duan, Jushan Bai, Xu Han
http://arxiv.org/abs/2102.12666v3, http://arxiv.org/pdf/2102.12666v3
econ.EM
29,098
em
We propose a new control function (CF) method to estimate a binary response model in a triangular system with multiple unobserved heterogeneities. The CFs are the expected values of the heterogeneity terms in the reduced form equations conditional on the histories of the endogenous and the exogenous variables. The method requires weaker restrictions compared to CF methods with similar imposed structures. If the support of endogenous regressors is large, average partial effects are point-identified even when instruments are discrete. Bounds are provided when the support assumption is violated. An application and Monte Carlo experiments compare several alternative methods with ours.
A Control Function Approach to Estimate Panel Data Binary Response Model
2021-02-25 18:26:41
Amaresh K Tiwari
http://dx.doi.org/10.1080/07474938.2021.1983328, http://arxiv.org/abs/2102.12927v2, http://arxiv.org/pdf/2102.12927v2
econ.EM
29,099
em
This paper proposes an empirical method to implement the recentered influence function (RIF) regression of Firpo, Fortin and Lemieux (2009), a relevant method to study the effect of covariates on many statistics beyond the mean. In empirically relevant situations where the influence function is not available or difficult to compute, we suggest using the sensitivity curve (Tukey, 1977) as a feasible alternative. This may be computationally cumbersome when the sample size is large. The relevance of the proposed strategy derives from the fact that, under general conditions, the sensitivity curve converges in probability to the influence function. In order to save computational time, we propose to use a nonparametric cubic-spline method for a random subsample and then to interpolate to the remaining cases where it was not computed. Monte Carlo simulations show good finite sample properties. We illustrate the proposed estimator with an application to the polarization index of Duclos, Esteban and Ray (2004).
RIF Regression via Sensitivity Curves
2021-12-02 20:24:43
Javier Alejo, Gabriel Montes-Rojas, Walter Sosa-Escudero
http://arxiv.org/abs/2112.01435v1, http://arxiv.org/pdf/2112.01435v1
econ.EM
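The sensitivity curve invoked in the abstract above has a standard definition: for a statistic $T$ and a sample $x_1,\dots,x_{n-1}$,
$$ SC_n(y) = n\left[ T(x_1,\dots,x_{n-1},y) - T(x_1,\dots,x_{n-1}) \right], $$
which under regularity conditions converges to the influence function $IF(y;T,F)$, the object needed to form the recentered influence function $RIF(y;T,F) = T(F) + IF(y;T,F)$.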
29,119
em
In less than 50 years, startups have become a major component of innovation and economic growth. An important feature of the startup phenomenon has been the wealth created through equity in startups for all stakeholders. These include the startup founders, the investors, and also the employees through the stock-option mechanism and universities through licenses of intellectual property. Within the employee group, the allocation to important managers such as the chief executive, vice-presidents and other officers, and independent board members is also analyzed. This report analyzes how equity was allocated in more than 400 startups, most of which had filed for an initial public offering. The author aims to inform a general audience about best practices in equity splits, in particular in Silicon Valley, the central place for startup innovation.
Equity in Startups
2017-11-02 12:33:44
Hervé Lebret
http://arxiv.org/abs/1711.00661v1, http://arxiv.org/pdf/1711.00661v1
econ.EM
29,100
em
We study estimation of factor models in a fixed-T panel data setting and significantly relax the common correlated effects (CCE) assumptions pioneered by Pesaran (2006) and used in dozens of papers since. In the simplest case, we model the unobserved factors as functions of the cross-sectional averages of the explanatory variables and show that this is implied by Pesaran's assumptions when the number of factors does not exceed the number of explanatory variables. Our approach allows discrete explanatory variables and flexible functional forms in the covariates. Moreover, it extends to a framework that easily incorporates general functions of cross-sectional moments, in addition to heterogeneous intercepts and time trends. Our proposed estimators include Pesaran's pooled correlated common effects (CCEP) estimator as a special case. We also show that in the presence of heterogeneous slopes our estimator is consistent under assumptions much weaker than those previously used. We derive the fixed-T asymptotic normality of a general estimator and show how to adjust for estimation of the population moments in the factor loading equation.
Simple Alternatives to the Common Correlated Effects Model
2021-12-02 21:37:52
Nicholas L. Brown, Peter Schmidt, Jeffrey M. Wooldridge
http://dx.doi.org/10.13140/RG.2.2.12655.76969/1, http://arxiv.org/abs/2112.01486v1, http://arxiv.org/pdf/2112.01486v1
econ.EM
29,101
em
Until recently, there has been a consensus that clinicians should condition patient risk assessments on all observed patient covariates with predictive power. The broad idea is that knowing more about patients enables more accurate predictions of their health risks and, hence, better clinical decisions. This consensus has recently unraveled with respect to a specific covariate, namely race. There have been increasing calls for race-free risk assessment, arguing that using race to predict patient outcomes contributes to racial disparities and inequities in health care. Writers calling for race-free risk assessment have not studied how it would affect the quality of clinical decisions. Considering the matter from the patient-centered perspective of medical economics yields a disturbing conclusion: Race-free risk assessment would harm patients of all races.
Patient-Centered Appraisal of Race-Free Clinical Risk Assessment
2021-12-03 02:37:07
Charles F. Manski
http://arxiv.org/abs/2112.01639v2, http://arxiv.org/pdf/2112.01639v2
econ.EM
29,102
em
We develop a non-parametric multivariate time series model that remains agnostic on the precise relationship between a (possibly) large set of macroeconomic time series and their lagged values. The main building block of our model is a Gaussian process prior on the functional relationship that determines the conditional mean of the model, hence the name Gaussian process vector autoregression (GP-VAR). A stochastic volatility specification provides additional flexibility and controls for heteroskedasticity. Markov chain Monte Carlo (MCMC) estimation is carried out through an efficient and scalable algorithm which can handle large models. The GP-VAR is illustrated by means of simulated data and in a forecasting exercise with US data. Moreover, we use the GP-VAR to analyze the effects of macroeconomic uncertainty, with a particular emphasis on time variation and asymmetries in the transmission mechanisms.
Gaussian Process Vector Autoregressions and Macroeconomic Uncertainty
2021-12-03 19:16:10
Niko Hauzenberger, Florian Huber, Massimiliano Marcellino, Nico Petz
http://arxiv.org/abs/2112.01995v3, http://arxiv.org/pdf/2112.01995v3
econ.EM
29,103
em
Despite the widespread use of graphs in empirical research, little is known about readers' ability to process the statistical information they are meant to convey ("visual inference"). We study visual inference within the context of regression discontinuity (RD) designs by measuring how accurately readers identify discontinuities in graphs produced from data generating processes calibrated on 11 published papers from leading economics journals. First, we assess the effects of different graphical representation methods on visual inference using randomized experiments. We find that bin widths and fit lines have the largest impacts on whether participants correctly perceive the presence or absence of a discontinuity. Our experimental results allow us to make evidence-based recommendations to practitioners, and we suggest using small bins with no fit lines as a starting point to construct RD graphs. Second, we compare visual inference on graphs constructed using our preferred method with widely used econometric inference procedures. We find that visual inference achieves similar or lower type I error (false positive) rates and complements econometric inference.
Visual Inference and Graphical Representation in Regression Discontinuity Designs
2021-12-06 18:02:14
Christina Korting, Carl Lieberman, Jordan Matsudaira, Zhuan Pei, Yi Shen
http://arxiv.org/abs/2112.03096v2, http://arxiv.org/pdf/2112.03096v2
econ.EM
29,104
em
The `paradox of progress' is an empirical regularity that associates more education with larger income inequality. Two driving and competing factors behind this phenomenon are the convexity of the `Mincer equation' (that links wages and education) and the heterogeneity in its returns, as captured by quantile regressions. We propose a joint least-squares and quantile regression statistical framework to derive a decomposition in order to evaluate the relative contribution of each explanation. The estimators are based on the `functional derivative' approach. We apply the proposed decomposition strategy to the case of Argentina from 1992 to 2015.
A decomposition method to evaluate the `paradox of progress' with evidence for Argentina
2021-12-07 20:20:26
Javier Alejo, Leonardo Gasparini, Gabriel Montes-Rojas, Walter Sosa-Escudero
http://arxiv.org/abs/2112.03836v1, http://arxiv.org/pdf/2112.03836v1
econ.EM
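To make the two channels in the abstract above concrete, the textbook Mincer equation (standard notation assumed here, not taken from the paper) is
$$ \log w_i = \beta_0 + \beta_1\, educ_i + \beta_2\, educ_i^2 + \beta_3\, exper_i + \beta_4\, exper_i^2 + u_i, $$
where convexity ($\beta_2 > 0$) mechanically raises wage dispersion as schooling rises, while heterogeneous returns show up as a schooling coefficient $\beta(\tau)$ that varies across quantiles $\tau$ in the conditional quantile model $Q_{\log w}(\tau \mid x) = x'\beta(\tau)$.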
29,105
em
Linear regressions with period and group fixed effects are widely used to estimate policies' effects: 26 of the 100 most cited papers published by the American Economic Review from 2015 to 2019 estimate such regressions. It has recently been shown that those regressions may produce misleading estimates, if the policy's effect is heterogeneous between groups or over time, as is often the case. This survey reviews a fast-growing literature that documents this issue, and that proposes alternative estimators robust to heterogeneous effects. We use those alternative estimators to revisit Wolfers (2006).
Two-Way Fixed Effects and Differences-in-Differences with Heterogeneous Treatment Effects: A Survey
2021-12-08 23:14:26
Clément de Chaisemartin, Xavier D'Haultfœuille
http://arxiv.org/abs/2112.04565v6, http://arxiv.org/pdf/2112.04565v6
econ.EM
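The regression family surveyed in the abstract above is the two-way fixed effects specification, which in assumed notation for group $g$ and period $t$ reads
$$ Y_{g,t} = \alpha_g + \lambda_t + \beta\, D_{g,t} + \varepsilon_{g,t}; $$
the literature reviewed shows that $\hat\beta$ is a weighted average of cell-specific treatment effects whose weights can be negative under effect heterogeneity, which is what the robust alternative estimators are designed to avoid.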
29,126
em
Some empirical results are more likely to be published than others. Such selective publication leads to biased estimates and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.
Identification of and correction for publication bias
2017-11-28 22:45:36
Isaiah Andrews, Maximilian Kasy
http://arxiv.org/abs/1711.10527v1, http://arxiv.org/pdf/1711.10527v1
econ.EM
29,106
em
I suggest an enhancement of the procedure of Chiong, Hsieh, and Shum (2017) for calculating bounds on counterfactual demand in semiparametric discrete choice models. Their algorithm relies on a system of inequalities indexed by cycles of a large number $M$ of observed markets and hence seems to require computationally infeasible enumeration of all such cycles. I show that such enumeration is unnecessary because solving the "fully efficient" inequality system exploiting cycles of all possible lengths $K=1,\dots,M$ can be reduced to finding the length of the shortest path between every pair of vertices in a complete bidirected weighted graph on $M$ vertices. The latter problem can be solved using the Floyd--Warshall algorithm with computational complexity $O\left(M^3\right)$, which takes only seconds to run even for thousands of markets. Monte Carlo simulations illustrate the efficiency gain from using cycles of all lengths, which turns out to be positive, but small.
Efficient counterfactual estimation in semiparametric discrete choice models: a note on Chiong, Hsieh, and Shum (2017)
2021-12-09 03:49:56
Grigory Franguridi
http://arxiv.org/abs/2112.04637v1, http://arxiv.org/pdf/2112.04637v1
econ.EM
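A minimal sketch of the all-pairs shortest-path step described in the abstract above, using the Floyd-Warshall algorithm on an $M \times M$ matrix of edge weights (variable names and the toy input are illustrative, not the paper's code):

import numpy as np

def floyd_warshall(weights):
    # weights: (M, M) array of directed edge weights with zero diagonal.
    # Returns the (M, M) matrix of shortest-path lengths in O(M^3) time.
    dist = np.array(weights, dtype=float)
    for k in range(dist.shape[0]):
        # Allow paths that pass through intermediate vertex k.
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

# Toy example with M = 3 "markets" and arbitrary illustrative weights.
W = np.array([[0.0, 2.0, 9.0],
              [1.0, 0.0, 4.0],
              [6.0, 3.0, 0.0]])
print(floyd_warshall(W))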
29,107
em
This study addresses house price prediction model selection in Tehran City based on the area between the Lorenz curve (LC) and the concentration curve (CC) of the predicted price, using 206,556 observed transactions over the period from March 21, 2018, to February 19, 2021. Several methods, such as generalized linear models (GLM), recursive partitioning and regression trees (RPART), random forest (RF) regression models, and neural network (NN) models, were examined for house price prediction. We used a randomly chosen 90% of the data sample to estimate the parameters of the pricing models and the remaining 10% to test the accuracy of prediction. Results showed that the area between the LC and the CC (known as the ABC criterion) of real and predicted prices in the test sample was smaller for the random forest regression model than for the other models under study. The comparison of the calculated ABC criteria leads us to conclude that nonlinear regression models such as the RF regression model give more accurate predictions of house prices in Tehran City.
Housing Price Prediction Model Selection Based on Lorenz and Concentration Curves: Empirical Evidence from Tehran Housing Market
2021-12-12 12:44:28
Mohammad Mirbagherijam
http://arxiv.org/abs/2112.06192v1, http://arxiv.org/pdf/2112.06192v1
econ.EM
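A sketch of one way the ABC criterion in the abstract above could be computed; the exact curve definitions are an assumption here (Lorenz curve of actual prices versus the concentration curve of actual prices ranked by predicted prices), so treat this as illustrative rather than the paper's implementation:

import numpy as np

def abc_criterion(actual, predicted):
    # Area between the Lorenz curve of actual prices and the concentration
    # curve of actual prices ranked by predicted prices (smaller is better).
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    total = actual.sum()
    # Lorenz curve: cumulative share of actual prices, ranked by actual prices.
    lorenz = np.insert(np.cumsum(np.sort(actual)), 0, 0.0) / total
    # Concentration curve: cumulative share of actual prices, ranked by predictions.
    conc = np.insert(np.cumsum(actual[np.argsort(predicted)]), 0, 0.0) / total
    # Trapezoidal rule on the uniform grid of population shares.
    gap = np.abs(conc - lorenz)
    return np.sum((gap[1:] + gap[:-1]) / 2.0) / len(actual)

# Illustrative usage with synthetic prices and noisy predictions.
rng = np.random.default_rng(0)
prices = rng.lognormal(mean=10, sigma=0.5, size=1000)
noisy_preds = prices * rng.lognormal(mean=0, sigma=0.1, size=1000)
print(abc_criterion(prices, noisy_preds))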
29,108
em
A new Stata command, ldvqreg, is developed to estimate quantile regression models for the cases of censored (with lower and/or upper censoring) and binary dependent variables. The estimators are implemented using a smoothed version of the quantile regression objective function. Simulation exercises show that it correctly estimates the parameters, and that it should be used instead of the available quantile regression methods when censoring is present. An empirical application to women's labor supply in Uruguay is considered.
Quantile Regression under Limited Dependent Variable
2021-12-13 20:33:54
Javier Alejo, Gabriel Montes-Rojas
http://arxiv.org/abs/2112.06822v1, http://arxiv.org/pdf/2112.06822v1
econ.EM
29,109
em
This article presents identification results for the marginal treatment effect (MTE) when there is sample selection. We show that the MTE is partially identified for individuals who are always observed regardless of treatment, and derive uniformly sharp bounds on this parameter under three increasingly restrictive sets of assumptions. The first result imposes standard MTE assumptions with an unrestricted sample selection mechanism. The second set of conditions imposes monotonicity of the sample selection variable with respect to treatment, considerably shrinking the identified set. Finally, we incorporate a stochastic dominance assumption which tightens the lower bound for the MTE. Our analysis extends to discrete instruments. The results rely on a mixture reformulation of the problem where the mixture weights are identified, extending Lee's (2009) trimming procedure to the MTE context. We propose estimators for the bounds derived and use data made available by Deb, Munkin and Trivedi (2006) to empirically illustrate the usefulness of our approach.
Identifying Marginal Treatment Effects in the Presence of Sample Selection
2021-12-14 00:08:49
Otávio Bartalotti, Désiré Kédagni, Vitor Possebom
http://arxiv.org/abs/2112.07014v1, http://arxiv.org/pdf/2112.07014v1
econ.EM
29,110
em
We develop a novel test of the instrumental variable identifying assumptions for heterogeneous treatment effect models with conditioning covariates. We assume semiparametric dependence between potential outcomes and conditioning covariates. This allows us to obtain testable equality and inequality restrictions among the subdensities of estimable partial residuals. We propose jointly testing these restrictions. To improve power, we introduce distillation, where a trimmed sample is used to test the inequality restrictions. In Monte Carlo exercises we find gains in finite sample power from testing restrictions jointly and from distillation. We apply our test procedure to three instruments and reject the null for one.
Testing Instrument Validity with Covariates
2021-12-15 16:06:22
Thomas Carr, Toru Kitagawa
http://arxiv.org/abs/2112.08092v2, http://arxiv.org/pdf/2112.08092v2
econ.EM
29,111
em
This paper examines the local linear regression (LLR) estimate of the conditional distribution function $F(y|x)$. We derive three uniform convergence results: the uniform bias expansion, the uniform convergence rate, and the uniform asymptotic linear representation. The uniformity in the above results is with respect to both $x$ and $y$ and therefore has not previously been addressed in the literature on local polynomial regression. Such uniform convergence results are especially useful when the conditional distribution estimator is the first stage of a semiparametric estimator. We demonstrate the usefulness of these uniform results with two examples: the stochastic equicontinuity condition in $y$, and the estimation of the integrated conditional distribution function.
Uniform Convergence Results for the Local Linear Regression Estimation of the Conditional Distribution
2021-12-16 04:04:23
Haitian Xie
http://arxiv.org/abs/2112.08546v2, http://arxiv.org/pdf/2112.08546v2
econ.EM
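For reference, the local linear estimator of the conditional distribution function studied in the abstract above solves, in standard notation with kernel $K$ and bandwidth $h$,
$$ (\hat a, \hat b) = \arg\min_{a,b} \sum_{i=1}^{n} \left\{ \mathbb{1}\{Y_i \le y\} - a - b\,(X_i - x) \right\}^2 K\!\left(\tfrac{X_i - x}{h}\right), \qquad \hat F(y\mid x) = \hat a, $$
so the uniformity results cover the estimator jointly over the evaluation points $x$ and the thresholds $y$.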
29,112
em
We consider a two-stage estimation method for linear regression that uses the lasso in Tibshirani (1996) to screen variables and re-estimate the coefficients using the least-squares boosting method in Friedman (2001) on every set of selected variables. Based on the large-scale simulation experiment in Hastie et al. (2020), the performance of lassoed boosting is found to be as competitive as the relaxed lasso in Meinshausen (2007) and can yield a sparser model under certain scenarios. An application to predict equity returns also shows that lassoed boosting can give the smallest mean square prediction error among all methods under consideration.
Lassoed Boosting and Linear Prediction in Equities Market
2021-12-16 18:00:37
Xiao Huang
http://arxiv.org/abs/2112.08934v2, http://arxiv.org/pdf/2112.08934v2
econ.EM
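A minimal sketch of the two-stage procedure described in the abstract above: lasso screening followed by boosting refit on the selected columns. scikit-learn's GradientBoostingRegressor with squared-error loss is used here as a stand-in for Friedman's (2001) least-squares boosting, and all tuning choices are illustrative assumptions:

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor

def lassoed_boosting(X, y, random_state=0):
    # Stage 1: lasso screens variables (keep columns with nonzero coefficients).
    lasso = LassoCV(cv=5, random_state=random_state).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)
    if selected.size == 0:
        return lasso, selected  # nothing survives screening; fall back to the lasso fit
    # Stage 2: least-squares boosting re-estimated on the screened variables only.
    booster = GradientBoostingRegressor(
        loss="squared_error", n_estimators=500,
        learning_rate=0.05, max_depth=2, random_state=random_state,
    ).fit(X[:, selected], y)
    return booster, selected

# Illustrative usage on synthetic data with two relevant predictors out of twenty.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=200)
model, kept = lassoed_boosting(X, y)
print("screened columns:", kept)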
29,113
em
This paper studies the robustness of estimated policy effects to changes in the distribution of covariates. Robustness to covariate shifts is important, for example, when evaluating the external validity of quasi-experimental results, which are often used as a benchmark for evidence-based policy-making. I propose a novel scalar robustness metric. This metric measures the magnitude of the smallest covariate shift needed to invalidate a claim on the policy effect (for example, $ATE \geq 0$) supported by the quasi-experimental evidence. My metric links the heterogeneity of policy effects and robustness in a flexible, nonparametric way and does not require functional form assumptions. I cast the estimation of the robustness metric as a de-biased GMM problem. This approach guarantees a parametric convergence rate for the robustness metric while allowing for machine learning-based estimators of policy effect heterogeneity (for example, lasso, random forest, boosting, neural nets). I apply my procedure to the Oregon Health Insurance experiment. I study the robustness of policy effects estimates of health-care utilization and financial strain outcomes, relative to a shift in the distribution of context-specific covariates. Such covariates are likely to differ across US states, making quantification of robustness an important exercise for adoption of the insurance policy in states other than Oregon. I find that the effect on outpatient visits is the most robust among the metrics of health-care utilization considered.
Robustness, Heterogeneous Treatment Effects and Covariate Shifts
2021-12-17 02:53:42
Pietro Emilio Spini
http://arxiv.org/abs/2112.09259v1, http://arxiv.org/pdf/2112.09259v1
econ.EM
29,114
em
Aims: To re-introduce the Heckman model as a valid empirical technique in alcohol studies. Design: To estimate the determinants of problem drinking using a Heckman and a two-part estimation model. Psychological and neuro-scientific studies justify my underlying estimation assumptions and covariate exclusion restrictions. Higher order tests checking for multicollinearity validate the use of Heckman over the use of two-part estimation models. I discuss the generalizability of the two models in applied research. Settings and Participants: Two pooled national population surveys from 2016 and 2017 were used: the Behavioral Risk Factor Surveillance Survey (BRFS), and the National Survey of Drug Use and Health (NSDUH). Measurements: Participation in problem drinking and meeting the criteria for problem drinking. Findings: Both U.S. national surveys perform well with the Heckman model and pass all higher order tests. The Heckman model corrects for selection bias and reveals the direction of bias, where the two-part model does not. For example, the coefficients on age are upward biased and unemployment is downward biased in the two-part model, whereas the Heckman model shows no selection bias. Covariate exclusion restrictions are sensitive to survey conditions and are contextually generalizable. Conclusions: The Heckman model can be used for alcohol studies (and smoking studies as well) if the underlying estimation specification passes higher order tests for multicollinearity and the exclusion restrictions are justified with integrity for the data used. Its use is merit-worthy because it corrects for and reveals the direction and the magnitude of selection bias where the two-part model does not.
Heckman-Selection or Two-Part models for alcohol studies? Depends
2021-12-20 17:08:35
Reka Sundaram-Stukel
http://arxiv.org/abs/2112.10542v2, http://arxiv.org/pdf/2112.10542v2
econ.EM
29,115
em
We study the Stigler model of citation flows among journals, adapting the pairwise comparison model of Bradley and Terry to rank and select journals by influence using nonparametric empirical Bayes procedures. Comparisons with several other rankings are made.
Ranking and Selection from Pairwise Comparisons: Empirical Bayes Methods for Citation Analysis
2021-12-21 12:46:29
Jiaying Gu, Roger Koenker
http://arxiv.org/abs/2112.11064v1, http://arxiv.org/pdf/2112.11064v1
econ.EM
29,116
em
We ask whether there are alternative contest models that minimize error or information loss from misspecification and outperform the Pythagorean model. This article uses simulated data to select the optimal expected win percentage model among the relevant alternatives: the traditional Pythagorean model and the difference-form contest success function (CSF). Method: We simulate 1,000 iterations of the 2014 MLB season to estimate and analyze alternative models of expected win percentage (team quality). We use the open-source Strategic Baseball Simulator (SBS) and develop an AutoHotKey script that programmatically executes the SBS application, chooses the correct settings for the 2014 season, enters a unique ID for the simulation data file, and iterates these steps 1,000 times. We estimate expected win percentage using the traditional Pythagorean model, as well as the difference-form CSF model used in game theory and public choice economics. Each model is estimated while accounting for fixed (team) effects. We find that the difference-form CSF model outperforms the traditional Pythagorean model in terms of explanatory power and in terms of misspecification-based information loss as estimated by the Akaike Information Criterion. Through parametric estimation, we further confirm that the simulator yields realistic statistical outcomes. The simulation methodology offers the advantage of a greatly improved sample size. As the season is held constant, our simulation-based statistical inference also allows for estimation and model comparison without the (time series) issue of non-stationarity. The results suggest that improved win (productivity) estimation can be achieved through alternative CSF specifications.
An Analysis of an Alternative Pythagorean Expected Win Percentage Model: Applications Using Major League Baseball Team Quality Simulations
2021-12-30 01:08:24
Justin Ehrlich, Christopher Boudreaux, James Boudreau, Shane Sanders
http://arxiv.org/abs/2112.14846v1, http://arxiv.org/pdf/2112.14846v1
econ.EM
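A minimal numerical sketch of the two specifications compared above, assuming the classical Pythagorean form with exponent 2 and a linear difference-form contest success function in run shares; the team totals and the sensitivity parameter beta are invented for illustration and are not estimates from the paper.

```python
import numpy as np

# Hypothetical season totals: runs scored (rs) and runs allowed (ra) for five teams.
rs = np.array([750.0, 680.0, 710.0, 640.0, 720.0])
ra = np.array([690.0, 700.0, 650.0, 700.0, 705.0])

# Traditional Pythagorean expected win percentage (exponent 2 used for illustration).
pyth = rs**2 / (rs**2 + ra**2)

# Difference-form contest success function: win probability moves linearly with the
# difference in run shares; beta is an illustrative sensitivity parameter.
beta = 1.5
csf = np.clip(0.5 + beta * (rs - ra) / (rs + ra), 0.0, 1.0)

for team, (p, c) in enumerate(zip(pyth, csf), start=1):
    print(f"team {team}: Pythagorean {p:.3f}   difference-form CSF {c:.3f}")
```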
29,117
em
In this paper we examine the relation between market returns and volatility measures through machine learning methods in a high-frequency environment. We implement a minute-by-minute rolling-window intraday estimation method using two nonlinear models: Long Short-Term Memory (LSTM) neural networks and Random Forests (RF). Our estimations show that the CBOE Volatility Index (VIX) is the strongest candidate predictor for intraday market returns in our analysis, especially when implemented through the LSTM model. This model also significantly improves the performance of the lagged market return as a predictive variable. Finally, intraday RF estimation outputs indicate that there is no performance improvement with this method, and it may even worsen the results in some cases.
Modeling and Forecasting Intraday Market Returns: a Machine Learning Approach
2021-12-30 19:05:17
Iuri H. Ferreira, Marcelo C. Medeiros
http://arxiv.org/abs/2112.15108v1, http://arxiv.org/pdf/2112.15108v1
econ.EM
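A rolling-window sketch of the random-forest leg of the exercise described above, run on simulated minute-level data; the window length, the features (lagged return and a VIX stand-in) and the forest settings are placeholders rather than the authors' choices, and the LSTM model is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
T = 400                                               # minutes of simulated data
vix = 15 + np.cumsum(rng.normal(0, 0.05, T))          # stand-in for the VIX level
ret = 0.002 * np.diff(vix, prepend=vix[0]) + rng.normal(0, 0.01, T)  # market returns

window = 250                                          # rolling estimation window (placeholder)
preds, actual = [], []
for t in range(window, T - 1):
    # Features at time s predict the return at s + 1.
    X_train = np.column_stack([ret[t - window:t], vix[t - window:t]])
    y_train = ret[t - window + 1:t + 1]
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
    preds.append(rf.predict([[ret[t], vix[t]]])[0])
    actual.append(ret[t + 1])

mse = np.mean((np.array(preds) - np.array(actual)) ** 2)
print(f"rolling-window RF out-of-sample MSE: {mse:.6f}")
```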
29,118
em
In less than 50 years, startups have become a major component of innovation and economic growth. Silicon Valley has been the place where the startup phenomenon was the most obvious, and Stanford University was a major component of that success. Companies such as Google, Yahoo, Sun Microsystems, Cisco and Hewlett-Packard had very strong links with Stanford, but even these very famous success stories cannot fully describe the richness and diversity of the Stanford entrepreneurial activity. This report explores the dynamics of more than 5000 companies founded by Stanford University alumni and staff, through their value creation, their fields of activity, their growth patterns and more. The report also explores some features of the founders of these companies, such as their academic background or the number of years between their Stanford experience and their company creation.
Startups and Stanford University
2017-11-02 11:14:26
Hervé Lebret
http://arxiv.org/abs/1711.00644v1, http://arxiv.org/pdf/1711.00644v1
econ.EM
29,120
em
I propose a treatment selection model that introduces unobserved heterogeneity in both choice sets and preferences to evaluate the average effects of a program offer. I show how to exploit the model structure to define parameters capturing these effects and then computationally characterize their identified sets under instrumental variable variation in choice sets. I illustrate these tools by analyzing the effects of providing an offer to the Head Start preschool program using data from the Head Start Impact Study. I find that such a policy affects a large number of children who take up the offer, and that it subsequently has positive effects on their test scores. These effects arise from children who do not have any preschool as an outside option. A cost-benefit analysis reveals that the earning benefits associated with the test score gains can be large and outweigh the net costs associated with offer take-up.
Identifying the Effects of a Program Offer with an Application to Head Start
2017-11-06 20:55:59
Vishal Kamat
http://arxiv.org/abs/1711.02048v6, http://arxiv.org/pdf/1711.02048v6
econ.EM
29,121
em
I study identification, estimation and inference for spillover effects in experiments where units' outcomes may depend on the treatment assignments of other units within a group. I show that the commonly-used reduced-form linear-in-means regression identifies a weighted sum of spillover effects with some negative weights, and that the difference in means between treated and controls identifies a combination of direct and spillover effects entering with different signs. I propose nonparametric estimators for average direct and spillover effects that overcome these issues and are consistent and asymptotically normal under a precise relationship between the number of parameters of interest, the total sample size and the treatment assignment mechanism. These findings are illustrated using data from a conditional cash transfer program and with simulations. The empirical results reveal the potential pitfalls of failing to flexibly account for spillover effects in policy evaluation: the estimated difference in means and the reduced-form linear-in-means coefficients are all close to zero and statistically insignificant, whereas the nonparametric estimators I propose reveal large, nonlinear and significant spillover effects.
Identification and Estimation of Spillover Effects in Randomized Experiments
2017-11-08 01:04:44
Gonzalo Vazquez-Bare
http://arxiv.org/abs/1711.02745v8, http://arxiv.org/pdf/1711.02745v8
econ.EM
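A simulated contrast between the difference in means and cell-mean estimators that condition on the number of treated peers, in the spirit of the abstract above; the two-person groups, the saturation-based assignment and the effect sizes (direct 1.0, spillover 0.5) are invented for illustration and do not replicate the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
G = 5000
k = rng.integers(0, 3, size=G)                 # number treated per 2-person group: 0, 1 or 2
d = np.zeros((G, 2), dtype=int)
d[k == 2] = 1
one = np.where(k == 1)[0]
d[one, rng.integers(0, 2, size=one.size)] = 1  # randomly pick the treated member

peer = d[:, ::-1]                              # the other member's treatment status
y = 1.0 * d + 0.5 * peer + rng.normal(0, 1, size=(G, 2))   # direct 1.0, spillover 0.5
yf, df, pf = y.ravel(), d.ravel(), peer.ravel()

# Naive difference in means mixes direct and spillover effects under this assignment.
dim = yf[df == 1].mean() - yf[df == 0].mean()

# Cell means by (own treatment, treated peers) separate the two effects.
cell = {(a, b): yf[(df == a) & (pf == b)].mean() for a in (0, 1) for b in (0, 1)}
direct = cell[(1, 0)] - cell[(0, 0)]
spill = cell[(0, 1)] - cell[(0, 0)]
print(f"difference in means {dim:.2f}, direct {direct:.2f}, spillover {spill:.2f}")
```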
29,122
em
Futures market contracts with varying maturities are traded concurrently, and the speed at which they process information is of value in understanding the price discovery process. Using price discovery measures, including Putnins' (2013) information leadership share and intraday data, we quantify the proportional contribution to price discovery of nearby and deferred contracts in the corn and live cattle futures markets. Price discovery is more systematic in the corn than in the live cattle market. On average, nearby contracts lead all deferred contracts in price discovery in the corn market, but have a relatively less dominant role in the live cattle market. In both markets, the nearby contract loses dominance when its relative volume share dips below 50%, which occurs about 2-3 weeks before expiration in corn and 5-6 weeks before expiration in live cattle. Regression results indicate that the share of price discovery is most closely linked to trading volume but is also affected, to a far lesser degree, by time to expiration, backwardation, USDA announcements and market crashes. The effects of these other factors vary between the markets, which likely reflects differences in storability as well as other market-related characteristics.
Measuring Price Discovery between Nearby and Deferred Contracts in Storable and Non-Storable Commodity Futures Markets
2017-11-09 21:12:05
Zhepeng Hu, Mindy Mallory, Teresa Serra, Philip Garcia
http://arxiv.org/abs/1711.03506v1, http://arxiv.org/pdf/1711.03506v1
econ.EM
29,123
em
Economic complexity reflects the amount of knowledge that is embedded in the productive structure of an economy. It rests on the premise of hidden capabilities: fundamental endowments underlying the productive structure. In general, measuring the capabilities behind economic complexity directly is difficult, and indirect measures have been suggested that exploit the fact that the presence of the capabilities is expressed in a country's mix of products. We complement these studies by introducing a probabilistic framework which leverages Bayesian non-parametric techniques to extract the dominant features behind the comparative advantage in exported products. Based on economic evidence and trade data, we place a restricted Indian Buffet Process on the distribution of countries' capability endowments, appealing to a culinary metaphor to model the process of capability acquisition. The approach comes with a unique level of interpretability, as it produces a concise and economically plausible description of the instantiated capabilities.
Economic Complexity Unfolded: Interpretable Model for the Productive Structure of Economies
2017-11-17 17:09:19
Zoran Utkovski, Melanie F. Pradier, Viktor Stojkoski, Fernando Perez-Cruz, Ljupco Kocarev
http://dx.doi.org/10.1371/journal.pone.0200822, http://arxiv.org/abs/1711.07327v2, http://arxiv.org/pdf/1711.07327v2
econ.EM
29,124
em
This study briefly introduces the development of the Shantou Special Economic Zone under the Reform and Opening-Up Policy from 1980 through 2016, with a focus on policy-making issues and their influence on the local economy. The paper is divided into two parts, 1980 to 1991 and 1992 to 2016, in accordance with the separation of the original Shantou District into three cities (Shantou, Chaozhou and Jieyang) at the end of 1991. The study analyzes the policy-making issues involved in the separation of the original Shantou District, the influence of the policy on Shantou's economy after the separation, the possibility of merging the three cities into one big new economic district in the future, and the reasons that have led to the stagnant development of Shantou over the past 20 years. The paper uses longitudinal statistical analysis to study these economic problems, applying non-parametric statistics through generalized additive models and time series forecasting methods. The paper is authored solely by Bowen Cai, a graduate student in the PhD program in Applied and Computational Mathematics and Statistics at the University of Notre Dame with a concentration in big data analysis.
The Research on the Stagnant Development of Shantou Special Economic Zone Under Reform and Opening-Up Policy
2017-11-24 09:34:15
Bowen Cai
http://arxiv.org/abs/1711.08877v1, http://arxiv.org/pdf/1711.08877v1
econ.EM
29,125
em
This paper presents the identification of heterogeneous elasticities in the Cobb-Douglas production function. The identification is constructive with closed-form formulas for the elasticity with respect to each input for each firm. We propose that the flexible input cost ratio plays the role of a control function under "non-collinear heterogeneity" between elasticities with respect to two flexible inputs. The ex ante flexible input cost share can be used to identify the elasticities with respect to flexible inputs for each firm. The elasticities with respect to labor and capital can be subsequently identified for each firm under the timing assumption admitting the functional independence.
Constructive Identification of Heterogeneous Elasticities in the Cobb-Douglas Production Function
2017-11-28 01:51:57
Tong Li, Yuya Sasaki
http://arxiv.org/abs/1711.10031v1, http://arxiv.org/pdf/1711.10031v1
econ.EM
29,127
em
Research on growing American political polarization and antipathy primarily studies public institutions and political processes, ignoring private effects including strained family ties. Using anonymized smartphone-location data and precinct-level voting, we show that Thanksgiving dinners attended by opposing-party precinct residents were 30-50 minutes shorter than same-party dinners. This decline from a mean of 257 minutes survives extensive spatial and demographic controls. Dinner reductions in 2016 tripled for travelers from media markets with heavy political advertising --- an effect not observed in 2015 --- implying a relationship to election-related behavior. Effects appear asymmetric: while fewer Democratic-precinct residents traveled in 2016 than 2015, political differences shortened Thanksgiving dinners more among Republican-precinct residents. Nationwide, 34 million person-hours of cross-partisan Thanksgiving discourse were lost in 2016 to partisan effects.
The Effect of Partisanship and Political Advertising on Close Family Ties
2017-11-29 01:58:02
M. Keith Chen, Ryne Rohla
http://dx.doi.org/10.1126/science.aaq1433, http://arxiv.org/abs/1711.10602v2, http://arxiv.org/pdf/1711.10602v2
econ.EM
29,128
em
The main purpose of this paper is to analyze threshold effects of official development assistance (ODA) on economic growth in WAEMU zone countries. To do so, the study draws on OECD and WDI data covering the period 1980-2015 and uses Hansen's Panel Threshold Regression (PTR) model to bootstrap the aid threshold above which aid becomes effective. The evidence strongly supports the view that the relationship between aid and economic growth is non-linear, with a unique threshold of 12.74% of GDP. Above this value, the marginal effect of aid is 0.69 points, all else being equal. One of the main contributions of this paper is to show that WAEMU countries need investments that could be financed by foreign aid, which should be considered only a complementary resource. Thus, WAEMU countries should continue to strengthen their domestic resource mobilization efforts in order to meet this need.
Aide et Croissance dans les pays de l'Union Economique et Monétaire Ouest Africaine (UEMOA) : retour sur une relation controversée [Aid and Growth in the West African Economic and Monetary Union (WAEMU) Countries: Revisiting a Controversial Relationship]
2018-04-13 16:07:11
Nimonka Bayale
http://arxiv.org/abs/1805.00435v1, http://arxiv.org/pdf/1805.00435v1
econ.EM
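A sketch of the threshold-regression idea behind Hansen's PTR estimator mentioned above: grid-search the aid/GDP threshold that minimizes the sum of squared residuals when the aid slope is allowed to differ across regimes. The data are simulated, pooled OLS replaces the panel fixed-effects estimation, and no bootstrap inference is performed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
aid = rng.uniform(0, 25, n)                      # aid as % of GDP (simulated)
x = rng.normal(0, 1, n)                          # a control variable
gamma0 = 12.74                                   # true threshold used in the simulation
growth = 0.2 * x + 0.10 * aid * (aid <= gamma0) + 0.69 * aid * (aid > gamma0) \
         + rng.normal(0, 1, n)

def ssr(gamma):
    # Regime-specific aid slopes below/above the candidate threshold.
    X = np.column_stack([np.ones(n), x, aid * (aid <= gamma), aid * (aid > gamma)])
    beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
    u = growth - X @ beta
    return u @ u

grid = np.linspace(np.quantile(aid, 0.15), np.quantile(aid, 0.85), 200)
gamma_hat = grid[np.argmin([ssr(g) for g in grid])]
print(f"estimated aid threshold: {gamma_hat:.2f}% of GDP")
```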
29,129
em
In this paper, I endeavour to construct a new model by extending the classic exogenous economic growth model to include a measure that explains and quantifies the size of technological innovation (A) endogenously. I argue that technology is not a "constant" exogenous variable: it is humans who create all technological innovations, and innovation depends on how much human and physical capital is allocated to research. I examine several possible approaches to doing this, and then test my model against both sample data and real-world evidence. I call this method "dynamic" because it models the details of resource allocation among research, labor and capital, which affect each other interactively. In the end, I point out what the new residual is and which parts of the economic growth model can be further improved.
Endogenous growth - A dynamic technology augmentation of the Solow model
2018-05-02 11:23:18
Murad Kasim
http://arxiv.org/abs/1805.00668v1, http://arxiv.org/pdf/1805.00668v1
econ.EM
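The abstract does not spell out the model, so the following is only a generic illustration of endogenizing technology in a Solow-type framework: a fraction of labor is allocated to research and the growth rate of A depends on that share. All functional forms and parameter values below are assumptions made for the sketch, not the author's specification.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
alpha, delta, s_K = 0.33, 0.05, 0.25     # capital share, depreciation, saving rate
s_R, g = 0.10, 0.5                        # share of labor in research, research productivity
L, T = 1.0, 100
A, K = 1.0, 1.0

for t in range(T):
    Y = A * K**alpha * ((1 - s_R) * L) ** (1 - alpha)   # output uses production labor only
    K = (1 - delta) * K + s_K * Y                        # capital accumulation
    A = A * (1 + g * s_R)                                # technology grows with research effort

print(f"after {T} periods: A = {A:.2f}, K = {K:.2f}, Y = {Y:.2f}")
```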
29,130
em
This paper studies the identification and estimation of the optimal linear approximation of a structural regression function. The parameter in the linear approximation is called the Optimal Linear Instrumental Variables Approximation (OLIVA). This paper shows that a necessary condition for standard inference on the OLIVA is also sufficient for the existence of an IV estimand in a linear model. The instrument in the IV estimand is unknown and may not be identified. A Two-Step IV (TSIV) estimator based on Tikhonov regularization is proposed, which can be implemented by standard regression routines. We establish the asymptotic normality of the TSIV estimator assuming neither completeness nor identification of the instrument. As an important application of our analysis, we robustify the classical Hausman test for exogeneity against misspecification of the linear structural model. We also discuss extensions to weighted least squares criteria. Monte Carlo simulations suggest an excellent finite sample performance for the proposed inferences. Finally, in an empirical application estimating the elasticity of intertemporal substitution (EIS) with US data, we obtain TSIV estimates that are much larger than their standard IV counterparts, with our robust Hausman test failing to reject the null hypothesis of exogeneity of real interest rates.
Optimal Linear Instrumental Variables Approximations
2018-05-08 23:44:27
Juan Carlos Escanciano, Wei Li
http://arxiv.org/abs/1805.03275v3, http://arxiv.org/pdf/1805.03275v3
econ.EM
29,131
em
We study the identification and estimation of structural parameters in dynamic panel data logit models where decisions are forward-looking and the joint distribution of unobserved heterogeneity and observable state variables is nonparametric, i.e., fixed-effects model. We consider models with two endogenous state variables: the lagged decision variable, and the time duration in the last choice. This class of models includes as particular cases important economic applications such as models of market entry-exit, occupational choice, machine replacement, inventory and investment decisions, or dynamic demand of differentiated products. The identification of structural parameters requires a sufficient statistic that controls for unobserved heterogeneity not only in current utility but also in the continuation value of the forward-looking decision problem. We obtain the minimal sufficient statistic and prove identification of some structural parameters using a conditional likelihood approach. We apply this estimator to a machine replacement model.
Sufficient Statistics for Unobserved Heterogeneity in Structural Dynamic Logit Models
2018-05-10 19:27:33
Victor Aguirregabiria, Jiaying Gu, Yao Luo
http://arxiv.org/abs/1805.04048v1, http://arxiv.org/pdf/1805.04048v1
econ.EM
29,132
em
This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients as well as cross-sectional heteroskedasticity. The panel considered in this paper features a large cross-sectional dimension N but short time series T. Due to the short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by combining information from the whole panel. Theoretically, I prove that in cross-sectional homoskedastic cases, both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast. Methodologically, I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. Monte Carlo simulations and an empirical application to young firm dynamics demonstrate improvements in density forecasts relative to alternative approaches.
Density Forecasts in Panel Data Models: A Semiparametric Bayesian Perspective
2018-05-10 23:51:01
Laura Liu
http://arxiv.org/abs/1805.04178v3, http://arxiv.org/pdf/1805.04178v3
econ.EM
29,134
em
This paper contributes to the literature on treatment effects estimation with machine learning inspired methods by studying the performance of different estimators based on the Lasso. Building on recent work in the field of high-dimensional statistics, we use the semiparametric efficient score estimation structure to compare different estimators. Alternative weighting schemes are considered and their suitability for the incorporation of machine learning estimators is assessed using theoretical arguments and various Monte Carlo experiments. Additionally, we propose our own estimator based on doubly robust kernel matching, which we argue is more robust to nuisance parameter misspecification. In the simulation study, we verify the theory-based intuition and find good finite sample properties for alternative weighting scheme estimators such as the one we propose.
The Finite Sample Performance of Treatment Effects Estimators based on the Lasso
2018-05-14 11:50:54
Michael Zimmert
http://arxiv.org/abs/1805.05067v1, http://arxiv.org/pdf/1805.05067v1
econ.EM
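A sketch of one semiparametric efficient score (AIPW) estimator with Lasso-based nuisance functions, in the spirit of the estimators compared above; it is not the authors' kernel-matching proposal, the data are simulated, and cross-fitting is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(3)
n, p = 1000, 20
X = rng.normal(size=(n, p))
ps = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))       # true propensity score
D = rng.binomial(1, ps)
Y = 1.0 * D + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)    # true ATE = 1.0

# Lasso-based nuisance estimates (no cross-fitting, to keep the sketch short).
m1 = LassoCV(cv=5).fit(X[D == 1], Y[D == 1]).predict(X)       # E[Y | X, D=1]
m0 = LassoCV(cv=5).fit(X[D == 0], Y[D == 0]).predict(X)       # E[Y | X, D=0]
e = LogisticRegressionCV(cv=5, penalty="l1", solver="liblinear").fit(X, D).predict_proba(X)[:, 1]
e = np.clip(e, 0.01, 0.99)

# Augmented inverse-probability-weighting (efficient score) estimate of the ATE.
psi = m1 - m0 + D * (Y - m1) / e - (1 - D) * (Y - m0) / (1 - e)
ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"AIPW ATE estimate: {ate:.3f} (se {se:.3f})")
```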
29,135
em
This paper introduces a method for linking technological improvement rates (i.e., Moore's Law) and technology adoption curves (i.e., S-curves). There has been considerable research surrounding Moore's Law and the generalized versions applied to the time dependence of performance for other technologies. The prior work has culminated in a methodology for quantitative estimation of technological improvement rates for nearly any technology. This paper examines the implications of such regular time dependence of performance for the timing of key events in the technological adoption process. We propose a simple crossover point in performance, based upon the technological improvement rates and current level differences for the target and replacement technologies. The timing of the crossover is hypothesized to correspond to the first 'knee' in the technology adoption S-curve, and it signals when the market for a given technology will start to be rewarding for innovators. This is also when potential entrants are likely to experiment intensely with product-market fit and when the competition to achieve a dominant design begins. This conceptual framework is then back-tested by examining two technological changes brought about by the internet, namely music and video transmission. The uncertainty analysis around the cases highlights opportunities for organizations to reduce future technological uncertainty. Overall, the results from the case studies support the reliability and utility of the conceptual framework in strategic business decision-making, with the caveat that while technical uncertainty is reduced, it is not eliminated.
Data-Driven Investment Decision-Making: Applying Moore's Law and S-Curves to Business Strategies
2018-05-16 17:09:04
Christopher L. Benson, Christopher L. Magee
http://arxiv.org/abs/1805.06339v1, http://arxiv.org/pdf/1805.06339v1
econ.EM
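A worked version of the crossover calculation implied by two exponential improvement paths: with an incumbent at level P1 improving at rate r1 and a replacement at level P2 improving at the faster rate r2, the crossover time solves P1*exp(r1*t) = P2*exp(r2*t). The levels and rates below are illustrative, not estimates from the case studies.

```python
import numpy as np

# Illustrative parameters (not from the paper): the incumbent is currently 10x better,
# but the replacement technology improves at 35%/yr versus 5%/yr.
P1, r1 = 10.0, 0.05        # incumbent: current level, annual improvement rate
P2, r2 = 1.0, 0.35         # replacement: current level, annual improvement rate

# Crossover solves P1*exp(r1*t) = P2*exp(r2*t)  =>  t* = ln(P1/P2) / (r2 - r1).
t_star = np.log(P1 / P2) / (r2 - r1)
print(f"performance crossover in about {t_star:.1f} years")
```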
29,136
em
Some aspects of the problem of stable marriage are discussed. There are two distinguished marriage plans: the fully transferable case, where money can be transferred between the participants, and the fully non-transferable case, where each participant has his or her own rigid preference list regarding the other gender. We then discuss intermediate, partially transferable cases. Partially transferable plans can be approached either as special cases of cooperative games, using the notion of a core, or as a generalization of the cyclical monotonicity property of the fully transferable case (fake promises). We introduce these two approaches and prove the existence of stable marriages for the fully transferable and fully non-transferable plans.
Happy family of stable marriages
2018-05-17 13:33:04
Gershon Wolansky
http://arxiv.org/abs/1805.06687v1, http://arxiv.org/pdf/1805.06687v1
econ.EM
29,137
em
This study back-tests a marginal cost of production model proposed to value the digital currency bitcoin. Results from both conventional regression and vector autoregression (VAR) models show that the marginal cost of production plays an important role in explaining bitcoin prices, challenging recent allegations that bitcoins are essentially worthless. Even with markets pricing bitcoin in the thousands of dollars each, the valuation model seems robust. The data show that a price bubble that began in the Fall of 2017 resolved itself in early 2018, converging with the marginal cost model. This suggests that while bubbles may appear in the bitcoin market, prices will tend to this bound and not collapse to zero.
Bitcoin price and its marginal cost of production: support for a fundamental value
2018-05-19 18:30:29
Adam Hayes
http://arxiv.org/abs/1805.07610v1, http://arxiv.org/pdf/1805.07610v1
econ.EM
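A minimal sketch of the kind of back-test described above: regress (log) bitcoin price on a (log) marginal-cost-of-production proxy by OLS. The series are simulated and the log-log OLS specification is a placeholder for the paper's actual regression and VAR models.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
log_cost = np.cumsum(rng.normal(0.01, 0.05, T)) + 6.0       # simulated log marginal cost
log_price = 0.2 + 1.0 * log_cost + rng.normal(0, 0.15, T)   # simulated log price

# OLS of log price on log marginal cost (a stand-in for the paper's regression model).
X = np.column_stack([np.ones(T), log_cost])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
resid = log_price - X @ beta
r2 = 1 - resid.var() / log_price.var()
print(f"elasticity of price w.r.t. marginal cost: {beta[1]:.2f}, R^2 = {r2:.2f}")
```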
29,138
em
The issue of model selection in applied research is of vital importance. Since the true model in such research is not known, which model should be used from among various potential ones is an empirical question. There might exist several competitive models. A typical approach to dealing with this is classic hypothesis testing using an arbitrarily chosen significance level based on the underlying assumption that a true null hypothesis exists. In this paper we investigate how successful this approach is in determining the correct model for different data generating processes using time series data. An alternative approach based on more formal model selection techniques using an information criterion or cross-validation is suggested and evaluated in the time series environment via Monte Carlo experiments. This paper also explores the effectiveness of deciding what type of general relation exists between two variables (e.g. relation in levels or relation in first differences) using various strategies based on hypothesis testing and on information criteria with the presence or absence of unit roots.
Model Selection in Time Series Analysis: Using Information Criteria as an Alternative to Hypothesis Testing
2018-05-23 10:40:53
R. Scott Hacker, Abdulnasser Hatemi-J
http://arxiv.org/abs/1805.08991v1, http://arxiv.org/pdf/1805.08991v1
econ.EM
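A small illustration of the information-criterion strategy discussed above: holding the dependent variable fixed (so the criteria are comparable), compare a levels specification of the regressor against a first-difference specification using a Gaussian BIC and pick the smaller value. The data generating process is invented and the criterion details are kept deliberately simple.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 300
x = np.cumsum(rng.normal(size=T))                 # I(1) regressor
y = 0.8 * x + rng.normal(size=T)                  # true relation is in levels

def bic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    n, k = X.shape
    return n * np.log(u @ u / n) + k * np.log(n)

dx = np.diff(x)
y1, x1 = y[1:], x[1:]                             # align samples across specifications

bic_levels = bic(np.column_stack([np.ones(T - 1), x1]), y1)
bic_diffs = bic(np.column_stack([np.ones(T - 1), dx]), y1)
print("levels BIC:", round(bic_levels, 1), " first-difference BIC:", round(bic_diffs, 1))
print("selected:", "levels" if bic_levels < bic_diffs else "first differences")
```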
29,139
em
This study investigates the dose-response effects of making music on youth development. Identification is based on the conditional independence assumption and estimation is implemented using a recent double machine learning estimator. The study proposes solutions to two highly practically relevant questions that arise for these new methods: (i) How to investigate sensitivity of estimates to tuning parameter choices in the machine learning part? (ii) How to assess covariate balancing in high-dimensional settings? The results show that improvements in objectively measured cognitive skills require at least medium intensity, while improvements in school grades are already observed for low intensity of practice.
A Double Machine Learning Approach to Estimate the Effects of Musical Practice on Student's Skills
2018-05-23 10:58:08
Michael C. Knaus
http://arxiv.org/abs/1805.10300v2, http://arxiv.org/pdf/1805.10300v2
econ.EM
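A generic two-fold cross-fitted double machine learning sketch for a partially linear model, in the spirit of the estimator referenced above; the simulated data, random-forest learners and continuous "practice intensity" treatment are placeholders rather than the study's actual data or implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n, p = 1000, 10
X = rng.normal(size=(n, p))                         # covariates (simulated)
D = X[:, 0] + rng.normal(size=n)                    # practice intensity (continuous)
Y = 0.5 * D + np.sin(X[:, 0]) + X[:, 1] + rng.normal(size=n)   # true effect 0.5

folds = rng.integers(0, 2, size=n)                  # two-fold cross-fitting
res_y, res_d = np.empty(n), np.empty(n)
for k in (0, 1):
    tr, te = folds != k, folds == k
    res_y[te] = Y[te] - RandomForestRegressor(random_state=0).fit(X[tr], Y[tr]).predict(X[te])
    res_d[te] = D[te] - RandomForestRegressor(random_state=0).fit(X[tr], D[tr]).predict(X[te])

theta = (res_d @ res_y) / (res_d @ res_d)           # final-stage residual-on-residual OLS
print(f"cross-fitted DML effect estimate: {theta:.3f}")
```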
29,932
em
We propose logit-based IV and augmented logit-based IV estimators that serve as alternatives to the traditionally used 2SLS estimator in the model where both the endogenous treatment variable and the corresponding instrument are binary. Our novel estimators are as easy to compute as the 2SLS estimator but have an advantage over the 2SLS estimator in terms of causal interpretability. In particular, in certain cases where the probability limits of both our estimators and the 2SLS estimator take the form of weighted-average treatment effects, our estimators are guaranteed to yield non-negative weights whereas the 2SLS estimator is not.
Logit-based alternatives to two-stage least squares
2023-12-16 08:47:43
Denis Chetverikov, Jinyong Hahn, Zhipeng Liao, Shuyang Sheng
http://arxiv.org/abs/2312.10333v1, http://arxiv.org/pdf/2312.10333v1
econ.EM
29,140
em
This article introduces two absolutely continuous global-local shrinkage priors to enable stochastic variable selection in the context of high-dimensional matrix exponential spatial specifications. Existing approaches as a means to dealing with overparameterization problems in spatial autoregressive specifications typically rely on computationally demanding Bayesian model-averaging techniques. The proposed shrinkage priors can be implemented using Markov chain Monte Carlo methods in a flexible and efficient way. A simulation study is conducted to evaluate the performance of each of the shrinkage priors. Results suggest that they perform particularly well in high-dimensional environments, especially when the number of parameters to estimate exceeds the number of observations. For an empirical illustration we use pan-European regional economic growth data.
Flexible shrinkage in high-dimensional Bayesian spatial autoregressive models
2018-05-28 12:01:55
Michael Pfarrhofer, Philipp Piribauer
http://dx.doi.org/10.1016/j.spasta.2018.10.004, http://arxiv.org/abs/1805.10822v1, http://arxiv.org/pdf/1805.10822v1
econ.EM
29,141
em
We propose a method that reconciles two popular approaches to structural estimation and inference: using a complete, yet approximate, model versus imposing a set of credible behavioral conditions. This is done by distorting the approximate model to satisfy these conditions. We provide the asymptotic theory and Monte Carlo evidence, and illustrate that counterfactual experiments are possible. We apply the methodology to the model of long run risks in aggregate consumption (Bansal and Yaron, 2004), where the complete model is generated using the Campbell and Shiller (1988) approximation. Using US data, we investigate the empirical importance of the neglected non-linearity. We find that distorting the model to satisfy the non-linear equilibrium condition is strongly preferred by the data, while the quality of the approximation is yet another reason for the downward bias in estimates of the intertemporal elasticity of substitution and the upward bias in risk aversion.
Equilibrium Restrictions and Approximate Models -- With an application to Pricing Macroeconomic Risk
2018-05-28 14:27:20
Andreas Tryphonides
http://arxiv.org/abs/1805.10869v3, http://arxiv.org/pdf/1805.10869v3
econ.EM
29,142
em
The United States power market is characterized by the lack of judicial power at the federal level. The market thus provides a unique testing environment for market organization structures. At the same time, econometric modeling and forecasting of electricity market consumption become more challenging. Imports and exports, which generally follow simple rules in European countries, can be the result of direct market behavior. This paper seeks to build a general model for power consumption and to use the model to test several hypotheses.
Modeling the residential electricity consumption within a restructured power market
2018-05-28 22:19:00
Chelsea Sun
http://arxiv.org/abs/1805.11138v2, http://arxiv.org/pdf/1805.11138v2
econ.EM
29,143
em
The policy relevant treatment effect (PRTE) measures the average effect of switching from a status-quo policy to a counterfactual policy. Estimation of the PRTE involves estimation of multiple preliminary parameters, including propensity scores, conditional expectation functions of the outcome and covariates given the propensity score, and marginal treatment effects. These preliminary estimators can affect the asymptotic distribution of the PRTE estimator in complicated and intractable manners. In this light, we propose an orthogonal score for double debiased estimation of the PRTE, whereby the asymptotic distribution of the PRTE estimator is obtained without any influence of preliminary parameter estimators as far as they satisfy mild requirements of convergence rates. To our knowledge, this paper is the first to develop limit distribution theories for inference about the PRTE.
Estimation and Inference for Policy Relevant Treatment Effects
2018-05-29 17:34:35
Yuya Sasaki, Takuya Ura
http://arxiv.org/abs/1805.11503v4, http://arxiv.org/pdf/1805.11503v4
econ.EM
29,144
em
Partial mean with generated regressors arises in several econometric problems, such as the distribution of potential outcomes with continuous treatments and the quantile structural function in a nonseparable triangular model. This paper proposes a nonparametric estimator for the partial mean process, where the second step consists of a kernel regression on regressors that are estimated in the first step. The main contribution is a uniform expansion that characterizes in detail how the estimation error associated with the generated regressor affects the limiting distribution of the marginal integration estimator. The general results are illustrated with two examples: the generalized propensity score for a continuous treatment (Hirano and Imbens, 2004) and control variables in triangular models (Newey, Powell, and Vella, 1999; Imbens and Newey, 2009). An empirical application to the Job Corps program evaluation demonstrates the usefulness of the method.
Partial Mean Processes with Generated Regressors: Continuous Treatment Effects and Nonseparable Models
2018-11-01 02:37:25
Ying-Ying Lee
http://arxiv.org/abs/1811.00157v1, http://arxiv.org/pdf/1811.00157v1
econ.EM
29,145
em
I develop a new identification strategy for treatment effects when noisy measurements of unobserved confounding factors are available. I use proxy variables to construct a random variable conditional on which treatment variables become exogenous. The key idea is that, under appropriate conditions, there exists a one-to-one mapping between the distribution of unobserved confounding factors and the distribution of proxies. To ensure sufficient variation in the constructed control variable, I use an additional variable, termed excluded variable, which satisfies certain exclusion restrictions and relevance conditions. I establish asymptotic distributional results for semiparametric and flexible parametric estimators of causal parameters. I illustrate empirical relevance and usefulness of my results by estimating causal effects of attending selective college on earnings.
Treatment Effect Estimation with Noisy Conditioning Variables
2018-11-02 01:53:48
Kenichi Nagasawa
http://arxiv.org/abs/1811.00667v4, http://arxiv.org/pdf/1811.00667v4
econ.EM
29,146
em
We develop a new statistical procedure to test whether the dependence structure is identical between two groups. Rather than relying on a single index such as Pearson's correlation coefficient or Kendall's Tau, we consider the entire dependence structure by investigating the dependence functions (copulas). The critical values are obtained by a modified randomization procedure designed to exploit asymptotic group invariance conditions. Implementation of the test is intuitive and simple, and does not require any specification of a tuning parameter or weight function. At the same time, the test exhibits excellent finite sample performance, with the null rejection rates almost equal to the nominal level even when the sample size is extremely small. Two empirical applications concerning the dependence between income and consumption, and the Brexit effect on European financial market integration are provided.
Randomization Tests for Equality in Dependence Structure
2018-11-06 03:59:00
Juwon Seo
http://arxiv.org/abs/1811.02105v1, http://arxiv.org/pdf/1811.02105v1
econ.EM
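A naive label-permutation version of a test for equal dependence structure, built from a Cramér-von Mises-type distance between empirical copulas; the paper's modified randomization procedure and asymptotic group invariance arguments are not reproduced here, and the grid and sample sizes are arbitrary.

```python
import numpy as np
from scipy.stats import rankdata

def cvm_copula_distance(a, b):
    """Cramer-von-Mises-type distance between the empirical copulas of two samples."""
    grid = np.linspace(0.05, 0.95, 10)
    uu, vv = np.meshgrid(grid, grid)
    def ecop(z, u, v):
        ranks = np.column_stack([rankdata(z[:, 0]), rankdata(z[:, 1])]) / (len(z) + 1)
        return np.mean((ranks[:, 0, None] <= u.ravel()) & (ranks[:, 1, None] <= v.ravel()), axis=0)
    return np.sum((ecop(a, uu, vv) - ecop(b, uu, vv)) ** 2)

rng = np.random.default_rng(7)
n = 300
# Group 1: positively dependent pair; group 2: independent pair (null of equality false).
z = rng.normal(size=(n, 2))
g1 = np.column_stack([z[:, 0], 0.7 * z[:, 0] + 0.7 * z[:, 1]])
g2 = rng.normal(size=(n, 2))

stat = cvm_copula_distance(g1, g2)
pooled = np.vstack([g1, g2])
perm_stats = []
for _ in range(200):                       # plain label permutations (a simplification)
    idx = rng.permutation(2 * n)
    perm_stats.append(cvm_copula_distance(pooled[idx[:n]], pooled[idx[n:]]))
pval = np.mean(np.array(perm_stats) >= stat)
print(f"test statistic {stat:.3f}, permutation p-value {pval:.3f}")
```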
29,147
em
Finite mixture models are useful in applied econometrics. They can be used to model unobserved heterogeneity, which plays major roles in labor economics, industrial organization and other fields. Mixtures are also convenient in dealing with contaminated sampling models and models with multiple equilibria. This paper shows that finite mixture models are nonparametrically identified under weak assumptions that are plausible in economic applications. The key is to utilize the identification power implied by information in covariates variation. First, three identification approaches are presented, under distinct and non-nested sets of sufficient conditions. Observable features of data inform us which of the three approaches is valid. These results apply to general nonparametric switching regressions, as well as to structural econometric models, such as auction models with unobserved heterogeneity. Second, some extensions of the identification results are developed. In particular, a mixture regression where the mixing weights depend on the value of the regressors in a fully unrestricted manner is shown to be nonparametrically identifiable. This means a finite mixture model with function-valued unobserved heterogeneity can be identified in a cross-section setting, without restricting the dependence pattern between the regressor and the unobserved heterogeneity. In this aspect it is akin to fixed effects panel data models which permit unrestricted correlation between unobserved heterogeneity and covariates. Third, the paper shows that fully nonparametric estimation of the entire mixture model is possible, by forming a sample analogue of one of the new identification strategies. The estimator is shown to possess a desirable polynomial rate of convergence as in a standard nonparametric estimation problem, despite nonregular features of the model.
Nonparametric Analysis of Finite Mixtures
2018-11-07 05:16:14
Yuichi Kitamura, Louise Laage
http://arxiv.org/abs/1811.02727v1, http://arxiv.org/pdf/1811.02727v1
econ.EM
29,148
em
Single index linear models for binary response with random coefficients have been extensively employed in many econometric settings under various parametric specifications of the distribution of the random coefficients. Nonparametric maximum likelihood estimation (NPMLE) as proposed by Cosslett (1983) and Ichimura and Thompson (1998), in contrast, has received less attention in applied work, due primarily to computational difficulties. We propose a new approach to the computation of NPMLEs for binary response models that significantly increases their computational tractability, thereby facilitating greater flexibility in applications. Our approach, which relies on recent developments involving the geometry of hyperplane arrangements, is contrasted with the recently proposed deconvolution method of Gautier and Kitamura (2013). An application to modal choice for the journey to work in the Washington DC area illustrates the methods.
Nonparametric maximum likelihood methods for binary response models with random coefficients
2018-11-08 12:33:02
Jiaying Gu, Roger Koenker
http://arxiv.org/abs/1811.03329v3, http://arxiv.org/pdf/1811.03329v3
econ.EM
29,149
em
This study proposes a point estimator of the break location for a one-time structural break in linear regression models. If the break magnitude is small, the least-squares estimator of the break date has two modes at the ends of the finite sample period, regardless of the true break location. To solve this problem, I suggest an alternative estimator based on a modification of the least-squares objective function. The modified objective function incorporates estimation uncertainty that varies across potential break dates. The new break point estimator is consistent and has a unimodal finite sample distribution under small break magnitudes. A limit distribution is provided under an in-fill asymptotic framework. Monte Carlo simulation results suggest that the new estimator outperforms the least-squares estimator. I apply the method to estimate the break date in U.S. real GDP growth and U.S. and UK stock return prediction models.
Estimation of a Structural Break Point in Linear Regression Models
2018-11-09 03:10:11
Yaein Baek
http://arxiv.org/abs/1811.03720v3, http://arxiv.org/pdf/1811.03720v3
econ.EM
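A sketch of the standard least-squares break-date estimator that the paper modifies: grid-search the candidate break date minimizing the combined sum of squared residuals of the two sub-sample regressions. The modification that weights candidate dates by their estimation uncertainty is not implemented, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)
T, k0 = 200, 120                              # sample size and true break date (simulated)
x = rng.normal(size=T)
beta = np.where(np.arange(T) < k0, 1.0, 1.5)  # slope shifts at the break
y = beta * x + rng.normal(size=T)

def ssr_at(k):
    # Separate OLS fits before and after candidate break date k.
    ssr = 0.0
    for idx in (slice(0, k), slice(k, T)):
        X = np.column_stack([np.ones(T)[idx], x[idx]])
        b, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        u = y[idx] - X @ b
        ssr += u @ u
    return ssr

candidates = range(int(0.15 * T), int(0.85 * T))   # trim the sample ends
k_hat = min(candidates, key=ssr_at)
print(f"least-squares break date estimate: {k_hat} (true: {k0})")
```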
29,150
em
This paper analyses the use of bootstrap methods to test for parameter change in linear models estimated via Two Stage Least Squares (2SLS). Two types of test are considered: one where the null hypothesis is of no change and the alternative hypothesis involves discrete change at k unknown break-points in the sample; and a second test where the null hypothesis is that there is discrete parameter change at l break-points in the sample against an alternative in which the parameters change at l + 1 break-points. In both cases, we consider inferences based on a sup-Wald-type statistic using either the wild recursive bootstrap or the wild fixed bootstrap. We establish the asymptotic validity of these bootstrap tests under a set of general conditions that allow the errors to exhibit conditional and/or unconditional heteroskedasticity, and report results from a simulation study that indicate the tests yield reliable inferences in the sample sizes often encountered in macroeconomics. The analysis covers the cases where the first-stage estimation of 2SLS involves a model whose parameters are either constant or themselves subject to discrete parameter change. If the errors exhibit unconditional heteroskedasticity and/or the reduced form is unstable then the bootstrap methods are particularly attractive because the limiting distributions of the test statistics are not pivotal.
Bootstrapping Structural Change Tests
2018-11-09 23:15:33
Otilia Boldea, Adriana Cornea-Madeira, Alastair R. Hall
http://dx.doi.org/10.1016/j.jeconom.2019.05.019, http://arxiv.org/abs/1811.04125v1, http://arxiv.org/pdf/1811.04125v1
econ.EM
29,151
em
Identification of multinomial choice models is often established by using special covariates that have full support. This paper shows how these identification results can be extended to a large class of multinomial choice models when all covariates are bounded. I also provide a new $\sqrt{n}$-consistent asymptotically normal estimator of the finite-dimensional parameters of the model.
Identification and estimation of multinomial choice models with latent special covariates
2018-11-14 01:48:40
Nail Kashaev
http://arxiv.org/abs/1811.05555v3, http://arxiv.org/pdf/1811.05555v3
econ.EM
29,152
em
In this paper, we investigate seemingly unrelated regression (SUR) models that allow the number of equations (N) to be large and comparable to the number of observations in each equation (T). It is well known in the literature that the conventional SUR estimator, for example the generalized least squares (GLS) estimator of Zellner (1962), does not perform well in this setting. As the main contribution of the paper, we propose a new feasible GLS estimator called the feasible graphical lasso (FGLasso) estimator. For a feasible implementation of the GLS estimator, we use the graphical lasso estimation of the precision matrix (the inverse of the covariance matrix of the equation system errors), assuming that the underlying unknown precision matrix is sparse. We derive the asymptotic theory of the new estimator and investigate its finite sample properties via Monte Carlo simulations.
Estimation of High-Dimensional Seemingly Unrelated Regression Models
2018-11-14 02:19:46
Lidan Tan, Khai X. Chiong, Hyungsik Roger Moon
http://arxiv.org/abs/1811.05567v1, http://arxiv.org/pdf/1811.05567v1
econ.EM
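A small-dimensional sketch of the two-step FGLasso idea described above: equation-by-equation OLS residuals, a graphical-lasso estimate of the error precision matrix, then feasible GLS on the stacked system. The dimensions, penalty level and data generating process are illustrative only.

```python
import numpy as np
from scipy.linalg import block_diag
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(9)
N, T = 5, 100                                   # equations and time periods (small for brevity)
X_list = [np.column_stack([np.ones(T), rng.normal(size=T)]) for _ in range(N)]
Sigma = 0.5 * np.eye(N) + 0.5                   # cross-equation error covariance
E = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
betas = [np.array([1.0, 0.5 + 0.1 * i]) for i in range(N)]
Y = np.column_stack([X_list[i] @ betas[i] + E[:, i] for i in range(N)])

# Step 1: equation-by-equation OLS residuals.
resid = np.column_stack([Y[:, i] - X_list[i] @ np.linalg.lstsq(X_list[i], Y[:, i], rcond=None)[0]
                         for i in range(N)])

# Step 2: graphical-lasso estimate of the precision matrix of the system errors.
prec = GraphicalLasso(alpha=0.05).fit(resid).precision_

# Step 3: feasible GLS on the stacked system, y = diag(X_1,...,X_N) b + e.
Xb = block_diag(*X_list)
yv = Y.T.ravel()                                 # stack equation by equation
W = np.kron(prec, np.eye(T))                     # weighting matrix Sigma^{-1} (x) I_T
b_fgls = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ yv)
print("FGLasso-style FGLS slope estimates:", np.round(b_fgls[1::2], 3))
```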
29,161
em
We study partial identification of the preference parameters in the one-to-one matching model with perfectly transferable utilities. We do so without imposing parametric distributional assumptions on the unobserved heterogeneity and with data on one large market. We provide a tractable characterisation of the identified set under various classes of nonparametric distributional assumptions on the unobserved heterogeneity. Using our methodology, we re-examine some of the relevant questions in the empirical literature on the marriage market, which have been previously studied under the Logit assumption. Our results reveal that many findings in the aforementioned literature are primarily driven by such parametric restrictions.
Partial Identification in Matching Models for the Marriage Market
2019-02-15 00:37:28
Cristina Gualdani, Shruti Sinha
http://arxiv.org/abs/1902.05610v6, http://arxiv.org/pdf/1902.05610v6
econ.EM
29,153
em
In this study, Bayesian inference is developed for structural vector autoregressive models in which the structural parameters are identified via Markov-switching heteroskedasticity. In such a model, restrictions that are just-identifying in the homoskedastic case, become over-identifying and can be tested. A set of parametric restrictions is derived under which the structural matrix is globally or partially identified and a Savage-Dickey density ratio is used to assess the validity of the identification conditions. The latter is facilitated by analytical derivations that make the computations fast and numerical standard errors small. As an empirical example, monetary models are compared using heteroskedasticity as an additional device for identification. The empirical results support models with money in the interest rate reaction function.
Bayesian Inference for Structural Vector Autoregressions Identified by Markov-Switching Heteroskedasticity
2018-11-20 13:29:18
Helmut Lütkepohl, Tomasz Woźniak
http://dx.doi.org/10.1016/j.jedc.2020.103862, http://arxiv.org/abs/1811.08167v1, http://arxiv.org/pdf/1811.08167v1
econ.EM
29,154
em
In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian non-linear time series framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with their evolution being driven by a Markov process. We assume a time-varying transition probability matrix with transition probabilities depending on a measure of the monetary policy stance of the central bank at the home and foreign country. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time and a model approach that takes this empirical evidence seriously yields improvements in accuracy of density forecasts for most currency pairs considered.
Model instability in predictive exchange rate regressions
2018-11-21 19:40:00
Niko Hauzenberger, Florian Huber
http://arxiv.org/abs/1811.08818v2, http://arxiv.org/pdf/1811.08818v2
econ.EM
29,155
em
Volatilities, in high-dimensional panels of economic time series with a dynamic factor structure on the levels or returns, typically also admit a dynamic factor decomposition. We consider a two-stage dynamic factor model method recovering the common and idiosyncratic components of both levels and log-volatilities. Specifically, in a first estimation step, we extract the common and idiosyncratic shocks for the levels, from which a log-volatility proxy is computed. In a second step, we estimate a dynamic factor model, which is equivalent to a multiplicative factor structure for volatilities, for the log-volatility panel. By exploiting this two-stage factor approach, we build one-step-ahead conditional prediction intervals for large $n \times T$ panels of returns. Those intervals are based on empirical quantiles, not on conditional variances; they can be either equal- or unequal-tailed. We provide uniform consistency and consistency rates results for the proposed estimators as both $n$ and $T$ tend to infinity. We study the finite-sample properties of our estimators by means of Monte Carlo simulations. Finally, we apply our methodology to a panel of asset returns belonging to the S&P100 index in order to compute one-step-ahead conditional prediction intervals for the period 2006-2013. A comparison with the componentwise GARCH benchmark (which does not take advantage of cross-sectional information) demonstrates the superiority of our approach, which is genuinely multivariate (and high-dimensional), nonparametric, and model-free.
Generalized Dynamic Factor Models and Volatilities: Consistency, rates, and prediction intervals
2018-11-25 19:06:08
Matteo Barigozzi, Marc Hallin
http://dx.doi.org/10.1016/j.jeconom.2020.01.003, http://arxiv.org/abs/1811.10045v2, http://arxiv.org/pdf/1811.10045v2
econ.EM
29,156
em
This paper studies model selection in semiparametric econometric models. It develops a consistent series-based model selection procedure based on a Bayesian Information Criterion (BIC) type criterion to select between several classes of models. The procedure selects a model by minimizing the semiparametric Lagrange Multiplier (LM) type test statistic from Korolev (2018) but additionally rewards simpler models. The paper also develops consistent upward testing (UT) and downward testing (DT) procedures based on the semiparametric LM type specification test. The proposed semiparametric LM-BIC and UT procedures demonstrate good performance in simulations. To illustrate the use of these semiparametric model selection procedures, I apply them to the parametric and semiparametric gasoline demand specifications from Yatchew and No (2001). The LM-BIC procedure selects the semiparametric specification that is nonparametric in age but parametric in all other variables, which is in line with the conclusions in Yatchew and No (2001). The results of the UT and DT procedures heavily depend on the choice of tuning parameters and assumptions about the model errors.
LM-BIC Model Selection in Semiparametric Models
2018-11-26 23:29:18
Ivan Korolev
http://arxiv.org/abs/1811.10676v1, http://arxiv.org/pdf/1811.10676v1
econ.EM
29,157
em
This paper studies a fixed-design residual bootstrap method for the two-step estimator of Francq and Zako\"ian (2015) associated with the conditional Expected Shortfall. For a general class of volatility models, the bootstrap is shown to be asymptotically valid under the conditions imposed by Beutner et al. (2018). A simulation study is conducted, revealing that the average coverage rates are satisfactory for most settings considered. There is no clear evidence favoring any of the three proposed bootstrap intervals. This contrasts with the results in Beutner et al. (2018) for the VaR, for which the reversed-tails interval has a superior performance.
A Residual Bootstrap for Conditional Expected Shortfall
2018-11-27 01:03:46
Alexander Heinemann, Sean Telg
http://arxiv.org/abs/1811.11557v1, http://arxiv.org/pdf/1811.11557v1
econ.EM
29,158
em
We provide a complete asymptotic distribution theory for clustered data with a large number of independent groups, generalizing the classic laws of large numbers, uniform laws, central limit theory, and clustered covariance matrix estimation. Our theory allows for clustered observations with heterogeneous and unbounded cluster sizes. Our conditions cleanly nest the classical results for i.n.i.d. observations, in the sense that our conditions specialize to the classical conditions under independent sampling. We use this theory to develop a full asymptotic distribution theory for estimation based on linear least-squares, 2SLS, nonlinear MLE, and nonlinear GMM.
Asymptotic Theory for Clustered Samples
2019-02-05 02:46:04
Bruce E. Hansen, Seojeong Lee
http://arxiv.org/abs/1902.01497v1, http://arxiv.org/pdf/1902.01497v1
econ.EM
29,159
em
In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions, and formally verify these assumptions for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. (2019, arXiv:1710.00643) by providing practically relevant applications of their theory.
A General Framework for Prediction in Time Series Models
2019-02-05 13:06:04
Eric Beutner, Alexander Heinemann, Stephan Smeekes
http://arxiv.org/abs/1902.01622v1, http://arxiv.org/pdf/1902.01622v1
econ.EM
29,162
em
The identification of network effects is based on either group size variation, the structure of the network, or the relative position in the network. I provide easy-to-verify necessary conditions for identification of undirected network models based on the number of distinct eigenvalues of the adjacency matrix. Identification of network effects is possible, although in many empirical situations existing identification strategies may require the use of many instruments or instruments that could be strongly correlated with each other. The use of highly correlated instruments or many instruments may lead to weak identification or many-instruments bias. This paper proposes regularized versions of the two-stage least squares (2SLS) estimators as a solution to these problems. The proposed estimators are consistent and asymptotically normal. A Monte Carlo study illustrates the properties of the regularized estimators. An empirical application, assessing a local government tax competition model, shows the empirical relevance of using regularization methods.
Weak Identification and Estimation of Social Interaction Models
2019-02-16 22:36:11
Guy Tchuente
http://arxiv.org/abs/1902.06143v1, http://arxiv.org/pdf/1902.06143v1
econ.EM
29,163
em
This paper is concerned with learning decision makers' preferences using data on observed choices from a finite set of risky alternatives. We propose a discrete choice model with unobserved heterogeneity in consideration sets and in standard risk aversion. We obtain sufficient conditions for the model's semi-nonparametric point identification, including in cases where consideration depends on preferences and on some of the exogenous variables. Our method yields an estimator that is easy to compute and is applicable in markets with large choice sets. We illustrate its properties using a dataset on property insurance purchases.
Discrete Choice under Risk with Limited Consideration
2019-02-18 19:05:32
Levon Barseghyan, Francesca Molinari, Matthew Thirkettle
http://arxiv.org/abs/1902.06629v3, http://arxiv.org/pdf/1902.06629v3
econ.EM
29,164
em
The synthetic control method is often used in treatment effect estimation with panel data where only a few units are treated and a small number of post-treatment periods are available. Current estimation and inference procedures for synthetic control methods do not allow for the existence of spillover effects, which are plausible in many applications. In this paper, we consider estimation and inference for synthetic control methods allowing for spillover effects. We propose estimators for both direct treatment effects and spillover effects and show they are asymptotically unbiased. In addition, we propose an inferential procedure and show it is asymptotically unbiased. Our estimation and inference procedure applies to cases with multiple treated units or periods, and where the underlying factor model is either stationary or cointegrated. In simulations, we confirm that the presence of spillovers renders current methods biased and distorts test sizes, whereas our methods yield properly sized tests and retain reasonable power. We apply our method to a classic empirical example that investigates the effect of California's tobacco control program, as in Abadie et al. (2010), and find evidence of spillovers.
Estimation and Inference for Synthetic Control Methods with Spillover Effects
2019-02-20 02:19:26
Jianfei Cao, Connor Dowd
http://arxiv.org/abs/1902.07343v2, http://arxiv.org/pdf/1902.07343v2
econ.EM
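For context, a sketch of the standard synthetic control weight computation (nonnegative weights summing to one that match the treated unit's pre-treatment path); the spillover-robust estimators and the inference procedure proposed in the paper are not implemented here, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
T0, J = 20, 8                                   # pre-treatment periods and donor units
donors = rng.normal(size=(T0, J))
treated = donors[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.1, T0)

# Standard synthetic control: nonnegative weights summing to one that best match
# the treated unit's pre-treatment path.
obj = lambda w: np.sum((treated - donors @ w) ** 2)
cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
res = minimize(obj, np.full(J, 1 / J), bounds=[(0, 1)] * J, constraints=cons, method="SLSQP")
print("synthetic control weights:", np.round(res.x, 3))
```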
29,165
em
I show how to reveal ambiguity-sensitive preferences over a single natural event. In the proposed elicitation mechanism, agents mix binarized bets on the uncertain event and its complement under varying betting odds. The mechanism identifies the interval of relevant probabilities for maxmin and maxmax preferences. For variational preferences and smooth second-order preferences, the mechanism reveals inner bounds, that are sharp under high stakes. For small stakes, mixing under second-order preferences is dominated by the variance of the second-order distribution. Additionally, the mechanism can distinguish extreme ambiguity aversion as in maxmin preferences and moderate ambiguity aversion as in variational or smooth second-order preferences. An experimental implementation suggests that participants perceive almost as much ambiguity for the stock index and actions of other participants as for the Ellsberg urn, indicating the importance of ambiguity in real-world decision-making.
Eliciting ambiguity with mixing bets
2019-02-20 11:19:21
Patrick Schmidt
http://arxiv.org/abs/1902.07447v4, http://arxiv.org/pdf/1902.07447v4
econ.EM
29,166
em
Ordered probit and logit models have frequently been used to estimate the mean ranking of happiness outcomes (and other ordinal data) across groups. However, it has recently been highlighted that such a ranking may not be identified in most happiness applications. We suggest that researchers focus on the median comparison instead of the mean. This is because the median rank can be identified even if the mean rank is not. Furthermore, median ranks in probit and logit models can be readily estimated using standard statistical software. The median ranking, as well as rankings for other quantiles, can also be estimated semiparametrically, and we provide a new constrained mixed integer optimization procedure for implementation. We apply it to estimate a happiness equation using US General Social Survey data.
Robust Ranking of Happiness Outcomes: A Median Regression Perspective
2019-02-20 21:50:07
Le-Yu Chen, Ekaterina Oparina, Nattavudh Powdthavee, Sorawoot Srisuma
http://arxiv.org/abs/1902.07696v3, http://arxiv.org/pdf/1902.07696v3
econ.EM
29,167
em
We bound features of counterfactual choices in the nonparametric random utility model of demand, i.e. if observable choices are repeated cross-sections and one allows for unrestricted, unobserved heterogeneity. In this setting, tight bounds are developed on counterfactual discrete choice probabilities and on the expectation and c.d.f. of (functionals of) counterfactual stochastic demand.
Nonparametric Counterfactuals in Random Utility Models
2019-02-22 06:07:40
Yuichi Kitamura, Jörg Stoye
http://arxiv.org/abs/1902.08350v2, http://arxiv.org/pdf/1902.08350v2
econ.EM
29,168
em
We propose a counterfactual Kaplan-Meier estimator that incorporates exogenous covariates and unobserved heterogeneity of unrestricted dimensionality in duration models with random censoring. Under some regularity conditions, we establish the joint weak convergence of the proposed counterfactual estimator and the unconditional Kaplan-Meier (1958) estimator. Applying the functional delta method, we make inference on the cumulative hazard policy effect, that is, the change of duration dependence in response to a counterfactual policy. We also evaluate the finite sample performance of the proposed counterfactual estimation method in a Monte Carlo study.
Counterfactual Inference in Duration Models with Random Censoring
2019-02-22 17:17:05
Jiun-Hua Su
http://arxiv.org/abs/1902.08502v1, http://arxiv.org/pdf/1902.08502v1
econ.EM
29,169
em
We show that when a high-dimensional data matrix is the sum of a low-rank matrix and a random error matrix with independent entries, the low-rank component can be consistently estimated by solving a convex minimization problem. We develop a new theoretical argument to establish consistency without assuming sparsity or the existence of any moments of the error matrix, so that fat-tailed continuous random errors such as Cauchy are allowed. The results are illustrated by simulations.
Robust Principal Component Analysis with Non-Sparse Errors
2019-02-23 07:55:29
Jushan Bai, Junlong Feng
http://arxiv.org/abs/1902.08735v2, http://arxiv.org/pdf/1902.08735v2
econ.EM
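A basic singular-value-thresholding illustration of low-rank recovery from a noisy matrix: soft-threshold the singular values at a level slightly above the expected largest noise singular value. This is only a didactic stand-in with Gaussian errors; the paper's convex program and its robustness to fat-tailed (e.g., Cauchy) errors go beyond this sketch.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(11)
n, p, r = 200, 80, 3
L0 = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))      # true low-rank component
X = L0 + rng.normal(scale=1.0, size=(n, p))                  # low rank plus Gaussian noise

# Threshold slightly above sqrt(n) + sqrt(p), the approximate largest noise singular value.
L_hat = svt(X, tau=1.1 * (np.sqrt(n) + np.sqrt(p)))
err = np.linalg.norm(L_hat - L0) / np.linalg.norm(L0)
print(f"rank of estimate: {np.linalg.matrix_rank(L_hat, tol=1e-8)}, relative error: {err:.3f}")
```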