id,category,text,title,published,author,link,primary_category 29425,em,"This paper provides an introduction to structural estimation methods for matching markets with transferable utility.",Structural Estimation of Matching Markets with Transferable Utility,2021-09-16 15:29:14,"Alfred Galichon, Bernard Salanié","http://arxiv.org/abs/2109.07932v1, http://arxiv.org/pdf/2109.07932v1",econ.EM 29133,em,"In this paper, we propose a model which simulates the odds distributions of a pari-mutuel betting system under two hypotheses on the behavior of bettors: 1. The amount of bets increases very rapidly as the deadline for betting comes near. 2. Each bettor bets on a horse which gives the largest expected value of the benefit. The results can be interpreted as showing that such efficient behaviors do not serve to extinguish the FL bias but instead produce an even stronger FL bias.",Efficiency in Micro-Behaviors and FL Bias,2018-05-11 05:17:42,"Kurihara Kazutaka, Yohei Tutiya","http://arxiv.org/abs/1805.04225v1, http://arxiv.org/pdf/1805.04225v1",econ.EM 29931,em,"This paper derives the efficiency bound for estimating the parameters of dynamic panel data models in the presence of an increasing number of incidental parameters. We study the efficiency problem by formulating the dynamic panel as a simultaneous equations system, and show that the quasi-maximum likelihood estimator (QMLE) applied to the system achieves the efficiency bound. Comparison of QMLE with fixed effects estimators is made.",Efficiency of QMLE for dynamic panel data models with interactive effects,2023-12-13 06:56:34,Jushan Bai,"http://arxiv.org/abs/2312.07881v1, http://arxiv.org/pdf/2312.07881v1",econ.EM 28940,em,"Endogeneity and missing data are common issues in empirical research. We investigate how both jointly affect inference on causal parameters. Conventional methods to estimate the variance, which treat the imputed data as if it was observed in the first place, are not reliable. We derive the asymptotic variance and propose a heteroskedasticity robust variance estimator for two-stage least squares which accounts for the imputation. Monte Carlo simulations support our theoretical findings.",On the Effect of Imputation on the 2SLS Variance,2019-03-26 19:42:59,"Helmut Farbmacher, Alexander Kann","http://arxiv.org/abs/1903.11004v1, http://arxiv.org/pdf/1903.11004v1",econ.EM 28947,em,"We propose a model selection criterion to distinguish purely causal from purely noncausal models in the framework of quantile autoregressions (QAR). We also present asymptotics for the i.i.d. case with regularly varying distributed innovations in QAR. This new modelling perspective is appealing for investigating the presence of bubbles in economic and financial time series, and is an alternative to approximate maximum likelihood methods. We illustrate our analysis using hyperinflation episodes in Latin American countries.",Identification of Noncausal Models by Quantile Autoregressions,2019-04-11 23:49:57,"Alain Hecq, Li Sun","http://arxiv.org/abs/1904.05952v1, http://arxiv.org/pdf/1904.05952v1",econ.EM 28965,em,"Complex functions have multiple uses in various fields of study, so analyzing their characteristics is of broad interest to other sciences. This work begins with a particular class of rational functions of a complex variable; from this, two elementary properties concerning the residues are deduced, and a result establishing a lower bound for the p-norm of the residues vector is proposed.
Applications to autoregressive processes are presented, and the examples are illustrated using historical data on electricity generation and econometric series.",On the residues vectors of a rational class of complex functions. Application to autoregressive processes,2019-07-12 23:46:56,"Guillermo Daniel Scheidereiter, Omar Roberto Faure","http://arxiv.org/abs/1907.05949v1, http://arxiv.org/pdf/1907.05949v1",econ.EM 29080,em,"Many econometric models can be analyzed as finite mixtures. We focus on two-component mixtures and we show that they are nonparametrically point identified by a combination of an exclusion restriction and tail restrictions. Our identification analysis suggests simple closed-form estimators of the component distributions and mixing proportions, as well as a specification test. We derive their asymptotic properties using results on tail empirical processes and we present a simulation study that documents their finite-sample performance.",Inference on two component mixtures under tail restrictions,2021-02-11 22:27:47,"Marc Henry, Koen Jochmans, Bernard Salanié","http://arxiv.org/abs/2102.06232v1, http://arxiv.org/pdf/2102.06232v1",econ.EM 29093,em,"This paper establishes an extended representation theorem for unit-root VARs. A specific algebraic technique is devised to recover stationarity from the solution of the model in the form of a cointegrating transformation. Closed forms of the results of interest are derived for integrated processes up to the 4-th order. An extension to higher-order processes turns out to be within the reach of an induction argument.",Cointegrated Solutions of Unit-Root VARs: An Extended Representation Theorem,2021-02-21 18:28:20,"Mario Faliva, Maria Grazia Zoia","http://arxiv.org/abs/2102.10626v1, http://arxiv.org/pdf/2102.10626v1",econ.EM 29015,em,"The randomization inference literature studying randomized controlled trials (RCTs) assumes that units' potential outcomes are deterministic. This assumption is unlikely to hold, as stochastic shocks may take place during the experiment. In this paper, we consider the case of an RCT with individual-level treatment assignment, and we allow for individual-level and cluster-level (e.g. village-level) shocks. We show that one can draw inference on the ATE conditional on the realizations of the cluster-level shocks, using heteroskedasticity-robust standard errors, or on the ATE netted out of those shocks, using cluster-robust standard errors.",Clustering and External Validity in Randomized Controlled Trials,2019-12-02 22:30:25,"Antoine Deeb, Clément de Chaisemartin","http://arxiv.org/abs/1912.01052v7, http://arxiv.org/pdf/1912.01052v7",econ.EM 28777,em,"This article reviews recent advances in fixed effect estimation of panel data models for long panels, where the number of time periods is relatively large. We focus on semiparametric models with unobserved individual and time effects, where the distribution of the outcome variable conditional on covariates and unobserved effects is specified parametrically, while the distribution of the unobserved effects is left unrestricted. Compared to existing reviews on long panels (Arellano and Hahn 2007; a section in Arellano and Bonhomme 2011) we discuss models with both individual and time effects, split-panel Jackknife bias corrections, unbalanced panels, distribution and quantile effects, and other extensions.
Understanding and correcting the incidental parameter bias caused by the estimation of many fixed effects is our main focus, and the unifying theme is that the order of this bias is given by the simple formula p/n for all models discussed, with p the number of estimated parameters and n the total sample size.",Fixed Effect Estimation of Large T Panel Data Models,2017-09-26 15:46:13,"Iván Fernández-Val, Martin Weidner","http://arxiv.org/abs/1709.08980v2, http://arxiv.org/pdf/1709.08980v2",econ.EM 28778,em,"This paper considers the identification of treatment effects on conditional transition probabilities. We show that even under random assignment only the instantaneous average treatment effect is point identified. Since treated and control units drop out at different rates, randomization only ensures the comparability of treatment and controls at the time of randomization, so that long-run average treatment effects are not point identified. Instead we derive informative bounds on these average treatment effects. Our bounds do not impose (semi)parametric restrictions, for example, proportional hazards. We also explore various assumptions such as monotone treatment response, common shocks and positively correlated outcomes that tighten the bounds.",Bounds On Treatment Effects On Transitions,2017-09-26 15:46:40,"Johan Vikström, Geert Ridder, Martin Weidner","http://arxiv.org/abs/1709.08981v1, http://arxiv.org/pdf/1709.08981v1",econ.EM 28779,em,"We propose an inference procedure for estimators defined by mathematical programming problems, focusing on the important special cases of linear programming (LP) and quadratic programming (QP). In these settings, the coefficients in both the objective function and the constraints of the mathematical programming problem may be estimated from data and hence involve sampling error. Our inference approach exploits the characterization of the solutions to these programming problems by complementarity conditions; by doing so, we can transform the problem of doing inference on the solution of a constrained optimization problem (a non-standard inference problem) into one involving inference based on a set of inequalities with pre-estimated coefficients, which is much better understood. We evaluate the performance of our procedure in several Monte Carlo simulations and an empirical application to the classic portfolio selection problem in finance.",Inference on Estimators defined by Mathematical Programming,2017-09-26 19:24:52,"Yu-Wei Hsieh, Xiaoxia Shi, Matthew Shum","http://arxiv.org/abs/1709.09115v1, http://arxiv.org/pdf/1709.09115v1",econ.EM 28780,em,"We analyze the empirical content of the Roy model, stripped down to its essential features, namely sector specific unobserved heterogeneity and self-selection on the basis of potential outcomes. We characterize sharp bounds on the joint distribution of potential outcomes and testable implications of the Roy self-selection model under an instrumental constraint on the joint distribution of potential outcomes we call stochastically monotone instrumental variable (SMIV). We show that testing the Roy model selection is equivalent to testing stochastic monotonicity of observed outcomes relative to the instrument. We apply our sharp bounds to the derivation of a measure of departure from Roy self-selection to identify values of observable characteristics that induce the most costly misallocation of talent and sector and are therefore prime targets for intervention. 
Special emphasis is put on the case of binary outcomes, which has received little attention in the literature to date. For richer sets of outcomes, we emphasize the distinction between pointwise sharp bounds and functional sharp bounds, and its importance, when constructing sharp bounds on functional features, such as inequality measures. We analyze a Roy model of college major choice in Canada and Germany within this framework, and we take a new look at the under-representation of women in STEM.",Sharp bounds and testability of a Roy model of STEM major choices,2017-09-27 02:25:35,"Ismael Mourifie, Marc Henry, Romuald Meango","http://arxiv.org/abs/1709.09284v2, http://arxiv.org/pdf/1709.09284v2",econ.EM 28781,em,"The ongoing net neutrality debate has generated a lot of heated discussion on whether or not monetary interactions between content and access providers should be regulated. Among the several topics discussed, `differential pricing' has recently received attention due to `zero-rating' platforms proposed by some service providers. In the differential pricing scheme, Internet Service Providers (ISPs) can exempt data access charges for content from certain Content Providers (CPs), which is then zero-rated, while offering no exemption for content from other CPs. This allows CPs to make `sponsorship' agreements to zero-rate their content and attract more user traffic. In this paper, we study the effect of differential pricing on various players on the Internet. We first consider a model with a monopolistic ISP and multiple CPs where users select CPs based on the quality of service (QoS) and data access charges. We show that in a differential pricing regime 1) a CP offering low QoS can have a higher surplus than a CP offering better QoS through sponsorships. 2) Overall QoS (mean delay) for end users can degrade under differential pricing schemes. In the oligopolistic market with multiple ISPs, users tend to select the ISP with the lowest access charges, resulting in the same type of conclusions as in the monopolistic market. We then study how differential pricing affects the revenue of ISPs.",Zero-rating of Content and its Effect on the Quality of Service in the Internet,2017-09-27 07:51:32,"Manjesh K. Hanawal, Fehmina Malik, Yezekael Hayel","http://arxiv.org/abs/1709.09334v2, http://arxiv.org/pdf/1709.09334v2",econ.EM 28782,em,"The uncertainty and robustness of Computable General Equilibrium models can be assessed by conducting a Systematic Sensitivity Analysis. Different methods have been used in the literature for SSA of CGE models such as Gaussian Quadrature and Monte Carlo methods. This paper explores the use of Quasi-random Monte Carlo methods based on the Halton and Sobol' sequences as a means to improve the efficiency over regular Monte Carlo SSA, thus reducing the computational requirements of the SSA. The findings suggest that by using low-discrepancy sequences, the number of simulations required by the regular MC SSA methods can be notably reduced, hence lowering the computational time required for SSA of CGE models.",Quasi-random Monte Carlo application in CGE systematic sensitivity analysis,2017-09-28 01:54:30,Theodoros Chatzivasileiadis,"http://arxiv.org/abs/1709.09755v1, http://arxiv.org/pdf/1709.09755v1",econ.EM 28783,em,"We propose a method of estimating the linear-in-means model of peer effects in which the peer group, defined by a social network, is endogenous in the outcome equation for peer effects.
Endogeneity is due to unobservable individual characteristics that influence both link formation in the network and the outcome of interest. We propose two estimators of the peer effect equation that control for the endogeneity of the social connections using a control function approach. We leave the functional form of the control function unspecified and treat it as unknown. To estimate the model, we use a sieve semiparametric approach, and we establish asymptotics of the semiparametric estimator.",Estimation of Peer Effects in Endogenous Social Networks: Control Function Approach,2017-09-28 18:41:48,"Ida Johnsson, Hyungsik Roger Moon","http://arxiv.org/abs/1709.10024v3, http://arxiv.org/pdf/1709.10024v3",econ.EM 28784,em,"This paper considers the problem of forecasting a collection of short time series using cross sectional information in panel data. We construct point predictors using Tweedie's formula for the posterior mean of heterogeneous coefficients under a correlated random effects distribution. This formula utilizes cross-sectional information to transform the unit-specific (quasi) maximum likelihood estimator into an approximation of the posterior mean under a prior distribution that equals the population distribution of the random coefficients. We show that the risk of a predictor based on a non-parametric estimate of the Tweedie correction is asymptotically equivalent to the risk of a predictor that treats the correlated-random-effects distribution as known (ratio-optimality). Our empirical Bayes predictor performs well compared to various competitors in a Monte Carlo study. In an empirical application we use the predictor to forecast revenues for a large panel of bank holding companies and compare forecasts that condition on actual and severely adverse macroeconomic conditions.",Forecasting with Dynamic Panel Data Models,2017-09-29 01:46:48,"Laura Liu, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/1709.10193v1, http://arxiv.org/pdf/1709.10193v1",econ.EM 28785,em,"There is a fast growing literature that set-identifies structural vector autoregressions (SVARs) by imposing sign restrictions on the responses of a subset of the endogenous variables to a particular structural shock (sign-restricted SVARs). Most methods that have been used to construct pointwise coverage bands for impulse responses of sign-restricted SVARs are justified only from a Bayesian perspective. This paper demonstrates how to formulate the inference problem for sign-restricted SVARs within a moment-inequality framework. In particular, it develops methods of constructing confidence bands for impulse response functions of sign-restricted SVARs that are valid from a frequentist perspective. The paper also provides a comparison of frequentist and Bayesian coverage bands in the context of an empirical application - the former can be substantially wider than the latter.",Inference for VARs Identified with Sign Restrictions,2017-09-29 02:25:13,"Eleonora Granziera, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/1709.10196v2, http://arxiv.org/pdf/1709.10196v2",econ.EM 28786,em,"We systematically investigate the effect heterogeneity of job search programmes for unemployed workers. To investigate possibly heterogeneous employment effects, we combine non-experimental causal empirical models with Lasso-type estimators. The empirical analyses are based on rich administrative data from Swiss social security records. 
We find considerable heterogeneities only during the first six months after the start of training. Consistent with previous results of the literature, unemployed persons with fewer employment opportunities profit more from participating in these programmes. Furthermore, we also document heterogeneous employment effects by residence status. Finally, we show the potential of easy-to-implement programme participation rules for improving average employment effects of these active labour market programmes.",Heterogeneous Employment Effects of Job Search Programmes: A Machine Learning Approach,2017-09-29 11:21:08,"Michael Knaus, Michael Lechner, Anthony Strittmatter","http://dx.doi.org/10.3368/jhr.57.2.0718-9615R1, http://arxiv.org/abs/1709.10279v2, http://arxiv.org/pdf/1709.10279v2",econ.EM 28787,em,"Dynamic contracts with multiple agents is a classical decentralized decision-making problem with asymmetric information. In this paper, we extend the single-agent dynamic incentive contract model in continuous-time to a multi-agent scheme in finite horizon and allow the terminal reward to be dependent on the history of actions and incentives. We first derive a set of sufficient conditions for the existence of optimal contracts in the most general setting and conditions under which they form a Nash equilibrium. Then we show that the principal's problem can be converted to solving Hamilton-Jacobi-Bellman (HJB) equation requiring a static Nash equilibrium. Finally, we provide a framework to solve this problem by solving partial differential equations (PDE) derived from backward stochastic differential equations (BSDE).",A Note on the Multi-Agent Contracts in Continuous Time,2017-10-01 20:07:08,"Qi Luo, Romesh Saigal","http://arxiv.org/abs/1710.00377v2, http://arxiv.org/pdf/1710.00377v2",econ.EM 28788,em,"This paper presents a new estimator of the intercept of a linear regression model in cases where the outcome varaible is observed subject to a selection rule. The intercept is often in this context of inherent interest; for example, in a program evaluation context, the difference between the intercepts in outcome equations for participants and non-participants can be interpreted as the difference in average outcomes of participants and their counterfactual average outcomes if they had chosen not to participate. The new estimator can under mild conditions exhibit a rate of convergence in probability equal to $n^{-p/(2p+1)}$, where $p\ge 2$ is an integer that indexes the strength of certain smoothness assumptions. This rate of convergence is shown in this context to be the optimal rate of convergence for estimation of the intercept parameter in terms of a minimax criterion. The new estimator, unlike other proposals in the literature, is under mild conditions consistent and asymptotically normal with a rate of convergence that is the same regardless of the degree to which selection depends on unobservables in the outcome equation. Simulation evidence and an empirical example are included.",Rate-Optimal Estimation of the Intercept in a Semiparametric Sample-Selection Model,2017-10-04 03:02:22,Chuan Goh,"http://arxiv.org/abs/1710.01423v3, http://arxiv.org/pdf/1710.01423v3",econ.EM 28789,em,"Gale, Kuhn and Tucker (1950) introduced two ways to reduce a zero-sum game by packaging some strategies with respect to a probability distribution on them. In terms of value, they gave conditions for a desirable reduction. 
We show that a probability distribution for a desirable reduction relies on optimal strategies in the original game. Also, we correct an improper example given by them to show that the reverse of a theorem does not hold.","A Note on Gale, Kuhn, and Tucker's Reductions of Zero-Sum Games",2017-10-06 12:45:42,Shuige Liu,"http://arxiv.org/abs/1710.02326v1, http://arxiv.org/pdf/1710.02326v1",econ.EM 28790,em,"This study proposes a simple technique for propensity score matching for multiple treatment levels under the strong unconfoundedness assumption with the help of the Aitchison distance proposed in the field of compositional data analysis (CODA).",Propensity score matching for multiple treatment levels: A CODA-based contribution,2017-10-24 03:27:47,"Hajime Seya, Takahiro Yoshida","http://arxiv.org/abs/1710.08558v1, http://arxiv.org/pdf/1710.08558v1",econ.EM 28791,em,"We consider an index model of dyadic link formation with a homophily effect index and a degree heterogeneity index. We provide nonparametric identification results in a single large network setting for the potentially nonparametric homophily effect function, the realizations of unobserved individual fixed effects and the unknown distribution of idiosyncratic pairwise shocks, up to normalization, for each possible true value of the unknown parameters. We propose a novel form of scale normalization on an arbitrary interquantile range, which is not only theoretically robust but also proves particularly convenient for the identification analysis, as quantiles provide direct linkages between the observable conditional probabilities and the unknown index values. We then use an inductive ""in-fill and out-expansion"" algorithm to establish our main results, and consider extensions to more general settings that allow nonseparable dependence between homophily and degree heterogeneity, as well as certain extents of network sparsity and weaker assumptions on the support of unobserved heterogeneity. As a byproduct, we also propose a concept called ""modeling equivalence"" as a refinement of ""observational equivalence"", and use it to provide a formal discussion about normalization, identification and their interplay with counterfactuals.",Nonparametric Identification in Index Models of Link Formation,2017-10-30 23:32:12,Wayne Yuan Gao,"http://arxiv.org/abs/1710.11230v5, http://arxiv.org/pdf/1710.11230v5",econ.EM 28792,em,"Web search data are a valuable source of business and economic information. Previous studies have utilized Google Trends web search data for economic forecasting. We expand this work by providing algorithms to combine and aggregate search volume data, so that the resulting data is both consistent over time and consistent between data series. We give a brand equity example, where Google Trends is used to analyze shopping data for 100 top ranked brands and these data are used to nowcast economic variables. We describe the importance of out of sample prediction and show how principal component analysis (PCA) can be used to improve the signal to noise ratio and prevent overfitting in nowcasting models. We give a finance example, where exploratory data analysis and classification is used to analyze the relationship between Google Trends searches and stock prices.",Aggregating Google Trends: Multivariate Testing and Analysis,2017-12-08 19:18:10,"Stephen L. 
France, Yuying Shi","http://arxiv.org/abs/1712.03152v2, http://arxiv.org/pdf/1712.03152v2",econ.EM 28793,em,"We propose a new inferential methodology for dynamic economies that is robust to misspecification of the mechanism generating frictions. Economies with frictions are treated as perturbations of a frictionless economy that are consistent with a variety of mechanisms. We derive a representation for the law of motion for such economies and we characterize parameter set identification. We derive a link from model aggregate predictions to distributional information contained in qualitative survey data and specify conditions under which the identified set is refined. The latter is used to semi-parametrically estimate distortions due to frictions in macroeconomic variables. Based on these estimates, we propose a novel test for complete models. Using consumer and business survey data collected by the European Commission, we apply our method to estimate distortions due to financial frictions in the Spanish economy. We investigate the implications of these estimates for the adequacy of the standard model of financial frictions SW-BGG (Smets and Wouters (2007), Bernanke, Gertler, and Gilchrist (1999)).",Set Identified Dynamic Economies and Robustness to Misspecification,2017-12-11 11:41:11,Andreas Tryphonides,"http://arxiv.org/abs/1712.03675v2, http://arxiv.org/pdf/1712.03675v2",econ.EM 28794,em,"This paper defines the class of $\mathcal{H}$-valued autoregressive (AR) processes with a unit root of finite type, where $\mathcal{H}$ is an infinite dimensional separable Hilbert space, and derives a generalization of the Granger-Johansen Representation Theorem valid for any integration order $d=1,2,\dots$. An existence theorem shows that the solution of an AR with a unit root of finite type is necessarily integrated of some finite integer $d$ and displays a common trends representation with a finite number of common stochastic trends of the type of (cumulated) bilateral random walks and an infinite dimensional cointegrating space. A characterization theorem clarifies the connections between the structure of the AR operators and $(i)$ the order of integration, $(ii)$ the structure of the attractor space and the cointegrating space, $(iii)$ the expression of the cointegrating relations, and $(iv)$ the Triangular representation of the process. Except for the fact that the number of cointegrating relations that are integrated of order 0 is infinite, the representation of $\mathcal{H}$-valued ARs with a unit root of finite type coincides with that of usual finite dimensional VARs, which corresponds to the special case $\mathcal{H}=\mathbb{R}^p$.",Cointegration in functional autoregressive processes,2017-12-20 18:23:20,"Massimo Franchi, Paolo Paruolo","http://dx.doi.org/10.1017/S0266466619000306, http://arxiv.org/abs/1712.07522v2, http://arxiv.org/pdf/1712.07522v2",econ.EM 28795,em,"High-dimensional linear models with endogenous variables play an increasingly important role in recent econometric literature. In this work we allow for models with many endogenous variables and many instrument variables to achieve identification. Because of the high-dimensionality in the second stage, constructing honest confidence regions with asymptotically correct coverage is non-trivial. Our main contribution is to propose estimators and confidence regions that would achieve that. The approach relies on moment conditions that have an additional orthogonal property with respect to nuisance parameters. 
Moreover, estimation of high-dimension nuisance parameters is carried out via new pivotal procedures. In order to achieve simultaneously valid confidence regions we use a multiplier bootstrap procedure to compute critical values and establish its validity.",Simultaneous Confidence Intervals for High-dimensional Linear Models with Many Endogenous Variables,2017-12-21 20:33:40,"Alexandre Belloni, Christian Hansen, Whitney Newey","http://arxiv.org/abs/1712.08102v4, http://arxiv.org/pdf/1712.08102v4",econ.EM 28796,em,"This paper investigates the impacts of major natural resource discoveries since 1960 on life expectancy in the nations that they were resource poor prior to the discoveries. Previous literature explains the relation between nations wealth and life expectancy, but it has been silent about the impacts of resource discoveries on life expectancy. We attempt to fill this gap in this study. An important advantage of this study is that as the previous researchers argued resource discovery could be an exogenous variable. We use longitudinal data from 1960 to 2014 and we apply three modern empirical methods including Difference-in-Differences, Event studies, and Synthetic Control approach, to investigate the main question of the research which is 'how resource discoveries affect life expectancy?'. The findings show that resource discoveries in Ecuador, Yemen, Oman, and Equatorial Guinea have positive and significant impacts on life expectancy, but the effects for the European countries are mostly negative.",Resource Abundance and Life Expectancy,2018-01-01 01:43:39,Bahram Sanginabadi,"http://arxiv.org/abs/1801.00369v1, http://arxiv.org/pdf/1801.00369v1",econ.EM 28797,em,"In this paper we estimate a Bayesian vector autoregressive model with factor stochastic volatility in the error term to assess the effects of an uncertainty shock in the Euro area. This allows us to treat macroeconomic uncertainty as a latent quantity during estimation. Only a limited number of contributions to the literature estimate uncertainty and its macroeconomic consequences jointly, and most are based on single country models. We analyze the special case of a shock restricted to the Euro area, where member states are highly related by construction. We find significant results of a decrease in real activity for all countries over a period of roughly a year following an uncertainty shock. Moreover, equity prices, short-term interest rates and exports tend to decline, while unemployment levels increase. Dynamic responses across countries differ slightly in magnitude and duration, with Ireland, Slovakia and Greece exhibiting different reactions for some macroeconomic fundamentals.",Implications of macroeconomic volatility in the Euro area,2018-01-09 16:20:42,"Niko Hauzenberger, Maximilian Böck, Michael Pfarrhofer, Anna Stelzer, Gregor Zens","http://arxiv.org/abs/1801.02925v2, http://arxiv.org/pdf/1801.02925v2",econ.EM 28798,em,"We report a new result on lotteries --- that a well-funded syndicate has a purely mechanical strategy to achieve expected returns of 10\% to 25\% in an equiprobable lottery with no take and no carryover pool. We prove that an optimal strategy (Nash equilibrium) in a game between the syndicate and other players consists of betting one of each ticket (the ""trump ticket""), and extend that result to proportional ticket selection in non-equiprobable lotteries. The strategy can be adjusted to accommodate lottery taxes and carryover pools. 
No ""irrationality"" need be involved for the strategy to succeed --- it requires only that a large group of non-syndicate bettors each choose a few tickets independently.",A Method for Winning at Lotteries,2018-01-05 22:35:17,"Steven D. Moffitt, William T. Ziemba","http://arxiv.org/abs/1801.02958v1, http://arxiv.org/pdf/1801.02958v1",econ.EM 28799,em,"Despite its unusual payout structure, the Canadian 6/49 Lotto is one of the few government sponsored lotteries that has the potential for a favorable strategy we call ""buying the pot."" By buying the pot we mean that a syndicate buys each ticket in the lottery, ensuring that it holds a jackpot winner. We assume that the other bettors independently buy small numbers of tickets. This paper presents (1) a formula for the syndicate's expected return, (2) conditions under which buying the pot produces a significant positive expected return, and (3) the implications of these findings for lottery design.",Does it Pay to Buy the Pot in the Canadian 6/49 Lotto? Implications for Lottery Design,2018-01-06 00:58:18,"Steven D. Moffitt, William T. Ziemba","http://arxiv.org/abs/1801.02959v1, http://arxiv.org/pdf/1801.02959v1",econ.EM 28800,em,"Dynamic Discrete Choice Models (DDCMs) are important in the structural estimation literature. Since the structural errors are practically always continuous and unbounded in nature, researchers often use the expected value function. The idea to solve for the expected value function made solution more practical and estimation feasible. However, as we show in this paper, the expected value function is impractical compared to an alternative: the integrated (ex ante) value function. We provide brief descriptions of the inefficacy of the former, and benchmarks on actual problems with varying cardinality of the state space and number of decisions. Though the two approaches solve the same problem in theory, the benchmarks support the claim that the integrated value function is preferred in practice.",Solving Dynamic Discrete Choice Models: Integrated or Expected Value Function?,2018-01-11 23:26:00,Patrick Kofod Mogensen,"http://arxiv.org/abs/1801.03978v1, http://arxiv.org/pdf/1801.03978v1",econ.EM 28801,em,"This paper develops a new model and estimation procedure for panel data that allows us to identify heterogeneous structural breaks. We model individual heterogeneity using a grouped pattern. For each group, we allow common structural breaks in the coefficients. However, the number, timing, and size of these breaks can differ across groups. We develop a hybrid estimation procedure of the grouped fixed effects approach and adaptive group fused Lasso. We show that our method can consistently identify the latent group structure, detect structural breaks, and estimate the regression parameters. Monte Carlo results demonstrate the good performance of the proposed method in finite samples. An empirical application to the relationship between income and democracy illustrates the importance of considering heterogeneous structural breaks.",Heterogeneous structural breaks in panel data models,2018-01-15 09:19:28,"Ryo Okui, Wendun Wang","http://arxiv.org/abs/1801.04672v2, http://arxiv.org/pdf/1801.04672v2",econ.EM 28802,em,"We characterize common assumption of rationality of 2-person games within an incomplete information framework. 
We use the lexicographic model with incomplete information and show that a belief hierarchy expresses common assumption of rationality within a complete information framework if and only if there is a belief hierarchy within the corresponding incomplete information framework that expresses common full belief in caution, rationality, every good choice is supported, and prior belief in the original utility functions.",Characterizing Assumption of Rationality by Incomplete Information,2018-01-15 12:48:20,Shuige Liu,"http://arxiv.org/abs/1801.04714v1, http://arxiv.org/pdf/1801.04714v1",econ.EM 28803,em,"We first show (1) the importance of investigating the health expenditure process using the order two Markov chain model, rather than the standard order one model, which is widely used in the literature. A Markov chain of order two is the minimal framework that is capable of distinguishing those who experience a certain health expenditure level for the first time from those who have been experiencing that or other levels for some time. In addition, using the model we show (2) that the probability of encountering a health shock first decreases until around age 10, and then increases with age, particularly after age 40, (3) that health shock distributions among different age groups do not differ until their percentiles reach the median range, but that above the median the health shock distributions of older age groups gradually start to first-order dominate those of younger groups, and (4) that the persistency of health shocks also shows a U-shape in relation to age.",Quantifying Health Shocks Over the Life Cycle,2018-01-26 13:35:38,"Taiyo Fukai, Hidehiko Ichimura, Kyogo Kanazawa","http://arxiv.org/abs/1801.08746v1, http://arxiv.org/pdf/1801.08746v1",econ.EM 28804,em,"We define a modification of the standard Kripke model, called the ordered Kripke model, by introducing a linear order on the set of accessible states of each state. We first show this model can be used to describe the lexicographic belief hierarchy in epistemic game theory, and perfect rationalizability can be characterized within this model. Then we show that each ordered Kripke model is the limit of a sequence of standard probabilistic Kripke models with a modified (common) belief operator, in the senses of structure and the (epsilon-)permissibilities characterized within them.","Ordered Kripke Model, Permissibility, and Convergence of Probabilistic Kripke Model",2018-01-26 14:46:28,Shuige Liu,"http://arxiv.org/abs/1801.08767v1, http://arxiv.org/pdf/1801.08767v1",econ.EM 28805,em,"Why do women avoid participating in competitions, and how can we encourage them to participate? In this paper, we investigate how social image concerns affect women's decision to compete. We first construct a theoretical model and show that participating in a competition, even under affirmative action policies favoring women, is costly for women under public observability since it deviates from traditional female gender norms, resulting in women's low representation in competitive environments. We propose and theoretically show that introducing prosocial incentives in the competitive environment is effective and robust to public observability since (i) it induces women who are intrinsically motivated by prosocial incentives to enter the competitive environment and (ii) it makes participating in a competition not costly for women from a social image point of view.
We conduct a laboratory experiment where we randomly manipulate the public observability of decisions to compete and test our theoretical predictions. The results of the experiment are fairly consistent with our theoretical predictions. We suggest that when designing policies to promote gender equality in competitive environments, using prosocial incentives through company philanthropy or other social responsibility policies, either as substitutes or as complements to traditional affirmative action policies, could be promising.",How Can We Induce More Women to Competitions?,2018-01-27 11:51:44,"Masayuki Yagasaki, Mitsunosuke Morishita","http://arxiv.org/abs/1801.10518v1, http://arxiv.org/pdf/1801.10518v1",econ.EM 28806,em,"The rational choice theory is based on this idea that people rationally pursue goals for increasing their personal interests. In most conditions, the behavior of an actor is not independent of the person and others' behavior. Here, we present a new concept of rational choice as a hyper-rational choice which in this concept, the actor thinks about profit or loss of other actors in addition to his personal profit or loss and then will choose an action which is desirable to him. We implement the hyper-rational choice to generalize and expand the game theory. Results of this study will help to model the behavior of people considering environmental conditions, the kind of behavior interactive, valuation system of itself and others and system of beliefs and internal values of societies. Hyper-rationality helps us understand how human decision makers behave in interactive decisions.",Hyper-rational choice theory,2018-01-12 02:16:09,"Madjid Eshaghi Gordji, Gholamreza Askari","http://arxiv.org/abs/1801.10520v2, http://arxiv.org/pdf/1801.10520v2",econ.EM 28807,em,"We develop a new VAR model for structural analysis with mixed-frequency data. The MIDAS-SVAR model allows to identify structural dynamic links exploiting the information contained in variables sampled at different frequencies. It also provides a general framework to test homogeneous frequency-based representations versus mixed-frequency data models. A set of Monte Carlo experiments suggests that the test performs well both in terms of size and power. The MIDAS-SVAR is then used to study how monetary policy and financial market volatility impact on the dynamics of gross capital inflows to the US. While no relation is found when using standard quarterly data, exploiting the variability present in the series within the quarter shows that the effect of an interest rate shock is greater the longer the time lag between the month of the shock and the end of the quarter",Structural analysis with mixed-frequency data: A MIDAS-SVAR model of US capital flows,2018-02-02 21:12:12,"Emanuele Bacchiocchi, Andrea Bastianin, Alessandro Missale, Eduardo Rossi","http://arxiv.org/abs/1802.00793v1, http://arxiv.org/pdf/1802.00793v1",econ.EM 28808,em,"The development and deployment of matching procedures that incentivize truthful preference reporting is considered one of the major successes of market design research. In this study, we test the degree to which these procedures succeed in eliminating preference misrepresentation. We administered an online experiment to 1,714 medical students immediately after their participation in the medical residency match--a leading field application of strategy-proof market design. When placed in an analogous, incentivized matching task, we find that 23% of participants misrepresent their preferences. 
We explore the factors that predict preference misrepresentation, including cognitive ability, strategic positioning, overconfidence, expectations, advice, and trust. We discuss the implications of this behavior for the design of allocation mechanisms and the social welfare in markets that use them.",An Experimental Investigation of Preference Misrepresentation in the Residency Match,2018-02-05 20:51:55,"Alex Rees-Jones, Samuel Skowronek","http://dx.doi.org/10.1073/pnas.1803212115, http://arxiv.org/abs/1802.01990v2, http://arxiv.org/pdf/1802.01990v2",econ.EM 28809,em,"Consumers are creatures of habit, often periodic, tied to work, shopping and other schedules. We analyzed one month of data from the world's largest bike-sharing company to elicit demand behavioral cycles, initially using models from animal tracking that showed large customers fit an Ornstein-Uhlenbeck model with demand peaks at periodicities of 7, 12, 24 hour and 7-days. Lorenz curves of bicycle demand showed that the majority of customer usage was infrequent, and demand cycles from time-series models would strongly overfit the data yielding unreliable models. Analysis of thresholded wavelets for the space-time tensor of bike-sharing contracts was able to compress the data into a 56-coefficient model with little loss of information, suggesting that bike-sharing demand behavior is exceptionally strong and regular. Improvements to predicted demand could be made by adjusting for 'noise' filtered by our model from air quality and weather information and demand from infrequent riders.",Prediction of Shared Bicycle Demand with Wavelet Thresholding,2018-02-08 04:17:27,"J. Christopher Westland, Jian Mou, Dafei Yin","http://arxiv.org/abs/1802.02683v1, http://arxiv.org/pdf/1802.02683v1",econ.EM 28810,em,"This paper describes a numerical method to solve for mean product qualities which equates the real market share to the market share predicted by a discrete choice model. The method covers a general class of discrete choice model, including the pure characteristics model in Berry and Pakes(2007) and the random coefficient logit model in Berry et al.(1995) (hereafter BLP). The method transforms the original market share inversion problem to an unconstrained convex minimization problem, so that any convex programming algorithm can be used to solve the inversion. Moreover, such results also imply that the computational complexity of inverting a demand model should be no more than that of a convex programming problem. In simulation examples, I show the method outperforms the contraction mapping algorithm in BLP. I also find the method remains robust in pure characteristics models with near-zero market shares.",A General Method for Demand Inversion,2018-02-13 05:50:46,Lixiong Li,"http://arxiv.org/abs/1802.04444v3, http://arxiv.org/pdf/1802.04444v3",econ.EM 29016,em,"This paper develops a set of test statistics based on bilinear forms in the context of the extremum estimation framework with particular interest in nonlinear hypothesis. We show that the proposed statistic converges to a conventional chi-square limit. 
A Monte Carlo experiment suggests that the test statistic works well in finite samples.",Bilinear form test statistics for extremum estimation,2019-12-03 17:32:49,"Federico Crudu, Felipe Osorio","http://dx.doi.org/10.1016/j.econlet.2019.108885, http://arxiv.org/abs/1912.01410v1, http://arxiv.org/pdf/1912.01410v1",econ.EM 28811,em,"We provide an epistemic foundation for cooperative games by proof theory via studying the knowledge for players unanimously accepting only core payoffs. We first transform each cooperative game into a decision problem where a player can accept or reject any payoff vector offered to her based on her knowledge about available cooperation. Then we use a modified KD-system in epistemic logic, which can be regarded as a counterpart of the model for non-cooperative games in Bonanno (2008), (2015), to describe a player's knowledge, decision-making criterion, and reasoning process; especially, a formula called C-acceptability is defined to capture the criterion for accepting a core payoff vector. Within this syntactical framework, we characterize the core of a cooperative game in terms of players' knowledge. Based on that result, we discuss an epistemic inconsistency behind Debreu-Scarf Theorem, that is, the increase of the number of replicas has invariant requirement on each participant's knowledge from the aspect of competitive market, while requires unbounded epistemic ability players from the aspect of cooperative game.",Knowledge and Unanimous Acceptance of Core Payoffs: An Epistemic Foundation for Cooperative Game Theory,2018-02-13 15:49:12,Shuige Liu,"http://arxiv.org/abs/1802.04595v4, http://arxiv.org/pdf/1802.04595v4",econ.EM 28812,em,"In this study interest centers on regional differences in the response of housing prices to monetary policy shocks in the US. We address this issue by analyzing monthly home price data for metropolitan regions using a factor-augmented vector autoregression (FAVAR) model. Bayesian model estimation is based on Gibbs sampling with Normal-Gamma shrinkage priors for the autoregressive coefficients and factor loadings, while monetary policy shocks are identified using high-frequency surprises around policy announcements as external instruments. The empirical results indicate that monetary policy actions typically have sizeable and significant positive effects on regional housing prices, revealing differences in magnitude and duration. The largest effects are observed in regions located in states on both the East and West Coasts, notably California, Arizona and Florida.",The dynamic impact of monetary policy on regional housing prices in the US: Evidence based on factor-augmented vector autoregressions,2018-02-16 12:08:34,"Manfred M. Fischer, Florian Huber, Michael Pfarrhofer, Petra Staufer-Steinnocher","http://arxiv.org/abs/1802.05870v1, http://arxiv.org/pdf/1802.05870v1",econ.EM 28813,em,"We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature such as those in Aguirregabiria and Mira (2002, 2007), Pesendorfer and Schmidt-Dengler (2008), and Pakes et al. (2007). First, we establish that the K-PML estimator is consistent and asymptotically normal for all K. 
This complements findings in Aguirregabiria and Mira (2007), who focus on K=1 and K large enough to induce convergence of the estimator. Furthermore, we show under certain conditions that the asymptotic variance of the K-PML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for all K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-PML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. The invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-PML estimators. Our main result implies two new corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators. In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is more or equally asymptotically efficient than any K-PML estimator for all K. Finally, the appendix provides appropriate conditions under which the optimal 1-MD estimator is asymptotically efficient.",On the iterated estimation of dynamic discrete choice games,2018-02-19 18:19:35,"Federico A. Bugni, Jackson Bunting","http://arxiv.org/abs/1802.06665v4, http://arxiv.org/pdf/1802.06665v4",econ.EM 28814,em,"This paper proposes nonparametric kernel-smoothing estimation for panel data to examine the degree of heterogeneity across cross-sectional units. We first estimate the sample mean, autocovariances, and autocorrelations for each unit and then apply kernel smoothing to compute their density functions. The dependence of the kernel estimator on bandwidth makes asymptotic bias of very high order affect the required condition on the relative magnitudes of the cross-sectional sample size (N) and the time-series length (T). In particular, it makes the condition on N and T stronger and more complicated than those typically observed in the long-panel literature without kernel smoothing. We also consider a split-panel jackknife method to correct bias and construction of confidence intervals. An empirical application and Monte Carlo simulations illustrate our procedure in finite samples.",Kernel Estimation for Panel Data with Heterogeneous Dynamics,2018-02-24 12:45:50,"Ryo Okui, Takahide Yanagi","http://arxiv.org/abs/1802.08825v4, http://arxiv.org/pdf/1802.08825v4",econ.EM 28815,em,"People reason heuristically in situations resembling inferential puzzles such as Bertrand's box paradox and the Monty Hall problem. The practical significance of that fact for economic decision making is uncertain because a departure from sound reasoning may, but does not necessarily, result in a ""cognitively biased"" outcome different from what sound reasoning would have produced. Criteria are derived here, applicable to both experimental and non-experimental situations, for heuristic reasoning in an inferential-puzzle situations to result, or not to result, in cognitively bias. 
In some situations, neither of these criteria is satisfied, and whether or not agents' posterior probability assessments or choices are cognitively biased cannot be determined.",Identifying the occurrence or non occurrence of cognitive bias in situations resembling the Monty Hall problem,2018-02-25 03:28:11,"Fatemeh Borhani, Edward J. Green","http://arxiv.org/abs/1802.08935v1, http://arxiv.org/pdf/1802.08935v1",econ.EM 28816,em,"I analyse the solution method for the variational optimisation problem in the rational inattention framework proposed by Christopher A. Sims. The solution, in general, does not exist, although it may exist in exceptional cases. I show that the solution does not exist for the quadratic and the logarithmic objective functions analysed by Sims (2003, 2006). For a linear-quadratic objective function a solution can be constructed under restrictions on all but one of its parameters. This approach is, therefore, unlikely to be applicable to a wider set of economic models.",On the solution of the variational optimisation in the rational inattention framework,2018-02-27 16:21:46,Nigar Hashimzade,"http://arxiv.org/abs/1802.09869v2, http://arxiv.org/pdf/1802.09869v2",econ.EM 28830,em,"This paper proposes a model-free approach to analyze panel data with heterogeneous dynamic structures across observational units. We first compute the sample mean, autocovariances, and autocorrelations for each unit, and then estimate the parameters of interest based on their empirical distributions. We then investigate the asymptotic properties of our estimators using double asymptotics and propose split-panel jackknife bias correction and inference based on the cross-sectional bootstrap. We illustrate the usefulness of our procedures by studying the deviation dynamics of the law of one price. Monte Carlo simulations confirm that the proposed bias correction is effective and yields valid inference in small samples.",Panel Data Analysis with Heterogeneous Dynamics,2018-03-26 10:53:47,"Ryo Okui, Takahide Yanagi","http://arxiv.org/abs/1803.09452v2, http://arxiv.org/pdf/1803.09452v2",econ.EM 28817,em,"Many macroeconomic policy questions may be assessed in a case study framework, where the time series of a treated unit is compared to a counterfactual constructed from a large pool of control units. I provide a general framework for this setting, tailored to predict the counterfactual by minimizing a tradeoff between underfitting (bias) and overfitting (variance). The framework nests recently proposed structural and reduced form machine learning approaches as special cases. Furthermore, difference-in-differences with matching and the original synthetic control are restrictive cases of the framework, in general not minimizing the bias-variance objective. Using simulation studies I find that machine learning methods outperform traditional methods when the number of potential controls is large or the treated unit is substantially different from the controls. Equipped with a toolbox of approaches, I revisit a study on the effect of economic liberalisation on economic growth. I find effects for several countries where no effect was found in the original study. Furthermore, I inspect how a systematically important bank respond to increasing capital requirements by using a large pool of banks to estimate the counterfactual. 
Finally, I assess the effect of a changing product price on product sales using a novel scanner dataset.",Synthetic Control Methods and Big Data,2018-03-01 00:32:09,Daniel Kinn,"http://arxiv.org/abs/1803.00096v1, http://arxiv.org/pdf/1803.00096v1",econ.EM 28818,em,"It is widely known that geographically weighted regression (GWR) is essentially the same as the varying-coefficient model. In previous research on varying-coefficient models, scholars tend to use multidimensional-kernel-based locally weighted estimation (MLWE) so that information on both distance and direction is considered. However, when we construct the local weight matrix of geographically weighted estimation, distance among the locations in the neighborhood is the only factor controlling the values of the entries of the weight matrix. In other words, estimation of GWR is distance-kernel-based. Thus, in this paper, under stationary and limited dependent data with multidimensional subscripts, we analyze the local mean squared properties of geographically weighted locally linear estimation (GWLE) without any assumption on the form of the coefficient functions and compare it with MLWE. According to the theoretical and simulation results, GWLE is asymptotically more efficient than MLWE. Furthermore, a relationship between optimal bandwidth selection and the design of scale parameters is also obtained.",An Note on Why Geographically Weighted Regression Overcomes Multidimensional-Kernel-Based Varying-Coefficient Model,2018-03-04 21:50:17,Zihao Yuan,"http://arxiv.org/abs/1803.01402v2, http://arxiv.org/pdf/1803.01402v2",econ.EM 28819,em,"We study three pricing mechanisms' performance and their effects on the participants in the data industry from the data supply chain perspective. A win-win pricing strategy for the players in the data supply chain is proposed. We obtain analytical solutions in each pricing mechanism, including the decentralized and centralized pricing, Nash Bargaining pricing, and revenue sharing mechanism.",Pricing Mechanism in Information Goods,2018-03-05 10:37:06,"Xinming Li, Huaqing Wang","http://arxiv.org/abs/1803.01530v1, http://arxiv.org/pdf/1803.01530v1",econ.EM 28820,em,"Spatial association and heterogeneity are two critical areas in research on spatial analysis, geography, statistics and so on. Though a large number of outstanding methods have been proposed and studied, few of them study spatial association under a heterogeneous environment. Additionally, most of the traditional methods are based on a distance statistic and a spatial weight matrix. However, in some abstract spatial situations, a distance statistic cannot be applied since we cannot even observe the geographical locations directly. Meanwhile, under these circumstances, due to the invisibility of spatial positions, the design of the weight matrix cannot entirely avoid subjectivity. In this paper, a new entropy-based method, which is data-driven and distribution-free, is proposed to help us investigate spatial association while fully taking into account the fact that heterogeneity widely exists. Specifically, this method is not tied to a distance statistic or a weight matrix.
Asymmetrical dependence is adopted to reflect the heterogeneity in spatial association for each individual, and the whole discussion in this paper is performed on spatio-temporal data, assuming only stationarity and m-dependence over time.",A Nonparametric Approach to Measure the Heterogeneous Spatial Association: Under Spatial Temporal Data,2018-03-06 21:46:49,Zihao Yuan,"http://arxiv.org/abs/1803.02334v2, http://arxiv.org/pdf/1803.02334v2",econ.EM 28821,em,"The last remaining barriers to trade between countries are Non-Tariff Barriers (NTBs), meaning all possible trade barriers other than tariff barriers. The most typical examples are Technical Barriers to Trade (TBT), which refer to measures such as technical regulations, standards, procedures for conformity assessment, testing and certification, etc. Therefore, in order to eliminate TBT, the WTO has made all member countries automatically enter into an agreement on TBT.",A study of strategy to the remove and ease TBT for increasing export in GCC6 countries,2018-03-09 09:39:31,YongJae Kim,"http://arxiv.org/abs/1803.03394v3, http://arxiv.org/pdf/1803.03394v3",econ.EM 28822,em,"Understanding the effectiveness of alternative approaches to water conservation is crucially important for ensuring the security and reliability of water services for urban residents. We analyze data from one of the longest-running ""cash for grass"" policies - the Southern Nevada Water Authority's Water Smart Landscapes program, where homeowners are paid to replace grass with xeric landscaping. We use a twelve year long panel dataset of monthly water consumption records for 300,000 households in Las Vegas, Nevada. Utilizing a panel difference-in-differences approach, we estimate the average water savings per square meter of turf removed. We find that participation in this program reduced the average treated household's consumption by 18 percent. We find no evidence that water savings degrade as the landscape ages, or that water savings per unit area are influenced by the value of the rebate. Depending on the assumed time horizon of benefits from turf removal, we find that the WSL program cost the water authority about $1.62 per thousand gallons of water saved, which compares favorably to alternative means of water conservation or supply augmentation.",How Smart Are `Water Smart Landscapes'?,2018-03-13 05:00:07,"Christa Brelsford, Joshua K. Abbott","http://arxiv.org/abs/1803.04593v1, http://arxiv.org/pdf/1803.04593v1",econ.EM 28823,em,"The business cycles are generated by the oscillating macro-/micro-/nano- economic output variables in the economy of the scale and the scope in the amplitude/frequency/phase/time domains in the economics. The accurate forward looking assumptions on the business cycles oscillation dynamics can optimize the financial capital investing and/or borrowing by the economic agents in the capital markets. The book's main objective is to study the business cycles in the economy of the scale and the scope, formulating the Ledenyov unified business cycles theory in the Ledenyov classic and quantum econodynamics.",Business Cycles in Economics,2018-03-16 11:24:05,"Viktor O. Ledenyov, Dimitri O. Ledenyov","http://dx.doi.org/10.2139/ssrn.3134655, http://arxiv.org/abs/1803.06108v1, http://arxiv.org/pdf/1803.06108v1",econ.EM 28824,em,"Unobserved heterogeneous treatment effects have been emphasized in the recent policy evaluation literature (see e.g., Heckman and Vytlacil, 2005).
This paper proposes a nonparametric test for unobserved heterogeneous treatment effects in a treatment effect model with a binary treatment assignment, allowing for individuals' self-selection to the treatment. Under the standard local average treatment effects assumptions, i.e., the no defiers condition, we derive testable model restrictions for the hypothesis of unobserved heterogeneous treatment effects. Also, we show that if the treatment outcomes satisfy a monotonicity assumption, these model restrictions are also sufficient. Then, we propose a modified Kolmogorov-Smirnov-type test which is consistent and simple to implement. Monte Carlo simulations show that our test performs well in finite samples. For illustration, we apply our test to study heterogeneous treatment effects of the Job Training Partnership Act on earnings and the impacts of fertility on family income, where the null hypothesis of homogeneous treatment effects gets rejected in the second case but fails to be rejected in the first application.",Testing for Unobserved Heterogeneous Treatment Effects with Observational Data,2018-03-20 19:30:07,"Yu-Chin Hsu, Ta-Cheng Huang, Haiqing Xu","http://arxiv.org/abs/1803.07514v2, http://arxiv.org/pdf/1803.07514v2",econ.EM 28825,em,"In the regression discontinuity design (RDD), it is common practice to assess the credibility of the design by testing the continuity of the density of the running variable at the cut-off, e.g., McCrary (2008). In this paper we propose an approximate sign test for continuity of a density at a point based on the so-called g-order statistics, and study its properties under two complementary asymptotic frameworks. In the first asymptotic framework, the number q of observations local to the cut-off is fixed as the sample size n diverges to infinity, while in the second framework q diverges to infinity slowly as n diverges to infinity. Under both of these frameworks, we show that the test we propose is asymptotically valid in the sense that it has limiting rejection probability under the null hypothesis not exceeding the nominal level. More importantly, the test is easy to implement, asymptotically valid under weaker conditions than those used by competing methods, and exhibits finite sample validity under stronger conditions than those needed for its asymptotic validity. In a simulation study, we find that the approximate sign test provides good control of the rejection probability under the null hypothesis while remaining competitive under the alternative hypothesis. We finally apply our test to the design in Lee (2008), a well-known application of the RDD to study incumbency advantage.",Testing Continuity of a Density via g-order statistics in the Regression Discontinuity Design,2018-03-21 17:52:59,"Federico A. Bugni, Ivan A. Canay","http://arxiv.org/abs/1803.07951v6, http://arxiv.org/pdf/1803.07951v6",econ.EM 28826,em,"In this paper, we propose the use of causal inference techniques for survival function estimation and prediction for subgroups of the data, upto individual units. Tree ensemble methods, specifically random forests were modified for this purpose. A real world healthcare dataset was used with about 1800 patients with breast cancer, which has multiple patient covariates as well as disease free survival days (DFS) and a death event binary indicator (y). We use the type of cancer curative intervention as the treatment variable (T=0 or 1, binary treatment case in our example). The algorithm is a 2 step approach. 
In step 1, we estimate heterogeneous treatment effects using a causalTree with the DFS as the dependent variable. Next, in step 2, for each selected leaf of the causalTree with distinctly different average treatment effect (with respect to survival), we fit a survival forest to all the patients in that leaf, one forest each for treatment T=0 as well as T=1 to get estimated patient level survival curves for each treatment (more generally, any model can be used at this step). Then, we subtract the patient level survival curves to get the differential survival curve for a given patient, to compare the survival function as a result of the 2 treatments. The path to a selected leaf also gives us the combination of patient features and their values which are causally important for the treatment effect difference at the leaf.",Causal Inference for Survival Analysis,2018-03-22 06:22:19,Vikas Ramachandra,"http://arxiv.org/abs/1803.08218v1, http://arxiv.org/pdf/1803.08218v1",econ.EM 28827,em,"Linear regressions with period and group fixed effects are widely used to estimate treatment effects. We show that they estimate weighted sums of the average treatment effects (ATE) in each group and period, with weights that may be negative. Due to the negative weights, the linear regression coefficient may for instance be negative while all the ATEs are positive. We propose another estimator that solves this issue. In the two applications we revisit, it is significantly different from the linear regression estimator.",Two-way fixed effects estimators with heterogeneous treatment effects,2018-03-22 01:56:07,"Clément de Chaisemartin, Xavier D'Haultfœuille","http://dx.doi.org/10.1257/aer.20181169, http://arxiv.org/abs/1803.08807v7, http://arxiv.org/pdf/1803.08807v7",econ.EM 28828,em,"We examine the effects of monetary policy on income inequality in Japan using a novel econometric approach that jointly estimates the Gini coefficient based on micro-level grouped data of households and the dynamics of macroeconomic quantities. Our results indicate different effects on income inequality for different types of households: A monetary tightening increases inequality when income data is based on households whose head is employed (workers' households), while the effect reverses over the medium term when considering a broader definition of households. Differences in the relative strength of the transmission channels can account for this finding. Finally we demonstrate that the proposed joint estimation strategy leads to more informative inference while results based on the frequently used two-step estimation approach yields inconclusive results.",How does monetary policy affect income inequality in Japan? Evidence from grouped data,2018-03-23 19:28:23,"Martin Feldkircher, Kazuhiko Kakamu","http://dx.doi.org/10.1007/s00181-021-02102-7, http://arxiv.org/abs/1803.08868v2, http://arxiv.org/pdf/1803.08868v2",econ.EM 28829,em,"We develop inference for a two-sided matching model where the characteristics of agents on one side of the market are endogenous due to pre-matching investments. The model can be used to measure the impact of frictions in labour markets using a single cross-section of matched employer-employee data. The observed matching of workers to firms is the outcome of a discrete, two-sided matching process where firms with heterogeneous preferences over education sequentially choose workers according to an index correlated with worker preferences over firms. 
The distribution of education arises in equilibrium from a Bayesian game: workers, knowing the distribution of worker and firm types, invest in education prior to the matching process. Although the observed matching exhibits strong cross-sectional dependence due to the matching process, we propose an asymptotically valid inference procedure that combines discrete choice methods with simulation.","Schooling Choice, Labour Market Matching, and Wages",2018-03-24 03:41:09,Jacob Schwartz,"http://arxiv.org/abs/1803.09020v6, http://arxiv.org/pdf/1803.09020v6",econ.EM 28831,em,"In this paper, we assess the impact of climate shocks on futures markets for agricultural commodities and a set of macroeconomic quantities for multiple high-income economies. To capture relations among countries, markets, and climate shocks, this paper proposes parsimonious methods to estimate high-dimensional panel VARs. We assume that coefficients associated with domestic lagged endogenous variables arise from a Gaussian mixture model while further parsimony is achieved using suitable global-local shrinkage priors on several regions of the parameter space. Our results point towards pronounced global reactions of key macroeconomic quantities to climate shocks. Moreover, the empirical findings highlight substantial linkages between regionally located climate shifts and global commodity markets.",A Bayesian panel VAR model to analyze the impact of climate change on high-income economies,2018-04-04 21:23:10,"Florian Huber, Tamás Krisztin, Michael Pfarrhofer","http://arxiv.org/abs/1804.01554v3, http://arxiv.org/pdf/1804.01554v3",econ.EM 28832,em,"This paper provides a new methodology to analyze unobserved heterogeneity when observed characteristics are modeled nonlinearly. The proposed model builds on varying random coefficients (VRC) that are determined by nonlinear functions of observed regressors and additively separable unobservables. This paper proposes a novel estimator of the VRC density based on weighted sieve minimum distance. The main example of sieve bases are Hermite functions which yield a numerically stable estimation procedure. This paper shows inference results that go beyond what has been shown in ordinary RC models. We provide in each case rates of convergence and also establish pointwise limit theory of linear functionals, where a prominent example is the density of potential outcomes. In addition, a multiplier bootstrap procedure is proposed to construct uniform confidence bands. A Monte Carlo study examines finite sample properties of the estimator and shows that it performs well even when the regressors associated to RC are far from being heavy tailed. Finally, the methodology is applied to analyze heterogeneity in income elasticity of demand for housing.",Varying Random Coefficient Models,2018-04-09 20:16:52,Christoph Breunig,"http://arxiv.org/abs/1804.03110v4, http://arxiv.org/pdf/1804.03110v4",econ.EM 28833,em,"We develop point-identification for the local average treatment effect when the binary treatment contains a measurement error. The standard instrumental variable estimator is inconsistent for the parameter since the measurement error is non-classical by construction. We correct the problem by identifying the distribution of the measurement error based on the use of an exogenous variable that can even be a binary covariate. The moment conditions derived from the identification lead to generalized method of moments estimation with asymptotically valid inferences. 
Monte Carlo simulations and an empirical illustration demonstrate the usefulness of the proposed procedure.",Inference on Local Average Treatment Effects for Misclassified Treatment,2018-04-10 08:57:30,Takahide Yanagi,"http://arxiv.org/abs/1804.03349v1, http://arxiv.org/pdf/1804.03349v1",econ.EM 28834,em,"This paper re-examines the Shapley value methods for attribution analysis in the area of online advertising. As a credit allocation solution in cooperative game theory, Shapley value method directly quantifies the contribution of online advertising inputs to the advertising key performance indicator (KPI) across multiple channels. We simplify its calculation by developing an alternative mathematical formulation. The new formula significantly improves the computational efficiency and therefore extends the scope of applicability. Based on the simplified formula, we further develop the ordered Shapley value method. The proposed method is able to take into account the order of channels visited by users. We claim that it provides a more comprehensive insight by evaluating the attribution of channels at different stages of user conversion journeys. The proposed approaches are illustrated using a real-world online advertising campaign dataset.",Shapley Value Methods for Attribution Modeling in Online Advertising,2018-04-15 12:19:25,"Kaifeng Zhao, Seyed Hanif Mahboobi, Saeed R. Bagheri","http://arxiv.org/abs/1804.05327v1, http://arxiv.org/pdf/1804.05327v1",econ.EM 28835,em,"To estimate the dynamic effects of an absorbing treatment, researchers often use two-way fixed effects regressions that include leads and lags of the treatment. We show that in settings with variation in treatment timing across units, the coefficient on a given lead or lag can be contaminated by effects from other periods, and apparent pretrends can arise solely from treatment effects heterogeneity. We propose an alternative estimator that is free of contamination, and illustrate the relative shortcomings of two-way fixed effects regressions with leads and lags through an empirical application.",Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects,2018-04-16 19:54:46,"Liyang Sun, Sarah Abraham","http://arxiv.org/abs/1804.05785v2, http://arxiv.org/pdf/1804.05785v2",econ.EM 28836,em,"This paper offers a two-pronged critique of the empirical investigation of the income distribution performed by physicists over the past decade. Their finding rely on the graphical analysis of the observed distribution of normalized incomes. Two central observations lead to the conclusion that the majority of incomes are exponentially distributed, but neither each individual piece of evidence nor their concurrent observation robustly proves that the thermal and superthermal mixture fits the observed distribution of incomes better than reasonable alternatives. A formal analysis using popular measures of fit shows that while an exponential distribution with a power-law tail provides a better fit of the IRS income data than the log-normal distribution (often assumed by economists), the thermal and superthermal mixture's fit can be improved upon further by adding a log-normal component. The economic implications of the thermal and superthermal distribution of incomes, and the expanded mixture are explored in the paper.",Revisiting the thermal and superthermal two-class distribution of incomes: A critical perspective,2018-04-17 19:09:59,Markus P. A. 
Schneider,"http://dx.doi.org/10.1140/epjb/e2014-50501-x, http://arxiv.org/abs/1804.06341v1, http://arxiv.org/pdf/1804.06341v1",econ.EM 28837,em,"Researchers increasingly leverage movement across multiple treatments to estimate causal effects. While these ""mover regressions"" are often motivated by a linear constant-effects model, it is not clear what they capture under weaker quasi-experimental assumptions. I show that binary treatment mover regressions recover a convex average of four difference-in-difference comparisons and are thus causally interpretable under a standard parallel trends assumption. Estimates from multiple-treatment models, however, need not be causal without stronger restrictions on the heterogeneity of treatment effects and time-varying shocks. I propose a class of two-step estimators to isolate and combine the large set of difference-in-difference quasi-experiments generated by a mover design, identifying mover average treatment effects under conditional-on-covariate parallel trends and effect homogeneity restrictions. I characterize the efficient estimators in this class and derive specification tests based on the model's overidentifying restrictions. Future drafts will apply the theory to the Finkelstein et al. (2016) movers design, analyzing the causal effects of geography on healthcare utilization.",Estimating Treatment Effects in Mover Designs,2018-04-18 16:42:55,Peter Hull,"http://arxiv.org/abs/1804.06721v1, http://arxiv.org/pdf/1804.06721v1",econ.EM 28838,em,"The study aims to identify the institutional flaws of the current EU waste management model by analysing the economic model of extended producer responsibility and collective waste management systems and to create a model for measuring the transaction costs borne by waste recovery organizations. The model was approbated by analysing the Bulgarian collective waste management systems that have been complying with the EU legislation for the last 10 years. The analysis focuses on waste oils because of their economic importance and the limited number of studies and analyses in this field as the predominant body of research to date has mainly addressed packaging waste, mixed household waste or discarded electrical and electronic equipment. The study aims to support the process of establishing a circular economy in the EU, which was initiated in 2015.",Transaction Costs in Collective Waste Recovery Systems in the EU,2018-04-18 18:40:15,Shteryo Nozharov,"http://arxiv.org/abs/1804.06792v1, http://arxiv.org/pdf/1804.06792v1",econ.EM 28839,em,"We study the foundations of empirical equilibrium, a refinement of Nash equilibrium that is based on a non-parametric characterization of empirical distributions of behavior in games (Velez and Brown,2020b arXiv:1907.12408). The refinement can be alternatively defined as those Nash equilibria that do not refute the regular QRE theory of Goeree, Holt, and Palfrey (2005). By contrast, some empirical equilibria may refute monotone additive randomly disturbed payoff models. As a by product, we show that empirical equilibrium does not coincide with refinements based on approximation by monotone additive randomly disturbed payoff models, and further our understanding of the empirical content of these models.",Empirical Equilibrium,2018-04-21 18:38:24,"Rodrigo A. Velez, Alexander L. 
Brown","http://arxiv.org/abs/1804.07986v3, http://arxiv.org/pdf/1804.07986v3",econ.EM 28840,em,"We analyze an operational policy for a multinational manufacturer to hedge against exchange rate uncertainties and competition. We consider a single product and single period. Because of long-lead times, the capacity investment must done before the selling season begins when the exchange rate between the two countries is uncertain. we consider a duopoly competition in the foreign country. We model the exchange rate as a random variable. We investigate the impact of competition and exchange rate on optimal capacities and optimal prices. We show how competition can impact the decision of the home manufacturer to enter the foreign market.",Price Competition with Geometric Brownian motion in Exchange Rate Uncertainty,2018-04-22 21:33:53,"Murat Erkoc, Huaqing Wang, Anas Ahmed","http://arxiv.org/abs/1804.08153v1, http://arxiv.org/pdf/1804.08153v1",econ.EM 28841,em,"Call centers' managers are interested in obtaining accurate point and distributional forecasts of call arrivals in order to achieve an optimal balance between service quality and operating costs. We present a strategy for selecting forecast models of call arrivals which is based on three pillars: (i) flexibility of the loss function; (ii) statistical evaluation of forecast accuracy; (iii) economic evaluation of forecast performance using money metrics. We implement fourteen time series models and seven forecast combination schemes on three series of daily call arrivals. Although we focus mainly on point forecasts, we also analyze density forecast evaluation. We show that second moments modeling is important both for point and density forecasting and that the simple Seasonal Random Walk model is always outperformed by more general specifications. Our results suggest that call center managers should invest in the use of forecast models which describe both first and second moments of call arrivals.",Statistical and Economic Evaluation of Time Series Models for Forecasting Arrivals at Call Centers,2018-04-23 12:57:42,"Andrea Bastianin, Marzio Galeotti, Matteo Manera","http://dx.doi.org/10.1007/s00181-018-1475-y, http://arxiv.org/abs/1804.08315v1, http://arxiv.org/pdf/1804.08315v1",econ.EM 28842,em,"Economic inequality is one of the pivotal issues for most of economic and social policy makers across the world to insure the sustainable economic growth and justice. In the mainstream school of economics, namely neoclassical theories, economic issues are dealt with in a mechanistic manner. Such a mainstream framework is majorly focused on investigating a socio-economic system based on an axiomatic scheme where reductionism approach plays a vital role. The major limitations of such theories include unbounded rationality of economic agents, reducing the economic aggregates to a set of predictable factors and lack of attention to adaptability and the evolutionary nature of economic agents. In tackling deficiencies of conventional economic models, in the past two decades, some new approaches have been recruited. One of those novel approaches is the Complex adaptive systems (CAS) framework which has shown a very promising performance in action. In contrast to mainstream school, under this framework, the economic phenomena are studied in an organic manner where the economic agents are supposed to be both boundedly rational and adaptive. According to it, the economic aggregates emerge out of the ways agents of a system decide and interact. 
As a powerful way of modeling CASs, agent-based models (ABMs) have found growing application among academics and practitioners. ABMs show how simple behavioral rules of agents and local interactions among them at the micro-scale can generate surprisingly complex patterns at the macro-scale. In this paper, ABMs have been used to show (1) how economic inequality emerges in a system and to explain (2) how sadaqah, as an Islamic charity rule, can substantially help alleviate this inequality and how resource allocation strategies taken by charity entities can accelerate this alleviation.",Economic inequality and Islamic Charity: An exploratory agent-based modeling approach,2018-04-25 01:43:11,"Hossein Sabzian, Alireza Aliahmadi, Adel Azar, Madjid Mirzaee","http://arxiv.org/abs/1804.09284v1, http://arxiv.org/pdf/1804.09284v1",econ.EM 28843,em,"This paper is concerned with inference about low-dimensional components of a high-dimensional parameter vector $\beta^0$ which is identified through instrumental variables. We allow for eigenvalues of the expected outer product of included and excluded covariates, denoted by $M$, to shrink to zero as the sample size increases. We propose a novel estimator based on desparsification of an instrumental variable Lasso estimator, which is a regularized version of 2SLS with an additional correction term. This estimator converges to $\beta^0$ at a rate depending on the mapping properties of $M$ captured by a sparse link condition. Linear combinations of our estimator of $\beta^0$ are shown to be asymptotically normally distributed. Based on consistent covariance estimation, our method allows for constructing confidence intervals and statistical tests for single or low-dimensional components of $\beta^0$. In Monte-Carlo simulations we analyze the finite sample behavior of our estimator.",Ill-posed Estimation in High-Dimensional Models with Instrumental Variables,2018-06-02 19:41:24,"Christoph Breunig, Enno Mammen, Anna Simoni","http://arxiv.org/abs/1806.00666v2, http://arxiv.org/pdf/1806.00666v2",econ.EM 28863,em,"By recasting indirect inference estimation as a prediction rather than a minimization and by using regularized regressions, we can bypass the three major problems of estimation: selecting the summary statistics, defining the distance function and minimizing it numerically. By substituting regression with classification we can extend this approach to model selection as well. We present three examples: a statistical fit, the parametrization of a simple real business cycle model and heuristics selection in a fishery agent-based model. The outcome is a method that automatically chooses summary statistics, weighs them and uses them to parametrize models without running any direct minimization.",Indirect inference through prediction,2018-07-04 16:52:24,"Ernesto Carrella, Richard M. Bailey, Jens Koed Madsen","http://dx.doi.org/10.18564/jasss.4150, http://arxiv.org/abs/1807.01579v1, http://arxiv.org/pdf/1807.01579v1",econ.EM 28844,em,"I propose a nonparametric iid bootstrap procedure for the empirical likelihood, the exponential tilting, and the exponentially tilted empirical likelihood estimators that achieves asymptotic refinements for t tests and confidence intervals, and Wald tests and confidence regions based on such estimators. Furthermore, the proposed bootstrap is robust to model misspecification, i.e., it achieves asymptotic refinements regardless of whether the assumed moment condition model is correctly specified or not.
This result is new, because asymptotic refinements of the bootstrap based on these estimators have not been established in the literature even under correct model specification. Monte Carlo experiments are conducted in dynamic panel data setting to support the theoretical finding. As an application, bootstrap confidence intervals for the returns to schooling of Hellerstein and Imbens (1999) are calculated. The result suggests that the returns to schooling may be higher.",Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Empirical Likelihood Estimators,2018-06-04 07:54:48,Seojeong Lee,"http://dx.doi.org/10.1016/j.jeconom.2015.11.003, http://arxiv.org/abs/1806.00953v2, http://arxiv.org/pdf/1806.00953v2",econ.EM 28845,em,"Many studies use shift-share (or ``Bartik'') instruments, which average a set of shocks with exposure share weights. We provide a new econometric framework for shift-share instrumental variable (SSIV) regressions in which identification follows from the quasi-random assignment of shocks, while exposure shares are allowed to be endogenous. The framework is motivated by an equivalence result: the orthogonality between a shift-share instrument and an unobserved residual can be represented as the orthogonality between the underlying shocks and a shock-level unobservable. SSIV regression coefficients can similarly be obtained from an equivalent shock-level regression, motivating shock-level conditions for their consistency. We discuss and illustrate several practical insights of this framework in the setting of Autor et al. (2013), estimating the effect of Chinese import competition on manufacturing employment across U.S. commuting zones.",Quasi-Experimental Shift-Share Research Designs,2018-06-04 20:03:07,"Kirill Borusyak, Peter Hull, Xavier Jaravel","http://arxiv.org/abs/1806.01221v9, http://arxiv.org/pdf/1806.01221v9",econ.EM 28846,em,"The implementation of a supervision and incentive process for identical workers may lead to wage variance that stems from employer and employee optimization. The harder it is to assess the nature of the labor output, the more important such a process becomes, and the influence of such a process on wage development growth. The dynamic model presented in this paper shows that an employer will choose to pay a worker a starting wage that is less than what he deserves, resulting in a wage profile that fits the classic profile in the human-capital literature. The wage profile and wage variance rise at times of technological advancements, which leads to increased turnover as older workers are replaced by younger workers due to a rise in the relative marginal cost of the former.",The Impact of Supervision and Incentive Process in Explaining Wage Profile and Variance,2018-06-04 22:05:37,"Nitsa Kasir, Idit Sohlberg","http://arxiv.org/abs/1806.01332v1, http://arxiv.org/pdf/1806.01332v1",econ.EM 28847,em,"I propose a nonparametric iid bootstrap that achieves asymptotic refinements for t tests and confidence intervals based on GMM estimators even when the model is misspecified. In addition, my bootstrap does not require recentering the moment function, which has been considered as critical for GMM. Regardless of model misspecification, the proposed bootstrap achieves the same sharp magnitude of refinements as the conventional bootstrap methods which establish asymptotic refinements by recentering in the absence of misspecification. 
The key idea is to link the misspecified bootstrap moment condition to the large sample theory of GMM under misspecification of Hall and Inoue (2003). Two examples are provided: Combining data sets and invalid instrumental variables.",Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Method of Moments Estimators,2018-06-05 04:13:06,Seojeong Lee,"http://dx.doi.org/10.1016/j.jeconom.2013.05.008, http://arxiv.org/abs/1806.01450v1, http://arxiv.org/pdf/1806.01450v1",econ.EM 28848,em,"Under treatment effect heterogeneity, an instrument identifies the instrument-specific local average treatment effect (LATE). With multiple instruments, two-stage least squares (2SLS) estimand is a weighted average of different LATEs. What is often overlooked in the literature is that the postulated moment condition evaluated at the 2SLS estimand does not hold unless those LATEs are the same. If so, the conventional heteroskedasticity-robust variance estimator would be inconsistent, and 2SLS standard errors based on such estimators would be incorrect. I derive the correct asymptotic distribution, and propose a consistent asymptotic variance estimator by using the result of Hall and Inoue (2003, Journal of Econometrics) on misspecified moment condition models. This can be used to correctly calculate the standard errors regardless of whether there is more than one LATE or not.",A Consistent Variance Estimator for 2SLS When Instruments Identify Different LATEs,2018-06-05 04:36:49,Seojeong Lee,"http://dx.doi.org/10.1080/07350015.2016.1186555, http://arxiv.org/abs/1806.01457v1, http://arxiv.org/pdf/1806.01457v1",econ.EM 28849,em,"We propose leave-out estimators of quadratic forms designed for the study of linear models with unrestricted heteroscedasticity. Applications include analysis of variance and tests of linear restrictions in models with many regressors. An approximation algorithm is provided that enables accurate computation of the estimator in very large datasets. We study the large sample properties of our estimator allowing the number of regressors to grow in proportion to the number of observations. Consistency is established in a variety of settings where plug-in methods and estimators predicated on homoscedasticity exhibit first-order biases. For quadratic forms of increasing rank, the limiting distribution can be represented by a linear combination of normal and non-central $\chi^2$ random variables, with normality ensuing under strong identification. Standard error estimators are proposed that enable tests of linear restrictions and the construction of uniformly valid confidence intervals for quadratic forms of interest. We find in Italian social security records that leave-out estimates of a variance decomposition in a two-way fixed effects model of wage determination yield substantially different conclusions regarding the relative contribution of workers, firms, and worker-firm sorting to wage inequality than conventional methods. Monte Carlo exercises corroborate the accuracy of our asymptotic approximations, with clear evidence of non-normality emerging when worker mobility between blocks of firms is limited.",Leave-out estimation of variance components,2018-06-05 07:59:27,"Patrick Kline, Raffaele Saggio, Mikkel Sølvsten","http://arxiv.org/abs/1806.01494v2, http://arxiv.org/pdf/1806.01494v2",econ.EM 28850,em,"Autonomous ships (AS) used for cargo transport have gained a considerable amount of attention in recent years. 
They promise benefits such as reduced crew costs, increased safety and increased flexibility. This paper explores the effects of a faster increase in technological performance in maritime shipping achieved by leveraging fast-improving technological domains such as computer processors, and advanced energy storage. Based on historical improvement rates of several modes of transport (Cargo Ships, Air, Rail, Trucking) a simplified Markov-chain Monte-Carlo (MCMC) simulation of an intermodal transport model (IMTM) is used to explore the effects of differing technological improvement rates for AS. The results show that the annual improvement rates of traditional shipping (Ocean Cargo Ships = 2.6%, Air Cargo = 5.5%, Trucking = 0.6%, Rail = 1.9%, Inland Water Transport = 0.4%) improve at lower rates than technologies associated with automation such as Computer Processors (35.6%), Fuel Cells (14.7%) and Automotive Autonomous Hardware (27.9%). The IMTM simulations up to the year 2050 show that the introduction of any mode of autonomous transport will increase competition in lower cost shipping options, but is unlikely to significantly alter the overall distribution of transport mode costs. Secondly, if all forms of transport end up converting to autonomous systems, then the uncertainty surrounding the improvement rates yields a complex intermodal transport solution involving several options, all at a much lower cost over time. Ultimately, the research shows a need for more accurate measurement of current autonomous transport costs and how they are changing over time.",A Quantitative Analysis of Possible Futures of Autonomous Transport,2018-06-05 17:00:58,"Christopher L. Benson, Pranav D Sumanth, Alina P Colling","http://arxiv.org/abs/1806.01696v1, http://arxiv.org/pdf/1806.01696v1",econ.EM 28851,em,"A standard growth model is modified in a straightforward way to incorporate what Keynes (1936) suggests in the ""essence"" of his general theory. The theoretical essence is the idea that exogenous changes in investment cause changes in employment and unemployment. We implement this idea by assuming the path for capital growth rate is exogenous in the growth model. The result is a growth model that can explain both long term trends and fluctuations around the trend. The modified growth model was tested using the U.S. economic data from 1947 to 2014. The hypothesized inverse relationship between the capital growth and changes in unemployment was confirmed, and the structurally estimated model fits fluctuations in unemployment reasonably well.",A Growth Model with Unemployment,2018-06-11 23:29:04,"Mina Mahmoudi, Mark Pingle","http://arxiv.org/abs/1806.04228v1, http://arxiv.org/pdf/1806.04228v1",econ.EM 28852,em,"This study provides the theoretical framework and empirical model for productivity growth evaluations in agricultural sector as one of the most important sectors in Iran's economic development plan. We use the Solow residual model to measure the productivity growth share in the value-added growth of the agricultural sector. Our time series data includes value-added per worker, employment, and capital in this sector. The results show that the average total factor productivity growth rate in the agricultural sector is -0.72% during 1991-2010. Also, during this period, the share of total factor productivity growth in the value-added growth is -19.6%, while it has been forecasted to be 33.8% in the fourth development plan. 
Considering the significant role of capital in agriculture's low productivity, we suggest applying productivity management plans (especially with regard to capital productivity) to achieve future growth goals.",The Role of Agricultural Sector Productivity in Economic Growth: The Case of Iran's Economic Development Plan,2018-06-11 23:43:32,"Morteza Tahamipour, Mina Mahmoudi","http://arxiv.org/abs/1806.04235v1, http://arxiv.org/pdf/1806.04235v1",econ.EM 28853,em,"Tariff liberalization and its impact on tax revenue is an important consideration for developing countries, because they are increasingly facing the difficult task of implementing and harmonizing regional and international trade commitments. The tariff reform and its costs for the Iranian government are among the issues examined in this study. Another goal of this paper is to estimate the cost of trade liberalization. In this regard, the import value of the agricultural sector in Iran in 2010 is analyzed under two scenarios. For reforming nuisance tariffs, a VAT policy is used in both scenarios. In this study, the TRIST method is used. In the first scenario, the import value decreases to a level equal to that of the second scenario and higher tariff revenue is created. The results show that reducing the average tariff rate does not always result in a loss of tariff revenue. This paper shows that different forms of tariffs can generate different amounts of revenue even when they imply the same level of liberalization and have an equal effect on producers. Therefore, using a good tariff regime can help a government generate revenue while increasing social welfare through liberalization.",Estimating Trade-Related Adjustment Costs in the Agricultural Sector in Iran,2018-06-11 23:44:02,"Omid Karami, Mina Mahmoudi","http://arxiv.org/abs/1806.04238v1, http://arxiv.org/pdf/1806.04238v1",econ.EM 28854,em,"We consider the relation between Sion's minimax theorem for a continuous function and a Nash equilibrium in an asymmetric multi-players zero-sum game in which only one player is different from other players, and the game is symmetric for the other players. Then, 1. The existence of a Nash equilibrium, which is symmetric for players other than one player, implies Sion's minimax theorem for pairs of this player and one of other players with symmetry for the other players. 2. Sion's minimax theorem for pairs of one player and one of other players with symmetry for the other players implies the existence of a Nash equilibrium which is symmetric for the other players. Thus, they are equivalent.",On the relation between Sion's minimax theorem and existence of Nash equilibrium in asymmetric multi-players zero-sum game with only one alien,2018-06-17 04:11:55,"Atsuhiro Satoh, Yasuhito Tanaka","http://arxiv.org/abs/1806.07253v1, http://arxiv.org/pdf/1806.07253v1",econ.EM 28855,em,"It is common practice in empirical work to employ cluster-robust standard errors when using the linear regression model to estimate some structural/causal effect of interest. Researchers also often include a large set of regressors in their model specification in order to control for observed and unobserved confounders. In this paper we develop inference methods for linear regression models with many controls and clustering. We show that inference based on the usual cluster-robust standard errors by Liang and Zeger (1986) is invalid in general when the number of controls is a non-vanishing fraction of the sample size.
We then propose a new clustered standard errors formula that is robust to the inclusion of many controls and allows to carry out valid inference in a variety of high-dimensional linear regression models, including fixed effects panel data models and the semiparametric partially linear model. Monte Carlo evidence supports our theoretical results and shows that our proposed variance estimator performs well in finite samples. The proposed method is also illustrated with an empirical application that re-visits Donohue III and Levitt's (2001) study of the impact of abortion on crime.",Cluster-Robust Standard Errors for Linear Regression Models with Many Controls,2018-06-19 18:48:50,Riccardo D'Adamo,"http://arxiv.org/abs/1806.07314v3, http://arxiv.org/pdf/1806.07314v3",econ.EM 28856,em,"We study inference in shift-share regression designs, such as when a regional outcome is regressed on a weighted average of sectoral shocks, using regional sector shares as weights. We conduct a placebo exercise in which we estimate the effect of a shift-share regressor constructed with randomly generated sectoral shocks on actual labor market outcomes across U.S. Commuting Zones. Tests based on commonly used standard errors with 5\% nominal significance level reject the null of no effect in up to 55\% of the placebo samples. We use a stylized economic model to show that this overrejection problem arises because regression residuals are correlated across regions with similar sectoral shares, independently of their geographic location. We derive novel inference methods that are valid under arbitrary cross-regional correlation in the regression residuals. We show using popular applications of shift-share designs that our methods may lead to substantially wider confidence intervals in practice.",Shift-Share Designs: Theory and Inference,2018-06-20 21:57:10,"Rodrigo Adão, Michal Kolesár, Eduardo Morales","http://dx.doi.org/10.1093/qje/qjz025, http://arxiv.org/abs/1806.07928v5, http://arxiv.org/pdf/1806.07928v5",econ.EM 28857,em,"In this paper, we explore the relationship between state-level household income inequality and macroeconomic uncertainty in the United States. Using a novel large-scale macroeconometric model, we shed light on regional disparities of inequality responses to a national uncertainty shock. The results suggest that income inequality decreases in most states, with a pronounced degree of heterogeneity in terms of shapes and magnitudes of the dynamic responses. By contrast, some few states, mostly located in the West and South census region, display increasing levels of income inequality over time. We find that this directional pattern in responses is mainly driven by the income composition and labor market fundamentals. In addition, forecast error variance decompositions allow for a quantitative assessment of the importance of uncertainty shocks in explaining income inequality. The findings highlight that volatility shocks account for a considerable fraction of forecast error variance for most states considered. Finally, a regression-based analysis sheds light on the driving forces behind differences in state-specific inequality responses.",The transmission of uncertainty shocks on income inequality: State-level evidence from the United States,2018-06-21 17:57:45,"Manfred M. 
Fischer, Florian Huber, Michael Pfarrhofer","http://arxiv.org/abs/1806.08278v1, http://arxiv.org/pdf/1806.08278v1",econ.EM 28858,em,"We propose a new class of unit root tests that exploits invariance properties in the Locally Asymptotically Brownian Functional limit experiment associated with the unit root model. The invariance structures naturally suggest tests that are based on the ranks of the increments of the observations, their average, and an assumed reference density for the innovations. The tests are semiparametric in the sense that they are valid, i.e., have the correct (asymptotic) size, irrespective of the true innovation density. For a correctly specified reference density, our test is point-optimal and nearly efficient. For arbitrary reference densities, we establish a Chernoff-Savage type result, i.e., our test performs as well as commonly used tests under Gaussian innovations but has improved power under other, e.g., fat-tailed or skewed, innovation distributions. To avoid nonparametric estimation, we propose a simplified version of our test that exhibits the same asymptotic properties, except for the Chernoff-Savage result that we are only able to demonstrate by means of simulations.",Semiparametrically Point-Optimal Hybrid Rank Tests for Unit Roots,2018-06-25 10:03:48,"Bo Zhou, Ramon van den Akker, Bas J. M. Werker","http://dx.doi.org/10.1214/18-AOS1758, http://arxiv.org/abs/1806.09304v1, http://arxiv.org/pdf/1806.09304v1",econ.EM 28859,em,"In this article we introduce a general nonparametric point-identification result for nonseparable triangular models with a multivariate first- and second stage. Based on this we prove point-identification of Hedonic models with multivariate heterogeneity and endogenous observable characteristics, extending and complementing identification results from the literature which all require exogeneity. As an additional application of our theoretical result, we show that the BLP model (Berry et al. 1995) can also be identified without index restrictions.",Point-identification in multivariate nonseparable triangular models,2018-06-25 22:36:39,Florian Gunsilius,"http://arxiv.org/abs/1806.09680v1, http://arxiv.org/pdf/1806.09680v1",econ.EM 28860,em,"Historical examination of the Bretton Woods system allows comparisons to be made with the current evolution of the EMS.",The Bretton Woods Experience and ERM,2018-07-02 03:00:20,Chris Kirrane,"http://arxiv.org/abs/1807.00418v1, http://arxiv.org/pdf/1807.00418v1",econ.EM 28861,em,"This paper describes the opportunities and also the difficulties of EMU with regard to international monetary cooperation. Even though the institutional and intellectual assistance to the coordination of monetary policy in the EU will probably be strengthened with the EMU, one of the shortcomings of the Maastricht Treaty concerns the relationship between the founder members and those countries that wish to remain outside monetary union.",Maastricht and Monetary Cooperation,2018-07-02 03:01:08,Chris Kirrane,"http://arxiv.org/abs/1807.00419v1, http://arxiv.org/pdf/1807.00419v1",econ.EM 28862,em,"This paper proposes a hierarchical modeling approach to perform stochastic model specification in Markov switching vector error correction models. We assume that a common distribution gives rise to the regime-specific regression coefficients. The mean as well as the variances of this distribution are treated as fully stochastic and suitable shrinkage priors are used.
These shrinkage priors enable to assess which coefficients differ across regimes in a flexible manner. In the case of similar coefficients, our model pushes the respective regions of the parameter space towards the common distribution. This allows for selecting a parsimonious model while still maintaining sufficient flexibility to control for sudden shifts in the parameters, if necessary. We apply our modeling approach to real-time Euro area data and assume transition probabilities between expansionary and recessionary regimes to be driven by the cointegration errors. The results suggest that the regime allocation is governed by a subset of short-run adjustment coefficients and regime-specific variance-covariance matrices. These findings are complemented by an out-of-sample forecast exercise, illustrating the advantages of the model for predicting Euro area inflation in real time.",Stochastic model specification in Markov switching vector error correction models,2018-07-02 11:36:11,"Niko Hauzenberger, Florian Huber, Michael Pfarrhofer, Thomas O. Zörner","http://arxiv.org/abs/1807.00529v2, http://arxiv.org/pdf/1807.00529v2",econ.EM 28864,em,"This paper studies the identifying content of the instrument monotonicity assumption of Imbens and Angrist (1994) on the distribution of potential outcomes in a model with a binary outcome, a binary treatment and an exogenous binary instrument. Specifically, I derive necessary and sufficient conditions on the distribution of the data under which the identified set for the distribution of potential outcomes when the instrument monotonicity assumption is imposed can be a strict subset of that when it is not imposed.",On the Identifying Content of Instrument Monotonicity,2018-07-04 19:25:35,Vishal Kamat,"http://arxiv.org/abs/1807.01661v2, http://arxiv.org/pdf/1807.01661v2",econ.EM 28865,em,"This paper develops an inferential theory for state-varying factor models of large dimensions. Unlike constant factor models, loadings are general functions of some recurrent state process. We develop an estimator for the latent factors and state-varying loadings under a large cross-section and time dimension. Our estimator combines nonparametric methods with principal component analysis. We derive the rate of convergence and limiting normal distribution for the factors, loadings and common components. In addition, we develop a statistical test for a change in the factor structure in different states. We apply the estimator to U.S. Treasury yields and S&P500 stock returns. The systematic factor structure in treasury yields differs in times of booms and recessions as well as in periods of high market volatility. State-varying factors based on the VIX capture significantly more variation and pricing information in individual stocks than constant factor models.",State-Varying Factor Models of Large Dimensions,2018-07-06 07:05:40,"Markus Pelger, Ruoxuan Xiong","http://arxiv.org/abs/1807.02248v4, http://arxiv.org/pdf/1807.02248v4",econ.EM 28866,em,"The methods of new institutional economics for identifying the transaction costs of trade litigations in Bulgaria are used in the current paper. For the needs of the research, an indicative model, measuring this type of costs on microeconomic level, is applied in the study. The main purpose of the model is to forecast the rational behavior of trade litigation parties in accordance with the transaction costs in the process of enforcing the execution of the signed commercial contract. 
The application of the model is related to the more accurate measurement of transaction costs at the microeconomic level, which could lead to better prediction and management of these costs so that market efficiency and economic growth can be achieved. In addition, an attempt is made to analyse the efficiency of the institutional change of the commercial justice system and the impact of the reform of the judicial system on economic turnover. An increase, or the absence of a reduction, of the transaction costs in trade litigations would indicate inefficiency of the reform of the judicial system. JEL Codes: O43, P48, D23, K12",Transaction costs and institutional change of trade litigations in Bulgaria,2018-07-09 13:34:56,"Shteryo Nozharov, Petya Koralova-Nozharova","http://arxiv.org/abs/1807.03034v1, http://arxiv.org/pdf/1807.03034v1",econ.EM 28867,em,"The meaning of public messages such as ""One in x people gets cancer"" or ""One in y people gets cancer by age z"" can be improved. One assumption commonly invoked is that there is no other cause of death, a confusing assumption. We develop a light bulb model to clarify cumulative risk and we use Markov chain modeling, incorporating the assumption widely in place, to evaluate transition probabilities. Age-progression in the cancer risk is then reported on Australian data. Future modelling can elicit realistic assumptions.",Cancer Risk Messages: A Light Bulb Model,2018-07-09 13:58:20,"Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M. Mills","http://arxiv.org/abs/1807.03040v2, http://arxiv.org/pdf/1807.03040v2",econ.EM 28868,em,"Statements for public health purposes such as ""1 in 2 will get cancer by age 85"" have appeared in public spaces. The meaning drawn from such statements affects economic welfare, not just public health. Both markets and government use risk information on all kinds of risks; useful information can, in turn, improve economic welfare, whereas inaccuracy can lower it. We adapt the contingency table approach so that a quoted risk is cross-classified with the states of nature. We show that bureaucratic objective functions regarding the accuracy of a reported cancer risk can then be stated.",Cancer Risk Messages: Public Health and Economic Welfare,2018-07-09 14:18:01,"Ruth F. G. Williams, Ka C. Chan, Christopher T. Lenard, Terence M. Mills","http://arxiv.org/abs/1807.03045v2, http://arxiv.org/pdf/1807.03045v2",econ.EM 28869,em,"This paper applies economic concepts from measuring income inequality to an exercise in assessing spatial inequality in cancer service access in regional areas. We propose a mathematical model for accessing chemotherapy among local government areas (LGAs). Our model incorporates a distance factor. With a simulation we report results for a single inequality measure: the Lorenz curve is depicted for our illustrative data. We develop this approach in order to move incrementally towards its application to actual data and real-world health service regions. We seek to develop the exercises that can lead policy makers to relevant policy information on the most useful data collections to be collected and modeling for cancer service access in regional areas.",Simulation Modelling of Inequality in Cancer Service Access,2018-07-09 14:25:38,"Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M.
Mills","http://dx.doi.org/10.1080/27707571.2022.2127188, http://arxiv.org/abs/1807.03048v1, http://arxiv.org/pdf/1807.03048v1",econ.EM 28870,em,"The data mining technique of time series clustering is well established in many fields. However, as an unsupervised learning method, it requires making choices that are nontrivially influenced by the nature of the data involved. The aim of this paper is to verify usefulness of the time series clustering method for macroeconomics research, and to develop the most suitable methodology. By extensively testing various possibilities, we arrive at a choice of a dissimilarity measure (compression-based dissimilarity measure, or CDM) which is particularly suitable for clustering macroeconomic variables. We check that the results are stable in time and reflect large-scale phenomena such as crises. We also successfully apply our findings to analysis of national economies, specifically to identifying their structural relations.",Clustering Macroeconomic Time Series,2018-07-11 11:51:41,"Iwo Augustyński, Paweł Laskoś-Grabowski","http://dx.doi.org/10.15611/eada.2018.2.06, http://arxiv.org/abs/1807.04004v2, http://arxiv.org/pdf/1807.04004v2",econ.EM 28871,em,"This paper re-examines the problem of estimating risk premia in linear factor pricing models. Typically, the data used in the empirical literature are characterized by weakness of some pricing factors, strong cross-sectional dependence in the errors, and (moderately) high cross-sectional dimensionality. Using an asymptotic framework where the number of assets/portfolios grows with the time span of the data while the risk exposures of weak factors are local-to-zero, we show that the conventional two-pass estimation procedure delivers inconsistent estimates of the risk premia. We propose a new estimation procedure based on sample-splitting instrumental variables regression. The proposed estimator of risk premia is robust to weak included factors and to the presence of strong unaccounted cross-sectional error dependence. We derive the many-asset weak factor asymptotic distribution of the proposed estimator, show how to construct its standard errors, verify its performance in simulations, and revisit some empirical studies.","Factor models with many assets: strong factors, weak factors, and the two-pass procedure",2018-07-11 14:53:19,"Stanislav Anatolyev, Anna Mikusheva","http://arxiv.org/abs/1807.04094v2, http://arxiv.org/pdf/1807.04094v2",econ.EM 28872,em,"This paper analyzes the bank lending channel and the heterogeneous effects on the euro area, providing evidence that the channel is indeed working. The analysis of the transmission mechanism is based on structural impulse responses to an unconventional monetary policy shock on bank loans. The Bank Lending Survey (BLS) is exploited in order to get insights on developments of loan demand and supply. The contribution of this paper is to use country-specific data to analyze the consequences of unconventional monetary policy, instead of taking an aggregate stance by using euro area data. This approach provides a deeper understanding of the bank lending channel and its effects. That is, an expansionary monetary policy shock leads to an increase in loan demand, supply and output growth. A small north-south disparity between the countries can be observed.",Heterogeneous Effects of Unconventional Monetary Policy on Loan Demand and Supply. 
Insights from the Bank Lending Survey,2018-07-11 17:36:21,Martin Guth,"http://arxiv.org/abs/1807.04161v1, http://arxiv.org/pdf/1807.04161v1",econ.EM 28873,em,"I present a dynamic, voluntary contribution mechanism public good game and derive its potential outcomes. In each period, players endogenously determine contribution productivity by engaging in costly investment. The level of contribution productivity carries from period to period, creating a dynamic link between periods. The investment mimics investing in the stock of technology for producing public goods such as national defense or a clean environment. After investing, players decide how much of their remaining money to contribute to provision of the public good, as in traditional public good games. I analyze three kinds of outcomes of the game: the lowest payoff outcome, the Nash Equilibria, and socially optimal behavior. In the lowest payoff outcome, all players receive payoffs of zero. Nash Equilibrium occurs when players invest any amount and contribute all or nothing depending on the contribution productivity. Therefore, there are infinitely many Nash Equilibrium strategies. Finally, the socially optimal result occurs when players invest everything in early periods, then at some point switch to contributing everything. My goal is to discover and explain this switching point. I use mathematical analysis and computer simulation to derive the results.",Analysis of a Dynamic Voluntary Contribution Mechanism Public Good Game,2018-07-12 17:13:41,Dmytro Bogatov,"http://arxiv.org/abs/1807.04621v2, http://arxiv.org/pdf/1807.04621v2",econ.EM 28874,em,"Wang and Tchetgen Tchetgen (2017) studied identification and estimation of the average treatment effect when some confounders are unmeasured. Under their identification condition, they showed that the semiparametric efficient influence function depends on five unknown functionals. They proposed to parameterize all functionals and estimate the average treatment effect from the efficient influence function by replacing the unknown functionals with estimated functionals. They established that their estimator is consistent when certain functionals are correctly specified and attains the semiparametric efficiency bound when all functionals are correctly specified. In applications, it is likely that those functionals could all be misspecified. Consequently, their estimator could be inconsistent or consistent but not efficient. This paper presents an alternative estimator that does not require parameterization of any of the functionals. We establish that the proposed estimator is always consistent and always attains the semiparametric efficiency bound. A simple and intuitive estimator of the asymptotic variance is presented, and a small-scale simulation study reveals that the proposed estimator outperforms the existing alternatives in finite samples.",A Simple and Efficient Estimation of the Average Treatment Effect in the Presence of Unmeasured Confounders,2018-07-16 07:42:01,"Chunrong Ai, Lukang Huang, Zheng Zhang","http://arxiv.org/abs/1807.05678v1, http://arxiv.org/pdf/1807.05678v1",econ.EM 28875,em,"This paper analyzes how the legalization of same-sex marriage in the U.S. affected gay and lesbian couples in the labor market. Results from a difference-in-differences model show that both partners in same-sex couples were more likely to be employed, to have a full-time contract, and to work longer hours in states that legalized same-sex marriage.
In line with a theoretical search model of discrimination, suggestive empirical evidence supports the hypothesis that marriage equality led to an improvement in employment outcomes among gays and lesbians and lower occupational segregation thanks to a decrease in discrimination towards sexual minorities.","Pink Work: Same-Sex Marriage, Employment and Discrimination",2018-07-18 01:57:39,Dario Sansone,"http://arxiv.org/abs/1807.06698v1, http://arxiv.org/pdf/1807.06698v1",econ.EM 28876,em,"Regression quantiles have asymptotic variances that depend on the conditional densities of the response variable given regressors. This paper develops a new estimate of the asymptotic variance of regression quantiles that leads any resulting Wald-type test or confidence region to behave as well in large samples as its infeasible counterpart in which the true conditional response densities are embedded. We give explicit guidance on implementing the new variance estimator to control adaptively the size of any resulting Wald-type test. Monte Carlo evidence indicates the potential of our approach to deliver powerful tests of heterogeneity of quantile treatment effects in covariates with good size performance over different quantile levels, data-generating processes and sample sizes. We also include an empirical example. Supplementary material is available online.",Quantile-Regression Inference With Adaptive Control of Size,2018-07-18 17:40:36,"Juan Carlos Escanciano, Chuan Goh","http://dx.doi.org/10.1080/01621459.2018.1505624, http://arxiv.org/abs/1807.06977v2, http://arxiv.org/pdf/1807.06977v2",econ.EM 28877,em,"The accumulation of knowledge required to produce economic value is a process that often relates to nations' economic growth. Such a relationship, however, is misleading when the proxy of such accumulation is the average years of education. In this paper, we show that the predictive power of this proxy started to dwindle in 1990 when nations' schooling began to homogenize. We propose a metric of human capital that is less sensitive than average years of education and remains a significant predictor of economic growth when tested with both cross-section data and panel data. We argue that future research on economic growth will discard educational variables based on quantity as predictors, given the thresholds that these variables are reaching.",A New Index of Human Capital to Predict Economic Growth,2018-07-18 20:34:27,"Henry Laverde, Juan C. Correa, Klaus Jaffe","http://arxiv.org/abs/1807.07051v1, http://arxiv.org/pdf/1807.07051v1",econ.EM 28878,em,"The public debt and deficit ceilings of the Maastricht Treaty are the subject of recurring controversy. First, there is debate about the role and impact of these criteria in the initial phase of the introduction of the single currency. Secondly, it must be specified how these will then be applied, in a permanent regime, when the single currency is well established.",Stability in EMU,2018-07-20 10:53:14,Theo Peeters,"http://arxiv.org/abs/1807.07730v1, http://arxiv.org/pdf/1807.07730v1",econ.EM 28879,em,"While multiway cluster-robust standard errors are used routinely in applied economics, surprisingly few theoretical results justify this practice. This paper aims to fill this gap. We first prove, under nearly the same conditions as with i.i.d. data, the weak convergence of empirical processes under multiway clustering.
This result implies central limit theorems for sample averages but is also key for showing the asymptotic normality of nonlinear estimators such as GMM estimators. We then establish consistency of various asymptotic variance estimators, including that of Cameron et al. (2011) but also a new estimator that is positive by construction. Next, we show the general consistency, for linear and nonlinear estimators, of the pigeonhole bootstrap, a resampling scheme adapted to multiway clustering. Monte Carlo simulations suggest that inference based on our two preferred methods may be accurate even with very few clusters, and significantly improve upon inference based on Cameron et al. (2011).",Asymptotic results under multiway clustering,2018-07-20 19:33:13,"Laurent Davezies, Xavier D'Haultfoeuille, Yannick Guyonvarch","http://arxiv.org/abs/1807.07925v2, http://arxiv.org/pdf/1807.07925v2",econ.EM 28880,em,"In dynamical framework the conflict between government and the central bank according to the exchange Rate of payment of fixed rates and fixed rates of fixed income (EMU) convergence criteria such that the public debt / GDP ratio The method consists of calculating private public debt management in a public debt management system purpose there is no mechanism to allow naturally for this adjustment.",EMU and ECB Conflicts,2018-07-21 09:57:15,William Mackenzie,"http://arxiv.org/abs/1807.08097v1, http://arxiv.org/pdf/1807.08097v1",econ.EM 28881,em,"Dynamic discrete choice models often discretize the state vector and restrict its dimension in order to achieve valid inference. I propose a novel two-stage estimator for the set-identified structural parameter that incorporates a high-dimensional state space into the dynamic model of imperfect competition. In the first stage, I estimate the state variable's law of motion and the equilibrium policy function using machine learning tools. In the second stage, I plug the first-stage estimates into a moment inequality and solve for the structural parameter. The moment function is presented as the sum of two components, where the first one expresses the equilibrium assumption and the second one is a bias correction term that makes the sum insensitive (i.e., orthogonal) to first-stage bias. The proposed estimator uniformly converges at the root-N rate and I use it to construct confidence regions. The results developed here can be used to incorporate high-dimensional state space into classic dynamic discrete choice models, for example, those considered in Rust (1987), Bajari et al. (2007), and Scott (2013).",Machine Learning for Dynamic Discrete Choice,2018-08-08 01:23:50,Vira Semenova,"http://arxiv.org/abs/1808.02569v2, http://arxiv.org/pdf/1808.02569v2",econ.EM 28882,em,"This paper presents a weighted optimization framework that unifies binary, multi-valued, and continuous treatments, as well as mixtures of discrete and continuous treatments, under the unconfounded treatment assignment. With a general loss function, the framework includes the average, quantile and asymmetric least squares causal effect of treatment as special cases. For this general framework, we first derive the semiparametric efficiency bound for the causal effect of treatment, extending the existing bound results to a wider class of models. We then propose a generalized optimization estimation for the causal effect with weights estimated by solving an expanding set of equations.
Under some sufficient conditions, we establish consistency and asymptotic normality of the proposed estimator of the causal effect and show that the estimator attains our semiparametric efficiency bound, thereby extending the existing literature on efficient estimation of causal effects to a wider class of applications. Finally, we discuss estimation of some causal effect functionals such as the treatment effect curve and the average outcome. To evaluate the finite sample performance of the proposed procedure, we conduct a small-scale simulation study and find that the proposed estimator has practical value. To illustrate the applicability of the procedure, we revisit the literature on campaign advertising and campaign contributions. Unlike the existing procedures, which produce mixed results, we find no evidence of an effect of campaign advertising on campaign contributions.",A Unified Framework for Efficient Estimation of General Treatment Models,2018-08-15 04:32:29,"Chunrong Ai, Oliver Linton, Kaiji Motegi, Zheng Zhang","http://arxiv.org/abs/1808.04936v2, http://arxiv.org/pdf/1808.04936v2",econ.EM 28883,em,"Recent years have seen many attempts to combine expenditure-side estimates of U.S. real output (GDE) growth with income-side estimates (GDI) to improve estimates of real GDP growth. We show how to incorporate information from multiple releases of noisy data to provide more precise estimates while avoiding some of the identifying assumptions required in earlier work. This relies on a new insight: using multiple data releases allows us to distinguish news and noise measurement errors in situations where a single vintage does not. Our new measure, GDP++, fits the data better than GDP+, the GDP growth measure of Aruoba et al. (2016) published by the Federal Reserve Bank of Philadelphia. Historical decompositions show that GDE releases are more informative than GDI, while the use of multiple data releases is particularly important in the quarters leading up to the Great Recession.",Can GDP measurement be further improved? Data revision and reconciliation,2018-08-15 07:48:26,"Jan P. A. M. Jacobs, Samad Sarferaz, Jan-Egbert Sturm, Simon van Norden","http://arxiv.org/abs/1808.04970v1, http://arxiv.org/pdf/1808.04970v1",econ.EM 28884,em,"While investments in renewable energy sources (RES) are incentivized around the world, the policy tools that do so are still poorly understood, leading to costly misadjustments in many cases. As a case study, the deployment dynamics of residential solar photovoltaics (PV) invoked by the German feed-in tariff legislation are investigated. Here we report a model showing that the timing of investment in residential PV systems is determined not only by profitability, but also by the change in profitability compared to the status quo. This finding is interpreted in the light of loss aversion, a concept developed in Kahneman and Tversky's Prospect Theory. The model is able to reproduce most of the dynamics of the uptake with only a few financial and behavioral assumptions.",When Do Households Invest in Solar Photovoltaics? An Application of Prospect Theory,2018-08-16 19:29:55,"Martin Klein, Marc Deissenroth","http://dx.doi.org/10.1016/j.enpol.2017.06.067, http://arxiv.org/abs/1808.05572v1, http://arxiv.org/pdf/1808.05572v1",econ.EM 28910,em,"Kitamura and Stoye (2014) develop a nonparametric test for linear inequality constraints, when these are represented as vertices of a polyhedron instead of its faces.
They implement this test for an application to nonparametric tests of Random Utility Models. As they note in their paper, testing such models is computationally challenging. In this paper, we develop and implement more efficient algorithms, based on column generation, to carry out the test. These improved algorithms allow us to tackle larger datasets.",Column Generation Algorithms for Nonparametric Analysis of Random Utility Models,2018-12-04 16:28:33,Bart Smeulders,"http://arxiv.org/abs/1812.01400v1, http://arxiv.org/pdf/1812.01400v1",econ.EM 28885,em,"The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of the unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy-to-implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root-n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that the estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.",Estimation in a Generalization of Bivariate Probit Models with Dummy Endogenous Regressors,2018-08-17 11:34:04,"Sukjin Han, Sungwon Lee","http://arxiv.org/abs/1808.05792v2, http://arxiv.org/pdf/1808.05792v2",econ.EM 28886,em,"Under suitable conditions, one-step generalized method of moments (GMM) based on the first-difference (FD) transformation is numerically equal to one-step GMM based on the forward orthogonal deviations (FOD) transformation. However, when the number of time periods ($T$) is not small, the FOD transformation requires less computational work. This paper shows that the computational complexity of the FD and FOD transformations increases linearly with the number of individuals ($N$), but the computational complexity of the FOD transformation increases with $T$ at the rate $T^{4}$, while the computational complexity of the FD transformation increases at the rate $T^{6}$. Simulations illustrate that calculations exploiting the FOD transformation are performed orders of magnitude faster than those using the FD transformation. The results in the paper indicate that, when one-step GMM based on the FD and FOD transformations are the same, Monte Carlo experiments can be conducted much faster if the FOD version of the estimator is used.",Quantifying the Computational Advantage of Forward Orthogonal Deviations,2018-08-17 23:57:31,Robert F.
Phillips,"http://arxiv.org/abs/1808.05995v1, http://arxiv.org/pdf/1808.05995v1",econ.EM 28887,em,"There is generally a need to deal with quality change and new goods in the consumer price index due to the underlying dynamic item universe. Traditionally axiomatic tests are defined for a fixed universe. We propose five tests explicitly formulated for a dynamic item universe, and motivate them both from the perspectives of a cost-of-goods index and a cost-of-living index. None of the indices satisfies all the tests at the same time, which are currently available for making use of scanner data that comprises the whole item universe. The set of tests provides a rigorous diagnostic for whether an index is completely appropriate in a dynamic item universe, as well as pointing towards the directions of possible remedies. We thus outline a large index family that potentially can satisfy all the tests.",Tests for price indices in a dynamic item universe,2018-08-27 22:01:08,"Li-Chun Zhang, Ingvild Johansen, Ragnhild Nygaard","http://arxiv.org/abs/1808.08995v2, http://arxiv.org/pdf/1808.08995v2",econ.EM 28888,em,"A fixed-design residual bootstrap method is proposed for the two-step estimator of Francq and Zako\""ian (2015) associated with the conditional Value-at-Risk. The bootstrap's consistency is proven for a general class of volatility models and intervals are constructed for the conditional Value-at-Risk. A simulation study reveals that the equal-tailed percentile bootstrap interval tends to fall short of its nominal value. In contrast, the reversed-tails bootstrap interval yields accurate coverage. We also compare the theoretically analyzed fixed-design bootstrap with the recursive-design bootstrap. It turns out that the fixed-design bootstrap performs equally well in terms of average coverage, yet leads on average to shorter intervals in smaller samples. An empirical application illustrates the interval estimation.",A Residual Bootstrap for Conditional Value-at-Risk,2018-08-28 08:34:36,"Eric Beutner, Alexander Heinemann, Stephan Smeekes","http://arxiv.org/abs/1808.09125v4, http://arxiv.org/pdf/1808.09125v4",econ.EM 28889,em,"Kotlarski's identity has been widely used in applied economic research. However, how to conduct inference based on this popular identification approach has been an open question for two decades. This paper addresses this open problem by constructing a novel confidence band for the density function of a latent variable in repeated measurement error model. The confidence band builds on our finding that we can rewrite Kotlarski's identity as a system of linear moment restrictions. The confidence band controls the asymptotic size uniformly over a class of data generating processes, and it is consistent against all fixed alternatives. Simulation studies support our theoretical results.",Inference based on Kotlarski's Identity,2018-08-28 18:54:59,"Kengo Kato, Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/1808.09375v3, http://arxiv.org/pdf/1808.09375v3",econ.EM 28890,em,"This study considers various semiparametric difference-in-differences models under different assumptions on the relation between the treatment group identifier, time and covariates for cross-sectional and panel data. The variance lower bound is shown to be sensitive to the model assumptions imposed implying a robustness-efficiency trade-off. 
The obtained efficient influence functions lead to estimators that are rate double robust and have desirable asymptotic properties under weak first stage convergence conditions. This enables the use of sophisticated machine-learning algorithms that can cope with settings where common trend confounding is high-dimensional. The usefulness of the proposed estimators is assessed in an empirical example. It is shown that the efficiency-robustness trade-offs and the choice of first stage predictors can lead to divergent empirical results in practice.",Efficient Difference-in-Differences Estimation with High-Dimensional Common Trend Confounding,2018-09-05 20:41:34,Michael Zimmert,"http://arxiv.org/abs/1809.01643v5, http://arxiv.org/pdf/1809.01643v5",econ.EM 28918,em,"We study estimation, pointwise and simultaneous inference, and confidence intervals for many average partial effects of lasso Logit. Focusing on high-dimensional, cluster-sampling environments, we propose a new average partial effect estimator and explore its asymptotic properties. Practical penalty choices compatible with our asymptotic theory are also provided. The proposed estimator allows for valid inference without requiring the oracle property. We provide easy-to-implement algorithms for cluster-robust high-dimensional hypothesis testing and construction of simultaneously valid confidence intervals using a multiplier cluster bootstrap. We apply the proposed algorithms to the text regression model of Wu (2018) to examine the presence of gendered language on the internet.",Many Average Partial Effects: with An Application to Text Regression,2018-12-22 01:35:51,Harold D. Chiang,"http://arxiv.org/abs/1812.09397v5, http://arxiv.org/pdf/1812.09397v5",econ.EM 28891,em,"The bootstrap is a method for estimating the distribution of an estimator or test statistic by re-sampling the data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. In addition, the bootstrap provides a way to carry out inference in certain settings where obtaining analytic distributional approximations is difficult or impossible. This article explains the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The presentation is informal and expository. It provides an intuitive understanding of how the bootstrap works. Mathematical details are available in references that are cited.",Bootstrap Methods in Econometrics,2018-09-11 19:39:03,Joel L. Horowitz,"http://arxiv.org/abs/1809.04016v1, http://arxiv.org/pdf/1809.04016v1",econ.EM 28892,em,"A method for implicit variable selection in mixture of experts frameworks is proposed. We introduce a prior structure where information is taken from a set of independent covariates. Robust class membership predictors are identified using a normal gamma prior. The resulting model setup is used in a finite mixture of Bernoulli distributions to find homogeneous clusters of women in Mozambique based on their information sources on HIV.
Fully Bayesian inference is carried out via the implementation of a Gibbs sampler.",Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership,2018-09-13 12:30:21,Gregor Zens,"http://arxiv.org/abs/1809.04853v2, http://arxiv.org/pdf/1809.04853v2",econ.EM 28893,em,"Time averaging has been the traditional approach to handle mixed sampling frequencies. However, it ignores information possibly embedded in high-frequency data. Mixed data sampling (MIDAS) regression models provide a concise way to utilize the additional information in high-frequency variables. In this paper, we propose a specification test to choose between time averaging and MIDAS models, based on a Durbin-Wu-Hausman test. In particular, a set of instrumental variables is proposed and theoretically validated when the frequency ratio is large. As a result, our method tends to be more powerful than existing methods, as reconfirmed through the simulations.",On the Choice of Instruments in Mixed Frequency Specification Tests,2018-09-14 19:59:44,"Yun Liu, Yeonwoo Rho","http://arxiv.org/abs/1809.05503v1, http://arxiv.org/pdf/1809.05503v1",econ.EM 28894,em,"This article deals with a simple issue: if we have grouped data with a binary dependent variable and want to include fixed effects (group specific intercepts) in the specification, is Ordinary Least Squares (OLS) in any way superior to a (conditional) logit form? In particular, what are the consequences of using OLS instead of a fixed effects logit model, given that the latter drops all units which show no variability in the dependent variable while the former allows for estimation using all units? First, we show that the discussion of the incidental parameters problem is based on an assumption about the kinds of data being studied; for what appears to be the common use of fixed effect models in political science, the incidental parameters issue is illusory. Turning to linear models, we see that OLS yields a linear combination of the estimates for the units with and without variation in the dependent variable, and so the coefficient estimates must be carefully interpreted. The article then compares two methods of estimating logit models with fixed effects, and shows that the Chamberlain conditional logit is as good as or better than a logit analysis which simply includes group specific intercepts (even though the conditional logit technique was designed to deal with the incidental parameters problem!). Related to this, the article discusses the estimation of marginal effects using both OLS and logit. While it appears that a form of logit with fixed effects can be used to estimate marginal effects, this method can be improved by starting with conditional logit and then using those parameter estimates to constrain the logit with fixed effects model. This method produces estimates of sample average marginal effects that are at least as good as OLS, and much better when group size is small or the number of groups is large.",Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues,2018-09-18 05:25:25,Nathaniel Beck,"http://arxiv.org/abs/1809.06505v1, http://arxiv.org/pdf/1809.06505v1",econ.EM 28895,em,"We provide new results for nonparametric identification, estimation, and inference of causal effects using `proxy controls': observables that are noisy but informative proxies for unobserved confounding factors.
Our analysis applies to cross-sectional settings but is particularly well-suited to panel models. Our identification results motivate a simple and `well-posed' nonparametric estimator. We derive convergence rates for the estimator and construct uniform confidence bands with asymptotically correct size. In panel settings, our methods provide a novel approach to the difficult problem of identification with non-separable, general heterogeneity and fixed $T$. In panels, observations from different periods serve as proxies for unobserved heterogeneity and our key identifying assumptions follow from restrictions on the serial dependence structure. We apply our methods to two empirical settings. We estimate consumer demand counterfactuals using panel data and we estimate causal effects of grade retention on cognitive performance.",Proxy Controls and Panel Data,2018-09-30 03:38:11,Ben Deaner,"http://arxiv.org/abs/1810.00283v8, http://arxiv.org/pdf/1810.00283v8",econ.EM 28896,em,"We consider the problem of regression with selectively observed covariates in a nonparametric framework. Our approach relies on instrumental variables that explain variation in the latent covariates but have no direct effect on selection. The regression function of interest is shown to be a weighted version of observed conditional expectation where the weighting function is a fraction of selection probabilities. Nonparametric identification of the fractional probability weight (FPW) function is achieved via a partial completeness assumption. We provide primitive functional form assumptions for partial completeness to hold. The identification result is constructive for the FPW series estimator. We derive the rate of convergence and also the pointwise asymptotic distribution. In both cases, the asymptotic performance of the FPW series estimator does not suffer from the inverse problem which derives from the nonparametric instrumental variable approach. In a Monte Carlo study, we analyze the finite sample properties of our estimator and we compare our approach to inverse probability weighting, which can be used alternatively for unconditional moment estimation. In the empirical application, we focus on two different applications. We estimate the association between income and health using linked data from the SHARE survey and administrative pension information and use pension entitlements as an instrument. In the second application we revisit the question how income affects the demand for housing based on data from the German Socio-Economic Panel Study (SOEP). In this application we use regional income information on the residential block level as an instrument. In both applications we show that income is selectively missing and we demonstrate that standard methods that do not account for the nonrandom selection process lead to significantly biased estimates for individuals with low income.",Nonparametric Regression with Selectively Missing Covariates,2018-09-30 18:52:54,"Christoph Breunig, Peter Haan","http://arxiv.org/abs/1810.00411v4, http://arxiv.org/pdf/1810.00411v4",econ.EM 28897,em,"The intention of this paper is to discuss the mathematical model of causality introduced by C.W.J. Granger in 1969. The Granger's model of causality has become well-known and often used in various econometric models describing causal systems, e.g., between commodity prices and exchange rates. Our paper presents a new mathematical model of causality between two measured objects. 
We have slightly modified the well-known Kolmogorovian probability model. In particular, we use the horizontal sum of set $\sigma$-algebras instead of their direct product.",Granger causality on horizontal sum of Boolean algebras,2018-10-03 12:27:43,"M. Bohdalová, M. Kalina, O. Nánásiová","http://arxiv.org/abs/1810.01654v1, http://arxiv.org/pdf/1810.01654v1",econ.EM 28898,em,"Explanatory variables in a predictive regression typically exhibit low signal strength and various degrees of persistence. Variable selection in such a context is of great importance. In this paper, we explore the pitfalls and possibilities of the LASSO methods in this predictive regression framework. In the presence of stationary, local unit root, and cointegrated predictors, we show that the adaptive LASSO cannot asymptotically eliminate all cointegrating variables with zero regression coefficients. This new finding motivates a novel post-selection adaptive LASSO, which we call the twin adaptive LASSO (TAlasso), to restore variable selection consistency. Accommodating the system of heterogeneous regressors, TAlasso achieves the well-known oracle property. In contrast, conventional LASSO fails to attain coefficient estimation consistency and variable screening in all components simultaneously. We apply these LASSO methods to evaluate the short- and long-horizon predictability of S\&P 500 excess returns.",On LASSO for Predictive Regression,2018-10-07 16:19:07,"Ji Hyung Lee, Zhentao Shi, Zhan Gao","http://arxiv.org/abs/1810.03140v4, http://arxiv.org/pdf/1810.03140v4",econ.EM 28899,em,"This paper proposes a new approach to obtain uniformly valid inference for linear functionals or scalar subvectors of a partially identified parameter defined by linear moment inequalities. The procedure amounts to bootstrapping the value functions of randomly perturbed linear programming problems, and does not require the researcher to grid over the parameter space. The low-level conditions for uniform validity rely on genericity results for linear programs. The unconventional perturbation approach produces a confidence set with a coverage probability of 1 over the identified set, but obtains exact coverage on an outer set, is valid under weak assumptions, and is computationally simple to implement.",Simple Inference on Functionals of Set-Identified Parameters Defined by Linear Moments,2018-10-07 20:03:14,"JoonHwan Cho, Thomas M. Russell","http://arxiv.org/abs/1810.03180v10, http://arxiv.org/pdf/1810.03180v10",econ.EM 28900,em,"In this paper we consider the properties of the Pesaran (2004, 2015a) CD test for cross-section correlation when applied to residuals obtained from panel data models with many estimated parameters. We show that the presence of period-specific parameters leads the CD test statistic to diverge as length of the time dimension of the sample grows. This result holds even if cross-section dependence is correctly accounted for and hence constitutes an example of the Incidental Parameters Problem. The relevance of this problem is investigated both for the classical Time Fixed Effects estimator as well as the Common Correlated Effects estimator of Pesaran (2006). We suggest a weighted CD test statistic which re-establishes standard normal inference under the null hypothesis. 
Given the widespread use of the CD test statistic to test for remaining cross-section correlation, our results have far reaching implications for empirical researchers.",The Incidental Parameters Problem in Testing for Remaining Cross-section Correlation,2018-10-09 00:48:52,"Arturas Juodis, Simon Reese","http://arxiv.org/abs/1810.03715v4, http://arxiv.org/pdf/1810.03715v4",econ.EM 28901,em,"This paper studies nonparametric identification and counterfactual bounds for heterogeneous firms that can be ranked in terms of productivity. Our approach works when quantities and prices are latent, rendering standard approaches inapplicable. Instead, we require observation of profits or other optimizing-values such as costs or revenues, and either prices or price proxies of flexibly chosen variables. We extend classical duality results for price-taking firms to a setup with discrete heterogeneity, endogeneity, and limited variation in possibly latent prices. Finally, we show that convergence results for nonparametric estimators may be directly converted to convergence results for production sets.","Prices, Profits, Proxies, and Production",2018-10-10 21:15:29,"Victor H. Aguiar, Nail Kashaev, Roy Allen","http://arxiv.org/abs/1810.04697v4, http://arxiv.org/pdf/1810.04697v4",econ.EM 28902,em,"A long-standing question about consumer behavior is whether individuals' observed purchase decisions satisfy the revealed preference (RP) axioms of the utility maximization theory (UMT). Researchers using survey or experimental panel data sets on prices and consumption to answer this question face the well-known problem of measurement error. We show that ignoring measurement error in the RP approach may lead to overrejection of the UMT. To solve this problem, we propose a new statistical RP framework for consumption panel data sets that allows for testing the UMT in the presence of measurement error. Our test is applicable to all consumer models that can be characterized by their first-order conditions. Our approach is nonparametric, allows for unrestricted heterogeneity in preferences, and requires only a centering condition on measurement error. We develop two applications that provide new evidence about the UMT. First, we find support in a survey data set for the dynamic and time-consistent UMT in single-individual households, in the presence of \emph{nonclassical} measurement error in consumption. In the second application, we cannot reject the static UMT in a widely used experimental data set in which measurement error in prices is assumed to be the result of price misperception due to the experimental design. The first finding stands in contrast to the conclusions drawn from the deterministic RP test of Browning (1989). The second finding reverses the conclusions drawn from the deterministic RP test of Afriat (1967) and Varian (1982).",Stochastic Revealed Preferences with Measurement Error,2018-10-12 02:25:24,"Victor H. Aguiar, Nail Kashaev","http://arxiv.org/abs/1810.05287v2, http://arxiv.org/pdf/1810.05287v2",econ.EM 28903,em,"In this paper, we study estimation of nonlinear models with cross sectional data using two-step generalized estimating equations (GEE) in the quasi-maximum likelihood estimation (QMLE) framework. In the interest of improving efficiency, we propose a grouping estimator to account for the potential spatial correlation in the underlying innovations. 
We use a Poisson model and a Negative Binomial II model for count data and a Probit model for binary response data to demonstrate the GEE procedure. Under mild weak dependency assumptions, results on estimation consistency and asymptotic normality are provided. Monte Carlo simulations show the efficiency gain of our approach in comparison with different estimation methods for count data and binary response data. Finally, we apply the GEE approach to study the determinants of the inflow of foreign direct investment (FDI) to China.",Using generalized estimating equations to estimate nonlinear models with spatial data,2018-10-13 15:58:41,"Cuicui Lu, Weining Wang, Jeffrey M. Wooldridge","http://arxiv.org/abs/1810.05855v1, http://arxiv.org/pdf/1810.05855v1",econ.EM 28925,em,"Nonparametric Instrumental Variables (NPIV) analysis is based on a conditional moment restriction. We show that if this moment condition is even slightly misspecified, say because instruments are not quite valid, then NPIV estimates can be subject to substantial asymptotic error and the identified set under a relaxed moment condition may be large. Imposing strong a priori smoothness restrictions mitigates the problem but induces bias if the restrictions are too strong. In order to manage this trade-off, we develop methods for empirical sensitivity analysis and apply them to the consumer demand data previously analyzed in Blundell (2007) and Horowitz (2011).",Nonparametric Instrumental Variables Estimation Under Misspecification,2019-01-04 21:52:59,Ben Deaner,"http://arxiv.org/abs/1901.01241v7, http://arxiv.org/pdf/1901.01241v7",econ.EM 28904,em,"This paper develops a consistent heteroskedasticity robust Lagrange Multiplier (LM) type specification test for semiparametric conditional mean models. Consistency is achieved by turning a conditional moment restriction into a growing number of unconditional moment restrictions using series methods. The proposed test statistic is straightforward to compute and is asymptotically standard normal under the null. Compared with the earlier literature on series-based specification tests in parametric models, I rely on the projection property of series estimators and derive a different normalization of the test statistic. Compared with the recent test in Gupta (2018), I use a different way of accounting for heteroskedasticity. I demonstrate using Monte Carlo studies that my test has superior finite sample performance compared with the existing tests. I apply the test to one of the semiparametric gasoline demand specifications from Yatchew and No (2001) and find no evidence against it.",A Consistent Heteroskedasticity Robust LM Type Specification Test for Semiparametric Models,2018-10-17 18:37:02,Ivan Korolev,"http://arxiv.org/abs/1810.07620v3, http://arxiv.org/pdf/1810.07620v3",econ.EM 28905,em,"This study considers treatment effect models in which others' treatment decisions can affect both one's own treatment and outcome. Focusing on the case of two-player interactions, we formulate treatment decision behavior as a complete information game with multiple equilibria. Using a latent index framework and assuming a stochastic equilibrium selection, we prove that the marginal treatment effect from one's own treatment and that from the partner are identifiable on the conditional supports of certain threshold variables determined through the game model.
Based on our constructive identification results, we propose a two-step semiparametric procedure for estimating the marginal treatment effects using series approximation. We show that the proposed estimator is uniformly consistent and asymptotically normally distributed. As an empirical illustration, we investigate the impacts of risky behaviors on adolescents' academic performance.",Treatment Effect Models with Strategic Interaction in Treatment Decisions,2018-10-19 06:51:42,"Tadao Hoshino, Takahide Yanagi","http://arxiv.org/abs/1810.08350v11, http://arxiv.org/pdf/1810.08350v11",econ.EM 28906,em,"In this paper we include dependency structures for electricity price forecasting and forecasting evaluation. We work with off-peak and peak time series from the German-Austrian day-ahead price, hence we analyze bivariate data. We first estimate the mean of the two time series, and then in a second step we estimate the residuals. The mean equation is estimated by OLS and elastic net and the residuals are estimated by maximum likelihood. Our contribution is to include a bivariate jump component on a mean reverting jump diffusion model in the residuals. The models' forecasts are evaluated using four different criteria, including the energy score to measure whether the correlation structure between the time series is properly included or not. In the results it is observed that the models with bivariate jumps provide better results with the energy score, which means that it is important to consider this structure in order to properly forecast correlated time series.",Probabilistic Forecasting in Day-Ahead Electricity Markets: Simulating Peak and Off-Peak Prices,2018-10-19 12:27:16,"Peru Muniain, Florian Ziel","http://dx.doi.org/10.1016/j.ijforecast.2019.11.006, http://arxiv.org/abs/1810.08418v2, http://arxiv.org/pdf/1810.08418v2",econ.EM 28907,em,"We propose a novel two-regime regression model where regime switching is driven by a vector of possibly unobservable factors. When the factors are latent, we estimate them by the principal component analysis of a panel data set. We show that the optimization problem can be reformulated as mixed integer optimization, and we present two alternative computational algorithms. We derive the asymptotic distribution of the resulting estimator under the scheme that the threshold effect shrinks to zero. In particular, we establish a phase transition that describes the effect of first-stage factor estimation as the cross-sectional dimension of panel data increases relative to the time-series dimension. Moreover, we develop bootstrap inference and illustrate our methods via numerical studies.",Factor-Driven Two-Regime Regression,2018-10-26 00:12:52,"Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin","http://dx.doi.org/10.1214/20-AOS2017, http://arxiv.org/abs/1810.11109v4, http://arxiv.org/pdf/1810.11109v4",econ.EM 28908,em,"Let Y be an outcome of interest, X a vector of treatment measures, and W a vector of pre-treatment control variables. Here X may include (combinations of) continuous, discrete, and/or non-mutually exclusive ""treatments"". Consider the linear regression of Y onto X in a subpopulation homogenous in W = w (formally a conditional linear predictor). Let b0(w) be the coefficient vector on X in this regression. We introduce a semiparametrically efficient estimate of the average beta0 = E[b0(W)]. When X is binary-valued (multi-valued) our procedure recovers the (a vector of) average treatment effect(s). 
When X is continuously-valued, or consists of multiple non-exclusive treatments, our estimand coincides with the average partial effect (APE) of X on Y when the underlying potential response function is linear in X, but otherwise heterogeneous across agents. When the potential response function takes a general nonlinear/heterogeneous form, and X is continuously-valued, our procedure recovers a weighted average of the gradient of this response across individuals and values of X. We provide a simple, and semiparametrically efficient, method of covariate adjustment for settings with complicated treatment regimes. Our method generalizes familiar methods of covariate adjustment used for program evaluation as well as methods of semiparametric regression (e.g., the partially linear regression model).",Semiparametrically efficient estimation of the average linear regression function,2018-10-30 06:26:33,"Bryan S. Graham, Cristine Campos de Xavier Pinto","http://arxiv.org/abs/1810.12511v1, http://arxiv.org/pdf/1810.12511v1",econ.EM 28909,em,"We investigate the finite sample performance of causal machine learning estimators for heterogeneous causal effects at different aggregation levels. We employ an Empirical Monte Carlo Study that relies on arguably realistic data generation processes (DGPs) based on actual data. We consider 24 different DGPs, eleven different causal machine learning estimators, and three aggregation levels of the estimated effects. In the main DGPs, we allow for selection into treatment based on a rich set of observable covariates. We provide evidence that the estimators can be categorized into three groups. The first group performs consistently well across all DGPs and aggregation levels. These estimators have multiple steps to account for the selection into the treatment and the outcome process. The second group shows competitive performance only for particular DGPs. The third group is clearly outperformed by the other estimators.",Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence,2018-10-31 15:10:25,"Michael C. Knaus, Michael Lechner, Anthony Strittmatter","http://dx.doi.org/10.1093/ectj/utaa014, http://arxiv.org/abs/1810.13237v2, http://arxiv.org/pdf/1810.13237v2",econ.EM 28911,em,"This article proposes doubly robust estimators for the average treatment effect on the treated (ATT) in difference-in-differences (DID) research designs. In contrast to alternative DID estimators, the proposed estimators are consistent if either (but not necessarily both) a propensity score or an outcome regression working model is correctly specified. We also derive the semiparametric efficiency bound for the ATT in DID designs when either panel or repeated cross-section data are available, and show that our proposed estimators attain the semiparametric efficiency bound when the working models are correctly specified. Furthermore, we quantify the potential efficiency gains of having access to panel data instead of repeated cross-section data. Finally, by paying particular attention to the estimation method used to estimate the nuisance parameters, we show that one can sometimes construct doubly robust DID estimators for the ATT that are also doubly robust for inference. Simulation studies and an empirical application illustrate the desirable finite-sample performance of the proposed estimators.
Open-source software for implementing the proposed policy evaluation tools is available.",Doubly Robust Difference-in-Differences Estimators,2018-11-30 00:18:26,"Pedro H. C. Sant'Anna, Jun B. Zhao","http://arxiv.org/abs/1812.01723v3, http://arxiv.org/pdf/1812.01723v3",econ.EM 28912,em,"This paper examines a commonly used measure of persuasion whose precise interpretation has been obscure in the literature. By using the potential outcome framework, we define the causal persuasion rate by a proper conditional probability of taking the action of interest with a persuasive message conditional on not taking the action without the message. We then formally study identification under empirically relevant data scenarios and show that the commonly adopted measure generally does not estimate, but often overstates, the causal rate of persuasion. We discuss several new parameters of interest and provide practical methods for causal inference.",Identifying the Effect of Persuasion,2018-12-06 03:20:35,"Sung Jae Jun, Sokbae Lee","http://arxiv.org/abs/1812.02276v6, http://arxiv.org/pdf/1812.02276v6",econ.EM 28913,em,"We develop a uniform test for detecting and dating explosive behavior of a strictly stationary GARCH$(r,s)$ (generalized autoregressive conditional heteroskedasticity) process. Namely, we test the null hypothesis of a globally stable GARCH process with constant parameters against an alternative where there is an 'abnormal' period with changed parameter values. During this period, the change may lead to an explosive behavior of the volatility process. It is assumed that both the magnitude and the timing of the breaks are unknown. We develop a double supreme test for the existence of a break, and then provide an algorithm to identify the period of change. Our theoretical results hold under mild moment assumptions on the innovations of the GARCH process. Technically, the existing properties for the QMLE in the GARCH model need to be reinvestigated to hold uniformly over all possible periods of change. The key results involve a uniform weak Bahadur representation for the estimated parameters, which leads to weak convergence of the test statistic to the supreme of a Gaussian Process. In simulations we show that the test has good size and power for reasonably large time series lengths. We apply the test to Apple asset returns and Bitcoin returns.",A supreme test for periodic explosive GARCH,2018-12-09 15:51:14,"Stefan Richter, Weining Wang, Wei Biao Wu","http://arxiv.org/abs/1812.03475v1, http://arxiv.org/pdf/1812.03475v1",econ.EM 28914,em,"Recent studies have proposed causal machine learning (CML) methods to estimate conditional average treatment effects (CATEs). In this study, I investigate whether CML methods add value compared to conventional CATE estimators by re-evaluating Connecticut's Jobs First welfare experiment. This experiment entails a mix of positive and negative work incentives. Previous studies show that it is hard to tackle the effect heterogeneity of Jobs First by means of CATEs. I report evidence that CML methods can provide support for the theoretical labor supply predictions. 
Furthermore, I document reasons why some conventional CATE estimators fail and discuss the limitations of CML methods.",What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation?,2018-12-16 23:24:02,Anthony Strittmatter,"http://arxiv.org/abs/1812.06533v3, http://arxiv.org/pdf/1812.06533v3",econ.EM 28915,em,"This paper explores the use of a fuzzy regression discontinuity design where multiple treatments are applied at the threshold. The identification results show that, under the very strong assumption that the change in the probability of treatment at the cutoff is equal across treatments, a difference-in-discontinuities estimator identifies the treatment effect of interest. The point estimates of the treatment effect using a simple fuzzy difference-in-discontinuities design are biased if the change in the probability of a treatment applying at the cutoff differs across treatments. Modifications of the fuzzy difference-in-discontinuities approach that rely on milder assumptions are also proposed. Our results suggest caution is needed when applying before-and-after methods in the presence of fuzzy discontinuities. Using data from the National Health Interview Survey, we apply this new identification strategy to evaluate the causal effect of the Affordable Care Act (ACA) on older Americans' health care access and utilization.",Fuzzy Difference-in-Discontinuities: Identification Theory and Application to the Affordable Care Act,2018-12-17 00:27:54,"Hector Galindo-Silva, Nibene Habib Some, Guy Tchuente","http://arxiv.org/abs/1812.06537v3, http://arxiv.org/pdf/1812.06537v3",econ.EM 28916,em,"We propose convenient inferential methods for potentially nonstationary multivariate unobserved components models with fractional integration and cointegration. Based on finite-order ARMA approximations in the state space representation, maximum likelihood estimation can make use of the EM algorithm and related techniques. The approximation outperforms the frequently used autoregressive or moving average truncation, both in terms of computational costs and with respect to approximation quality. Monte Carlo simulations reveal good estimation properties of the proposed methods for processes of different complexity and dimension.",Approximate State Space Modelling of Unobserved Fractional Components,2018-12-21 17:25:45,"Tobias Hartl, Roland Weigand","http://dx.doi.org/10.1080/07474938.2020.1841444, http://arxiv.org/abs/1812.09142v3, http://arxiv.org/pdf/1812.09142v3",econ.EM 28917,em,"We propose a setup for fractionally cointegrated time series which is formulated in terms of latent integrated and short-memory components. It accommodates nonstationary processes with different fractional orders and cointegration of different strengths and is applicable in high-dimensional settings. In an application to realized covariance matrices, we find that orthogonal short- and long-memory components provide a reasonable fit and competitive out-of-sample performance compared to several competing methods.",Multivariate Fractional Components Analysis,2018-12-21 17:33:27,"Tobias Hartl, Roland Weigand","http://arxiv.org/abs/1812.09149v2, http://arxiv.org/pdf/1812.09149v2",econ.EM 28919,em,"Consider a setting in which a policy maker assigns subjects to treatments, observing each outcome before the next subject arrives. Initially, it is unknown which treatment is best, but the sequential nature of the problem permits learning about the effectiveness of the treatments. 
While the multi-armed-bandit literature has shed much light on the situation when the policy maker compares the effectiveness of the treatments through their mean, much less is known about other targets. This is restrictive, because a cautious decision maker may prefer to target a robust location measure such as a quantile or a trimmed mean. Furthermore, socio-economic decision making often requires targeting purpose-specific characteristics of the outcome distribution, such as its inherent degree of inequality, welfare or poverty. In the present paper we introduce and study sequential learning algorithms when the distributional characteristic of interest is a general functional of the outcome distribution. Minimax expected regret optimality results are obtained within the subclass of explore-then-commit policies, and for the unrestricted class of all policies.",Functional Sequential Treatment Allocation,2018-12-22 02:18:13,"Anders Bredahl Kock, David Preinerstorfer, Bezirgen Veliyev","http://arxiv.org/abs/1812.09408v8, http://arxiv.org/pdf/1812.09408v8",econ.EM 28920,em,"In many applications common in testing for convergence, the number of cross-sectional units is large and the number of time periods is few. In these situations, asymptotic tests based on an omnibus null hypothesis are characterised by a number of problems. In this paper we propose a multiple pairwise comparisons method based on a recursive bootstrap to test for convergence with no prior information on the composition of convergence clubs. Monte Carlo simulations suggest that our bootstrap-based test performs well in correctly identifying convergence clubs when compared with other similar tests that rely on asymptotic arguments. Across a potentially large number of regions, using both cross-country and regional data for the European Union, we find that the size distortion which afflicts standard tests, and results in a bias towards finding less convergence, is ameliorated when we utilise our bootstrap test.",Robust Tests for Convergence Clubs,2018-12-22 15:11:04,"Luisa Corrado, Melvyn Weeks, Thanasis Stengos, M. Ege Yazgan","http://arxiv.org/abs/1812.09518v1, http://arxiv.org/pdf/1812.09518v1",econ.EM 28921,em,"We propose a practical and robust method for making inferences on average treatment effects estimated by synthetic controls. We develop a $K$-fold cross-fitting procedure for bias-correction. To avoid the difficult estimation of the long-run variance, inference is based on a self-normalized $t$-statistic, which has an asymptotically pivotal $t$-distribution. Our $t$-test is easy to implement, provably robust against misspecification, valid with non-stationary data, and demonstrates excellent small sample performance. Compared to difference-in-differences, our method often yields more than 50% shorter confidence intervals and is robust to violations of parallel trends assumptions. An $\texttt{R}$-package for implementing our methods is available.",A $t$-test for synthetic controls,2018-12-27 23:40:13,"Victor Chernozhukov, Kaspar Wuthrich, Yinchu Zhu","http://arxiv.org/abs/1812.10820v7, http://arxiv.org/pdf/1812.10820v7",econ.EM 28922,em,"The instrumental variable quantile regression (IVQR) model (Chernozhukov and Hansen, 2005) is a popular tool for estimating causal quantile effects with endogenous covariates. However, estimation is complicated by the non-smoothness and non-convexity of the IVQR GMM objective function.
This paper shows that the IVQR estimation problem can be decomposed into a set of conventional quantile regression sub-problems which are convex and can be solved efficiently. This reformulation leads to new identification results and to fast, easy-to-implement, and tuning-free estimators that do not require the availability of high-level ""black box"" optimization routines.",Decentralization Estimators for Instrumental Variable Quantile Regression Models,2018-12-28 11:50:33,"Hiroaki Kaido, Kaspar Wuthrich","http://arxiv.org/abs/1812.10925v4, http://arxiv.org/pdf/1812.10925v4",econ.EM 28923,em,"Predicting future successful designs and corresponding market opportunity is a fundamental goal of product design firms. There is accordingly a long history of quantitative approaches that aim to capture diverse consumer preferences, and then translate those preferences to corresponding ""design gaps"" in the market. We extend this work by developing a deep learning approach to predict design gaps in the market. These design gaps represent clusters of designs that do not yet exist, but are predicted to be both (1) highly preferred by consumers, and (2) feasible to build under engineering and manufacturing constraints. This approach is tested on the entire U.S. automotive market using millions of real purchase records. We retroactively predict design gaps in the market, and compare predicted design gaps with actual known successful designs. Our preliminary results give evidence that it may be possible to predict design gaps, suggesting this approach has promise for early identification of market opportunity.","Predicting ""Design Gaps"" in the Market: Deep Consumer Choice Models under Probabilistic Design Constraints",2018-12-28 18:56:46,"Alex Burnap, John Hauser","http://arxiv.org/abs/1812.11067v1, http://arxiv.org/pdf/1812.11067v1",econ.EM 28924,em,"This paper studies identification and estimation of a class of dynamic models in which the decision maker (DM) is uncertain about the data-generating process. The DM surrounds a benchmark model that he or she fears is misspecified by a set of models. Decisions are evaluated under a worst-case model delivering the lowest utility among all models in this set. The DM's benchmark model and preference parameters are jointly underidentified. With the benchmark model held fixed, primitive conditions are established for identification of the DM's worst-case model and preference parameters. The key step in the identification analysis is to establish existence and uniqueness of the DM's continuation value function allowing for unbounded state space and unbounded utilities. To do so, fixed-point results are derived for monotone, convex operators that act on a Banach space of thin-tailed functions arising naturally from the structure of the continuation value recursion. The fixed-point results are quite general; applications to models with learning and Rust-type dynamic discrete choice models are also discussed. For estimation, a perturbation result is derived which provides a necessary and sufficient condition for consistent estimation of continuation values and the worst-case model. The result also allows convergence rates of estimators to be characterized. An empirical application studies an endowment economy where the DM's benchmark model may be interpreted as an aggregate of experts' forecasting models. The application reveals time-variation in the way the DM pessimistically distorts benchmark probabilities.
Consequences for asset pricing are explored and connections are drawn with the literature on macroeconomic uncertainty.",Dynamic Models with Robust Decision Makers: Identification and Estimation,2018-12-29 02:36:41,Timothy M. Christensen,"http://arxiv.org/abs/1812.11246v3, http://arxiv.org/pdf/1812.11246v3",econ.EM 28926,em,"This paper introduces a flexible regularization approach that reduces point estimation risk of group means stemming from e.g. categorical regressors, (quasi-)experimental data or panel data models. The loss function is penalized by adding weighted squared l2-norm differences between group location parameters and informative first-stage estimates. Under quadratic loss, the penalized estimation problem has a simple interpretable closed-form solution that nests methods established in the literature on ridge regression, discretized support smoothing kernels and model averaging methods. We derive risk-optimal penalty parameters and propose a plug-in approach for estimation. The large sample properties are analyzed in an asymptotic local to zero framework by introducing a class of sequences for close and distant systems of locations that is sufficient for describing a large range of data generating processes. We provide the asymptotic distributions of the shrinkage estimators under different penalization schemes. The proposed plug-in estimator uniformly dominates the ordinary least squares in terms of asymptotic risk if the number of groups is larger than three. Monte Carlo simulations reveal robust improvements over standard methods in finite samples. Real data examples of estimating time trends in a panel and a difference-in-differences study illustrate potential applications.",Shrinkage for Categorical Regressors,2019-01-07 19:17:23,"Phillip Heiler, Jana Mareckova","http://arxiv.org/abs/1901.01898v1, http://arxiv.org/pdf/1901.01898v1",econ.EM 28927,em,"This article introduces lassopack, a suite of programs for regularized regression in Stata. lassopack implements lasso, square-root lasso, elastic net, ridge regression, adaptive lasso and post-estimation OLS. The methods are suitable for the high-dimensional setting where the number of predictors $p$ may be large and possibly greater than the number of observations, $n$. We offer three different approaches for selecting the penalization (`tuning') parameters: information criteria (implemented in lasso2), $K$-fold cross-validation and $h$-step ahead rolling cross-validation for cross-section, panel and time-series data (cvlasso), and theory-driven (`rigorous') penalization for the lasso and square-root lasso for cross-section and panel data (rlasso). We discuss the theoretical framework and practical considerations for each approach. We also present Monte Carlo results to compare the performance of the penalization approaches.",lassopack: Model selection and prediction with regularized regression in Stata,2019-01-16 20:30:27,"Achim Ahrens, Christian B. Hansen, Mark E. Schaffer","http://arxiv.org/abs/1901.05397v1, http://arxiv.org/pdf/1901.05397v1",econ.EM 28928,em,"The maximum utility estimation proposed by Elliott and Lieli (2013) can be viewed as cost-sensitive binary classification; thus, its in-sample overfitting issue is similar to that of perceptron learning. A utility-maximizing prediction rule (UMPR) is constructed to alleviate the in-sample overfitting of the maximum utility estimation. 
We establish non-asymptotic upper bounds on the difference between the maximal expected utility and the generalized expected utility of the UMPR. Simulation results show that the UMPR with an appropriate data-dependent penalty achieves larger generalized expected utility than common estimators in the binary classification if the conditional probability of the binary outcome is misspecified.",Model Selection in Utility-Maximizing Binary Prediction,2019-03-02 18:02:50,Jiun-Hua Su,"http://dx.doi.org/10.1016/j.jeconom.2020.07.052, http://arxiv.org/abs/1903.00716v3, http://arxiv.org/pdf/1903.00716v3",econ.EM 28929,em,"We provide a finite sample inference method for the structural parameters of a semiparametric binary response model under a conditional median restriction originally studied by Manski (1975, 1985). Our inference method is valid for any sample size and irrespective of whether the structural parameters are point identified or partially identified, for example due to the lack of a continuously distributed covariate with large support. Our inference approach exploits distributional properties of observable outcomes conditional on the observed sequence of exogenous variables. Moment inequalities conditional on this size n sequence of exogenous covariates are constructed, and the test statistic is a monotone function of violations of sample moment inequalities. The critical value used for inference is provided by the appropriate quantile of a known function of n independent Rademacher random variables. We investigate power properties of the underlying test and provide simulation studies to support the theoretical findings.",Finite Sample Inference for the Maximum Score Estimand,2019-03-04 22:53:00,"Adam M. Rosen, Takuya Ura","http://arxiv.org/abs/1903.01511v2, http://arxiv.org/pdf/1903.01511v2",econ.EM 28930,em,"A fundamental problem with nonlinear models is that maximum likelihood estimates are not guaranteed to exist. Though nonexistence is a well known problem in the binary choice literature, it presents significant challenges for other models as well and is not as well understood in more general settings. These challenges are only magnified for models that feature many fixed effects and other high-dimensional parameters. We address the current ambiguity surrounding this topic by studying the conditions that govern the existence of estimates for (pseudo-)maximum likelihood estimators used to estimate a wide class of generalized linear models (GLMs). We show that some, but not all, of these GLM estimators can still deliver consistent estimates of at least some of the linear parameters when these conditions fail to hold. We also demonstrate how to verify these conditions in models with high-dimensional parameters, such as panel data models with multiple levels of fixed effects.",Verifying the existence of maximum likelihood estimates for generalized linear models,2019-03-05 05:18:49,"Sergio Correia, Paulo Guimarães, Thomas Zylkin","http://arxiv.org/abs/1903.01633v6, http://arxiv.org/pdf/1903.01633v6",econ.EM 28931,em,"Bojinov & Shephard (2019) defined potential outcome time series to nonparametrically measure dynamic causal effects in time series experiments. 
Four innovations are developed in this paper: ""instrumental paths,"" treatments which are ""shocks,"" ""linear potential outcomes"" and the ""causal response function."" Potential outcome time series are then used to provide a nonparametric causal interpretation of impulse response functions, generalized impulse response functions, local projections and LP-IV.","Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function",2019-03-05 05:53:08,"Ashesh Rambachan, Neil Shephard","http://arxiv.org/abs/1903.01637v3, http://arxiv.org/pdf/1903.01637v3",econ.EM 28932,em,"In this paper we present ppmlhdfe, a new Stata command for estimation of (pseudo) Poisson regression models with multiple high-dimensional fixed effects (HDFE). Estimation is implemented using a modified version of the iteratively reweighted least-squares (IRLS) algorithm that allows for fast estimation in the presence of HDFE. Because the code is built around the reghdfe package, it has similar syntax, supports many of the same functionalities, and benefits from reghdfe's fast convergence properties for computing high-dimensional least squares problems. Performance is further enhanced by some new techniques we introduce for accelerating HDFE-IRLS estimation specifically. ppmlhdfe also implements a novel and more robust approach to check for the existence of (pseudo) maximum likelihood estimates.",ppmlhdfe: Fast Poisson Estimation with High-Dimensional Fixed Effects,2019-03-05 09:11:26,"Sergio Correia, Paulo Guimarães, Thomas Zylkin","http://dx.doi.org/10.1177/1536867X20909691, http://arxiv.org/abs/1903.01690v3, http://arxiv.org/pdf/1903.01690v3",econ.EM 28933,em,"A fixed effects regression estimator is introduced that can directly identify and estimate the Africa-Dummy in one regression step so that its correct standard errors as well as correlations with other coefficients can easily be estimated. We estimate the Nickell bias and find it to be negligible. Semiparametric extensions check whether the Africa-Dummy is simply a result of misspecification of the functional form. In particular, we show that the returns to growth factors are different for Sub-Saharan African countries compared to the rest of the world. For example, returns to population growth are positive and beta-convergence is faster. When extending the model to trace the development of the Africa-Dummy over time, we see that it has changed dramatically and that the penalty for Sub-Saharan African countries has decreased incrementally, reaching insignificance around the turn of the millennium.",The Africa-Dummy: Gone with the Millennium?,2019-03-06 16:18:13,"Max Köhler, Stefan Sperlich","http://arxiv.org/abs/1903.02357v1, http://arxiv.org/pdf/1903.02357v1",econ.EM 28934,em,"Various papers demonstrate the importance of inequality, poverty and the size of the middle class for economic growth. When explaining why these measures of the income distribution are added to the growth regression, it is often mentioned that poor people behave differently, which may translate to the economy as a whole. However, simply adding explanatory variables does not reflect this behavior. Using a varying coefficient model, we show that the returns to growth differ a lot depending on poverty and inequality. Furthermore, we investigate how these returns differ for the poorer and for the richer parts of the societies.
We argue that the differences in the coefficients, on the one hand, prevent the mean coefficients from being informative and, on the other hand, challenge the credibility of the economic interpretation. In short, we show that, when estimating mean coefficients without accounting for poverty and inequality, the estimation is likely to suffer from a serious endogeneity bias.",A Varying Coefficient Model for Assessing the Returns to Growth to Account for Poverty and Inequality,2019-03-06 17:07:05,"Max Köhler, Stefan Sperlich, Jisu Yoon","http://arxiv.org/abs/1903.02390v1, http://arxiv.org/pdf/1903.02390v1",econ.EM 28935,em,"We consider inference on the probability density of valuations in the first-price sealed-bid auctions model within the independent private value paradigm. We show the asymptotic normality of the two-step nonparametric estimator of Guerre, Perrigne, and Vuong (2000) (GPV), and propose an easily implementable and consistent estimator of the asymptotic variance. We prove the validity of the pointwise percentile bootstrap confidence intervals based on the GPV estimator. Lastly, we use the intermediate Gaussian approximation approach to construct bootstrap-based asymptotically valid uniform confidence bands for the density of the valuations.","Inference for First-Price Auctions with Guerre, Perrigne, and Vuong's Estimator",2019-03-15 11:09:33,"Jun Ma, Vadim Marmer, Artyom Shneyerov","http://dx.doi.org/10.1016/j.jeconom.2019.02.006, http://arxiv.org/abs/1903.06401v1, http://arxiv.org/pdf/1903.06401v1",econ.EM 28936,em,"Empirical growth analysis has three major problems --- variable selection, parameter heterogeneity and cross-sectional dependence --- which are addressed independently of each other in most studies. The purpose of this study is to propose an integrated framework that extends the conventional linear growth regression model to allow for parameter heterogeneity and cross-sectional error dependence, while simultaneously performing variable selection. We also derive the asymptotic properties of the estimator under both low and high dimensions, and further investigate the finite sample performance of the estimator through Monte Carlo simulations. We apply the framework to a dataset of 89 countries over the period from 1960 to 2014. Our results reveal some cross-country patterns not found in previous studies (e.g., ""middle income trap hypothesis"", ""natural resources curse hypothesis"", ""religion works via belief, not practice"", etc.).",An Integrated Panel Data Approach to Modelling Economic Growth,2019-03-19 14:38:09,"Guohua Feng, Jiti Gao, Bin Peng","http://arxiv.org/abs/1903.07948v1, http://arxiv.org/pdf/1903.07948v1",econ.EM 28937,em,"We propose a new approach to mixed-frequency regressions in a high-dimensional environment that resorts to Group Lasso penalization and Bayesian techniques for estimation and inference. In particular, to improve the prediction properties of the model and its sparse recovery ability, we consider a Group Lasso with a spike-and-slab prior. Penalty hyper-parameters governing the model shrinkage are automatically tuned via an adaptive MCMC algorithm. We establish good frequentist asymptotic properties of the posterior of the in-sample and out-of-sample prediction error, we recover the optimal posterior contraction rate, and we show optimality of the posterior predictive density.
Simulations show that the proposed models have good selection and forecasting performance in small samples, even when the design matrix presents cross-correlation. When applied to forecasting U.S. GDP, our penalized regressions can outperform many strong competitors. Results suggest that financial variables may have some, although very limited, short-term predictive content.","Bayesian MIDAS Penalized Regressions: Estimation, Selection, and Prediction",2019-03-19 17:42:37,"Matteo Mogliani, Anna Simoni","http://arxiv.org/abs/1903.08025v3, http://arxiv.org/pdf/1903.08025v3",econ.EM 28938,em,"I study a regression model in which one covariate is an unknown function of a latent driver of link formation in a network. Rather than specify and fit a parametric network formation model, I introduce a new method based on matching pairs of agents with similar columns of the squared adjacency matrix, the ijth entry of which contains the number of other agents linked to both agents i and j. The intuition behind this approach is that for a large class of network formation models the columns of the squared adjacency matrix characterize all of the identifiable information about individual linking behavior. In this paper, I describe the model, formalize this intuition, and provide consistent estimators for the parameters of the regression model. Auerbach (2021) considers inference and an application to network peer effects.",Identification and Estimation of a Partially Linear Regression Model using Network Data,2019-03-22 21:59:22,Eric Auerbach,"http://arxiv.org/abs/1903.09679v3, http://arxiv.org/pdf/1903.09679v3",econ.EM 28939,em,"This paper studies a panel data setting where the goal is to estimate causal effects of an intervention by predicting the counterfactual values of outcomes for treated units, had they not received the treatment. Several approaches have been proposed for this problem, including regression methods, synthetic control methods and matrix completion methods. This paper considers an ensemble approach, and shows that it performs better than any of the individual methods in several economic datasets. Matrix completion methods are often given the most weight by the ensemble, but this clearly depends on the setting. We argue that ensemble methods present a fruitful direction for further research in the causal panel data setting.",Ensemble Methods for Causal Effects in Panel Data Settings,2019-03-25 02:21:52,"Susan Athey, Mohsen Bayati, Guido Imbens, Zhaonan Qu","http://arxiv.org/abs/1903.10079v1, http://arxiv.org/pdf/1903.10079v1",econ.EM 28941,em,"How can one determine whether a community-level treatment, such as the introduction of a social program or trade shock, alters agents' incentives to form links in a network? This paper proposes analogues of a two-sample Kolmogorov-Smirnov test, widely used in the literature to test the null hypothesis of ""no treatment effects"", for network data. It first specifies a testing problem in which the null hypothesis is that two networks are drawn from the same random graph model. It then describes two randomization tests based on the magnitude of the difference between the networks' adjacency matrices as measured by the $2\to2$ and $\infty\to1$ operator norms. Power properties of the tests are examined analytically, in simulation, and through two real-world applications. 
A key finding is that the test based on the $\infty\to1$ norm can be substantially more powerful than that based on the $2\to2$ norm for the kinds of sparse and degree-heterogeneous networks common in economics.",Testing for Differences in Stochastic Network Structure,2019-03-26 22:00:45,Eric Auerbach,"http://arxiv.org/abs/1903.11117v5, http://arxiv.org/pdf/1903.11117v5",econ.EM 28942,em,"This paper studies a regularized support function estimator for bounds on components of the parameter vector in the case in which the identified set is a polygon. The proposed regularized estimator has three important properties: (i) it has a uniform asymptotic Gaussian limit in the presence of flat faces in the absence of redundant (or overidentifying) constraints (or vice versa); (ii) the bias from regularization does not enter the first-order limiting distribution; (iii) the estimator remains consistent for the sharp identified set for the individual components even in the non-regular case. These properties are used to construct uniformly valid confidence sets for an element $\theta_{1}$ of a parameter vector $\theta\in\mathbb{R}^{d}$ that is partially identified by affine moment equality and inequality conditions. The proposed confidence sets can be computed as a solution to a small number of linear and convex quadratic programs, which leads to a substantial decrease in computation time and guarantees a global optimum. As a result, the method provides uniformly valid inference in applications in which the dimension of the parameter space, $d$, and the number of inequalities, $k$, were previously computationally infeasible ($d,k=100$). The proposed approach can be extended to construct confidence sets for intersection bounds, to construct joint polygon-shaped confidence sets for multiple components of $\theta$, and to find the set of solutions to a linear program. Inference for coefficients in the linear IV regression model with an interval outcome is used as an illustrative example.",Simple subvector inference on sharp identified set in affine models,2019-03-30 01:49:40,Bulat Gafarov,"http://arxiv.org/abs/1904.00111v2, http://arxiv.org/pdf/1904.00111v2",econ.EM 28943,em,"Three-dimensional panel models are widely used in empirical analysis. Researchers use various combinations of fixed effects for three-dimensional panels. When one imposes a parsimonious model and the true model is rich, then it incurs mis-specification biases. When one employs a rich model and the true model is parsimonious, then it incurs larger standard errors than necessary. It is therefore useful for researchers to know the correct model. In this light, Lu, Miao, and Su (2018) propose methods of model selection. We advance this literature by proposing a method of post-selection inference for regression parameters. Despite our use of the lasso technique as a means of model selection, our assumptions allow for many and even all fixed effects to be nonzero. Simulation studies demonstrate that the proposed method is more precise than under-fitting fixed effect estimators, is more efficient than over-fitting fixed effect estimators, and allows for as accurate inference as the oracle estimator.",Post-Selection Inference in Three-Dimensional Panel Data,2019-03-30 15:51:35,"Harold D.
Chiang, Joel Rodrigue, Yuya Sasaki","http://arxiv.org/abs/1904.00211v2, http://arxiv.org/pdf/1904.00211v2",econ.EM 28944,em,"We propose a framework for analyzing the sensitivity of counterfactuals to parametric assumptions about the distribution of latent variables in structural models. In particular, we derive bounds on counterfactuals as the distribution of latent variables spans nonparametric neighborhoods of a given parametric specification while other ""structural"" features of the model are maintained. Our approach recasts the infinite-dimensional problem of optimizing the counterfactual with respect to the distribution of latent variables (subject to model constraints) as a finite-dimensional convex program. We also develop an MPEC version of our method to further simplify computation in models with endogenous parameters (e.g., value functions) defined by equilibrium constraints. We propose plug-in estimators of the bounds and two methods for inference. We also show that our bounds converge to the sharp nonparametric bounds on counterfactuals as the neighborhood size becomes large. To illustrate the broad applicability of our procedure, we present empirical applications to matching models with transferable utility and dynamic discrete choice models.",Counterfactual Sensitivity and Robustness,2019-04-01 20:53:20,"Timothy Christensen, Benjamin Connault","http://arxiv.org/abs/1904.00989v4, http://arxiv.org/pdf/1904.00989v4",econ.EM 28945,em,"Models with a discrete endogenous variable are typically underidentified when the instrument takes on too few values. This paper presents a new method that matches pairs of covariates and instruments to restore point identification in this scenario in a triangular model. The model consists of a structural function for a continuous outcome and a selection model for the discrete endogenous variable. The structural outcome function must be continuous and monotonic in a scalar disturbance, but it can be nonseparable. The selection model allows for unrestricted heterogeneity. Global identification is obtained under weak conditions. The paper also provides estimators of the structural outcome function. Two empirical examples of the return to education and selection into Head Start illustrate the value and limitations of the method.",Matching Points: Supplementing Instruments with Covariates in Triangular Models,2019-04-02 04:12:10,Junlong Feng,"http://arxiv.org/abs/1904.01159v3, http://arxiv.org/pdf/1904.01159v3",econ.EM 28946,em,"Empirical economists are often deterred from the application of fixed effects binary choice models mainly for two reasons: the incidental parameter problem and the computational challenge even in moderately large panels. Using the example of binary choice models with individual and time fixed effects, we show how both issues can be alleviated by combining asymptotic bias corrections with computational advances. Because unbalancedness is often encountered in applied work, we investigate its consequences on the finite sample properties of various (bias corrected) estimators. 
In simulation experiments we find that analytical bias corrections perform particularly well, whereas split-panel jackknife estimators can be severely biased in unbalanced panels.",Fixed Effects Binary Choice Models: Estimation and Inference with Long Panels,2019-04-08 20:38:31,"Daniel Czarnowske, Amrei Stammann","http://arxiv.org/abs/1904.04217v3, http://arxiv.org/pdf/1904.04217v3",econ.EM 28948,em,"This article proposes inference procedures for distribution regression models in duration analysis using randomly right-censored data. This generalizes classical duration models by allowing situations where explanatory variables' marginal effects freely vary with duration time. The article discusses applications to testing uniform restrictions on the varying coefficients, inferences on average marginal effects, and others involving conditional distribution estimates. Finite sample properties of the proposed method are studied by means of Monte Carlo experiments. Finally, we apply our proposal to study the effects of unemployment benefits on unemployment duration.",Distribution Regression in Duration Analysis: an Application to Unemployment Spells,2019-04-12 15:22:27,"Miguel A. Delgado, Andrés García-Suaza, Pedro H. C. Sant'Anna","http://arxiv.org/abs/1904.06185v2, http://arxiv.org/pdf/1904.06185v2",econ.EM 28949,em,"Internet finance is a new financial model that applies Internet technology to payment, capital borrowing and lending, and transaction processing. In order to study the internal risks, this paper uses Internet financial risk elements as network nodes to construct a complex network of the Internet financial risk system. Different from studies of macroeconomic shocks and financial institution data, this paper mainly adopts the perspective of complex systems to analyze the systemic risk of Internet finance. By dividing the entire financial system into an Internet financial subnet, a regulatory subnet and a traditional financial subnet, the paper discusses the contagion relationships among different risk factors, and concludes that risks are transmitted externally through the internal circulation of Internet finance, thus uncovering potential hidden dangers of systemic risk. The results show that the nodes around the center of the whole system are the main objects of financial risk contagion in the Internet financial network. In addition, macro-prudential regulation plays a decisive role in the control of the Internet financial system, and we point out the reasons why the current regulatory measures are still limited. This paper summarizes a research model which is still in its infancy, hoping to open up new prospects and directions for understanding the cascading behaviors of Internet financial risks.",Complex Network Construction of Internet Financial risk,2019-04-14 09:55:11,"Runjie Xu, Chuanmin Mi, Rafal Mierzwiak, Runyu Meng","http://dx.doi.org/10.1016/j.physa.2019.122930, http://arxiv.org/abs/1904.06640v3, http://arxiv.org/pdf/1904.06640v3",econ.EM 28950,em,"We develop a dynamic model of discrete choice that incorporates peer effects into random consideration sets. We characterize the equilibrium behavior and study the empirical content of the model. In our setup, changes in the choices of friends affect the distribution of the consideration sets. We exploit this variation to recover the ranking of preferences, attention mechanisms, and network connections.
These nonparametric identification results allow unrestricted heterogeneity across people and do not rely on the variation of either covariates or the set of available options. Our methodology leads to a maximum-likelihood estimator that performs well in simulations. We apply our results to an experimental dataset that has been designed to study the visual focus of attention.",Peer Effects in Random Consideration Sets,2019-04-14 22:15:07,"Nail Kashaev, Natalia Lazzati","http://arxiv.org/abs/1904.06742v3, http://arxiv.org/pdf/1904.06742v3",econ.EM 28951,em,"In multinomial response models, idiosyncratic variations in the indirect utility are generally modeled using Gumbel or normal distributions. This study makes a strong case for substituting these thin-tailed distributions with a t-distribution. First, we demonstrate that a model with a t-distributed error kernel better estimates and predicts preferences, especially in class-imbalanced datasets. Our proposed specification also implicitly accounts for decision-uncertainty behavior, i.e. the degree of certainty that decision-makers hold in their choices relative to the variation in the indirect utility of any alternative. Second, after applying a t-distributed error kernel in a multinomial response model for the first time, we extend this specification to a generalized continuous-multinomial (GCM) model and derive its full-information maximum likelihood estimator. The likelihood involves an open-form expression of the cumulative distribution function of the multivariate t-distribution, which we propose to compute using a combination of the composite marginal likelihood method and the separation-of-variables approach. Third, we establish finite sample properties of the GCM model with a t-distributed error kernel (GCM-t) and highlight its superiority over the GCM model with a normally-distributed error kernel (GCM-N) in a Monte Carlo study. Finally, we compare GCM-t and GCM-N in an empirical setting related to preferences for electric vehicles (EVs). We observe that accounting for decision-uncertainty behavior in GCM-t results in lower elasticity estimates and a higher willingness to pay for improving the EV attributes than those of the GCM-N model. These differences are relevant in making policies to expedite the adoption of EVs.",A Generalized Continuous-Multinomial Response Model with a t-distributed Error Kernel,2019-04-17 18:54:04,"Subodh Dubey, Prateek Bansal, Ricardo A. Daziano, Erick Guerra","http://arxiv.org/abs/1904.08332v3, http://arxiv.org/pdf/1904.08332v3",econ.EM 28952,em,"Currently, all countries, including developing countries, are expected to utilize their own tax revenues and carry out their own development in order to reduce poverty. However, developing countries cannot raise tax revenues as developed countries do, partly because they lack effective countermeasures against international tax avoidance. Among the various ways of conducting international tax avoidance, our analysis focuses on treaty shopping, because the tax revenues of developing countries have been heavily damaged by it. To analyze the locations and sectors of conduit firms likely to be used for treaty shopping, we constructed a multilayer ownership-tax network and proposed multilayer centrality. Because multilayer centrality can consider not only the value owing in the ownership network but also the withholding tax rate, it is expected to precisely capture the locations and sectors of conduit firms established for the purpose of treaty shopping.
Our analysis shows that firms in sectors such as Finance & Insurance and Wholesale & Retail Trade are involved in treaty shopping. We suggest that developing countries include a clause focusing on these sectors in the tax treaties they conclude.",Location-Sector Analysis of International Profit Shifting on a Multilayer Ownership-Tax Network,2019-04-19 15:30:34,"Tembo Nakamoto, Odile Rouhban, Yuichi Ikeda","http://arxiv.org/abs/1904.09165v1, http://arxiv.org/pdf/1904.09165v1",econ.EM 29010,em,"This paper considers generalized least squares (GLS) estimation for linear panel data models. By estimating the large error covariance matrix consistently, the proposed feasible GLS (FGLS) estimator is more efficient than the ordinary least squares (OLS) estimator in the presence of heteroskedasticity, serial, and cross-sectional correlations. To take into account the serial correlations, we employ the banding method. To take into account the cross-sectional correlations, we suggest using the thresholding method. We establish the limiting distribution of the proposed estimator. A Monte Carlo study is considered. The proposed method is illustrated in an empirical application.",Feasible Generalized Least Squares for Panel Data with Cross-sectional and Serial Correlations,2019-10-20 18:37:51,"Jushan Bai, Sung Hoon Choi, Yuan Liao","http://arxiv.org/abs/1910.09004v3, http://arxiv.org/pdf/1910.09004v3",econ.EM 28953,em,"We study identification in nonparametric regression models with a misclassified and endogenous binary regressor when an instrument is correlated with misclassification error. We show that the regression function is nonparametrically identified if one binary instrument variable and one binary covariate satisfy the following conditions. The instrumental variable corrects endogeneity; the instrumental variable must be correlated with the unobserved true underlying binary variable, must be uncorrelated with the error term in the outcome equation, but is allowed to be correlated with the misclassification error. The covariate corrects misclassification; this variable can be one of the regressors in the outcome equation, must be correlated with the unobserved true underlying binary variable, and must be uncorrelated with the misclassification error. We also propose a mixture-based framework for modeling unobserved heterogeneous treatment effects with a misclassified and endogenous binary regressor and show that treatment effects can be identified if the true treatment effect is related to an observed regressor and another observable variable.",Identification of Regression Models with a Misclassified and Endogenous Binary Regressor,2019-04-25 06:41:37,"Hiroyuki Kasahara, Katsumi Shimotsu","http://arxiv.org/abs/1904.11143v3, http://arxiv.org/pdf/1904.11143v3",econ.EM 28954,em,"In matched-pairs experiments in which one cluster per pair of clusters is assigned to treatment, to estimate treatment effects, researchers often regress their outcome on a treatment indicator and pair fixed effects, clustering standard errors at the unit-of-randomization level. We show that even if the treatment has no effect, a 5%-level t-test based on this regression will wrongly conclude that the treatment has an effect up to 16.5% of the time. To fix this problem, researchers should instead cluster standard errors at the pair level.
Using simulations, we show that similar results apply to clustered experiments with small strata.",At What Level Should One Cluster Standard Errors in Paired and Small-Strata Experiments?,2019-06-01 23:47:18,"Clément de Chaisemartin, Jaime Ramirez-Cuellar","http://arxiv.org/abs/1906.00288v10, http://arxiv.org/pdf/1906.00288v10",econ.EM 28955,em,"We propose the use of indirect inference estimation to conduct inference in complex locally stationary models. We develop a local indirect inference algorithm and establish the asymptotic properties of the proposed estimator. Due to the nonparametric nature of locally stationary models, the resulting indirect inference estimator exhibits nonparametric rates of convergence. We validate our methodology with simulation studies in the confines of a locally stationary moving average model and a new locally stationary multiplicative stochastic volatility model. Using this indirect inference methodology and the new locally stationary volatility model, we obtain evidence of non-linear, time-varying volatility trends for monthly returns on several Fama-French portfolios.",Indirect Inference for Locally Stationary Models,2019-06-05 03:41:13,"David Frazier, Bonsoo Koo","http://dx.doi.org/10.1016/S0304-4076/20/30303-1, http://arxiv.org/abs/1906.01768v2, http://arxiv.org/pdf/1906.01768v2",econ.EM 28956,em,"In a nonparametric instrumental regression model, we strengthen the conventional moment independence assumption towards full statistical independence between instrument and error term. This allows us to prove identification results and develop estimators for a structural function of interest when the instrument is discrete, and in particular binary. When the regressor of interest is also discrete with more mass points than the instrument, we state straightforward conditions under which the structural function is partially identified, and give modified assumptions which imply point identification. These stronger assumptions are shown to hold outside of a small set of conditional moments of the error term. Estimators for the identified set are given when the structural function is either partially or point identified. When the regressor is continuously distributed, we prove that if the instrument induces a sufficiently rich variation in the joint distribution of the regressor and error term then point identification of the structural function is still possible. This approach is relatively tractable, and under some standard conditions we demonstrate that our point identifying assumption holds on a topologically generic set of density functions for the joint distribution of regressor, error, and instrument. Our method also applies to a well-known nonparametric quantile regression framework, and we are able to state analogous point identification results in that context.","Nonparametric Identification and Estimation with Independent, Discrete Instruments",2019-06-12 19:05:52,Isaac Loh,"http://arxiv.org/abs/1906.05231v1, http://arxiv.org/pdf/1906.05231v1",econ.EM 28957,em,"We consider the asymptotic properties of the Synthetic Control (SC) estimator when both the number of pre-treatment periods and control units are large. If potential outcomes follow a linear factor model, we provide conditions under which the factor loadings of the SC unit converge in probability to the factor loadings of the treated unit. 
This happens when the weights are diluted among an increasing number of control units such that a weighted average of the factor loadings of the control units asymptotically reconstructs the factor loadings of the treated unit. In this case, the SC estimator is asymptotically unbiased even when treatment assignment is correlated with time-varying unobservables. This result can be valid even when the number of control units is larger than the number of pre-treatment periods.",On the Properties of the Synthetic Control Estimator with Many Periods and Many Controls,2019-06-16 15:26:28,Bruno Ferman,"http://arxiv.org/abs/1906.06665v5, http://arxiv.org/pdf/1906.06665v5",econ.EM 28958,em,"We study the association between physical appearance and family income using a novel dataset that includes 3-dimensional body scans to mitigate the issue of reporting errors and measurement errors observed in most previous studies. We apply machine learning to obtain intrinsic features of the human body and take into account the possible issue of endogenous body shapes. The estimation results show that there is a significant relationship between physical appearance and family income and that the associations differ across genders. This supports the hypothesis of a physical attractiveness premium and its heterogeneity across genders.",Shape Matters: Evidence from Machine Learning on Body Shape-Income Relationship,2019-06-16 21:42:22,"Suyong Song, Stephen S. Baek","http://dx.doi.org/10.1371/journal.pone.0254785, http://arxiv.org/abs/1906.06747v1, http://arxiv.org/pdf/1906.06747v1",econ.EM 29011,em,"This paper considers estimation of large dynamic factor models with common and idiosyncratic trends by means of the Expectation Maximization algorithm, implemented jointly with the Kalman smoother. We show that, as the cross-sectional dimension $n$ and the sample size $T$ diverge to infinity, the common component for a given unit estimated at a given point in time is $\min(\sqrt n,\sqrt T)$-consistent. The case of local levels and/or local linear trends is also considered. By means of a Monte Carlo simulation exercise, we compare our approach with estimators based on principal component analysis.",Quasi Maximum Likelihood Estimation of Non-Stationary Large Approximate Dynamic Factor Models,2019-10-22 12:00:06,"Matteo Barigozzi, Matteo Luciani","http://arxiv.org/abs/1910.09841v1, http://arxiv.org/pdf/1910.09841v1",econ.EM 28959,em,"This paper aims to examine the use of sparse methods to forecast the real, in the chain-linked volume sense, expenditure components of US and EU GDP in the short run, sooner than national statistical institutes officially release the data. We estimate current quarter nowcasts along with 1- and 2-quarter forecasts by bridging quarterly data with available monthly information announced with a much smaller delay. We solve the high-dimensionality problem of the monthly dataset by assuming sparse structures of leading indicators, capable of adequately explaining the dynamics of the analyzed data. For variable selection and estimation of the forecasts, we use sparse methods: LASSO together with its recent modifications. We propose an adjustment that combines LASSO cases with principal components analysis, which is deemed to improve the forecasting performance.
We evaluate forecasting performance by conducting pseudo-real-time experiments for gross fixed capital formation, private consumption, imports and exports over the 2005-2019 sample, compared with benchmark ARMA and factor models. The main results suggest that sparse methods can outperform the benchmarks and identify reasonable subsets of explanatory variables. The proposed LASSO-PC modification shows a further improvement in forecast accuracy.",Sparse structures with LASSO through Principal Components: forecasting GDP components in the short-run,2019-06-19 12:30:36,"Saulius Jokubaitis, Dmitrij Celov, Remigijus Leipus","http://dx.doi.org/10.1016/j.ijforecast.2020.09.005, http://arxiv.org/abs/1906.07992v2, http://arxiv.org/pdf/1906.07992v2",econ.EM 28960,em,"In 2018, allowance prices in the EU Emission Trading Scheme (EU ETS) experienced a run-up from persistently low levels in previous years. Regulators attribute this to a comprehensive reform in the same year, and are confident the new price level reflects an anticipated tighter supply of allowances. We ask if this is indeed the case, or if it is an overreaction of the market driven by speculation. We combine several econometric methods - time-varying coefficient regression, formal bubble detection as well as time stamping and crash odds prediction - to juxtapose the regulators' claim with the competing explanation. We find evidence of a long period of explosive behaviour in allowance prices, starting in March 2018 when the reform was adopted. Our results suggest that the reform triggered market participants into speculation, and question regulators' confidence in its long-term outcome. This has implications for both the further development of the EU ETS and the long-lasting debate about taxes versus emission trading schemes.",Understanding the explosive trend in EU ETS prices -- fundamentals or speculation?,2019-06-25 17:43:50,"Marina Friedrich, Sébastien Fries, Michael Pahle, Ottmar Edenhofer","http://arxiv.org/abs/1906.10572v5, http://arxiv.org/pdf/1906.10572v5",econ.EM 28961,em,"Many economic studies use shift-share instruments to estimate causal effects. Often, all shares need to fulfil an exclusion restriction, making the identifying assumption strict. This paper proposes to use methods that relax the exclusion restriction by selecting invalid shares. I apply the methods in two empirical examples: the effect of immigration on wages and of Chinese import exposure on employment. In the first application, the coefficient becomes lower and often changes sign, but this is reconcilable with arguments made in the literature. In the second application, the findings are mostly robust to the use of the new methods.",Relaxing the Exclusion Restriction in Shift-Share Instrumental Variable Estimation,2019-06-29 18:27:49,Nicolas Apfel,"http://arxiv.org/abs/1907.00222v4, http://arxiv.org/pdf/1907.00222v4",econ.EM 28962,em,"There is currently an increasing interest in large vector autoregressive (VAR) models. VARs are popular tools for macroeconomic forecasting, and the use of larger models has been demonstrated to often improve forecasting ability compared to more traditional small-scale models. Mixed-frequency VARs deal with data sampled at different frequencies while remaining within the realms of VARs. Estimation of mixed-frequency VARs makes use of simulation smoothing, but using the standard procedure these models quickly become prohibitive in nowcasting situations as the size of the model grows.
We propose two algorithms that improve the computational efficiency of the simulation smoothing algorithm. Our preferred choice is an adaptive algorithm, which augments the state vector as necessary to also sample monthly variables that are missing at the end of the sample. For large VARs, we find considerable improvements in speed using our adaptive algorithm. The algorithm therefore provides a crucial building block for bringing the mixed-frequency VARs to the high-dimensional regime.",Simulation smoothing for nowcasting with large mixed-frequency VARs,2019-07-02 00:08:21,"Sebastian Ankargren, Paulina Jonéus","http://arxiv.org/abs/1907.01075v1, http://arxiv.org/pdf/1907.01075v1",econ.EM 28963,em,"We propose a robust method of discrete choice analysis when agents' choice sets are unobserved. Our core model assumes nothing about agents' choice sets apart from their minimum size. Importantly, it leaves unrestricted the dependence, conditional on observables, between choice sets and preferences. We first characterize the sharp identification region of the model's parameters by a finite set of conditional moment inequalities. We then apply our theoretical findings to learn about households' risk preferences and choice sets from data on their deductible choices in auto collision insurance. We find that the data can be explained by expected utility theory with low levels of risk aversion and heterogeneous non-singleton choice sets, and that more than three in four households require limited choice sets to explain their deductible choices. We also provide simulation evidence on the computational tractability of our method in applications with larger feasible sets or higher-dimensional unobserved heterogeneity.",Heterogeneous Choice Sets and Preferences,2019-07-04 14:47:26,"Levon Barseghyan, Maura Coughlin, Francesca Molinari, Joshua C. Teitelbaum","http://arxiv.org/abs/1907.02337v2, http://arxiv.org/pdf/1907.02337v2",econ.EM 28964,em,"In this paper we develop a new machine learning estimator for ordered choice models based on the random forest. The proposed Ordered Forest flexibly estimates the conditional choice probabilities while taking the ordering information explicitly into account. Unlike common machine learning estimators, it enables the estimation of marginal effects as well as inference and thus provides the same output as classical econometric estimators. An extensive simulation study reveals a good predictive performance, particularly in settings with non-linearities and near-multicollinearity. An empirical application contrasts the estimation of marginal effects and their standard errors with an ordered logit model. A software implementation of the Ordered Forest is provided both in R and Python in the package orf available on CRAN and PyPI, respectively.",Random Forest Estimation of the Ordered Choice Model,2019-07-04 17:54:58,"Michael Lechner, Gabriel Okasa","http://arxiv.org/abs/1907.02436v3, http://arxiv.org/pdf/1907.02436v3",econ.EM 28966,em,"This paper provides tests for detecting sample selection in nonparametric conditional quantile functions. The first test is an omitted predictor test with the propensity score as the omitted variable. As with any omnibus test, in the case of rejection we cannot distinguish between rejection due to genuine selection and rejection due to misspecification. Thus, we suggest a second test to provide supporting evidence on whether the rejection at the first stage was solely due to selection or not.
Using only individuals with propensity score close to one, this second test relies on an `identification at infinity' argument, but accommodates cases of irregular identification. Importantly, neither of the two tests requires parametric assumptions on the selection equation or a continuous exclusion restriction. Data-driven bandwidth procedures are proposed, and Monte Carlo evidence suggests a good finite sample performance in particular of the first test. Finally, we also derive an extension of the first test to nonparametric conditional mean functions, and apply our procedure to test for selection in log hourly wages using UK Family Expenditure Survey data, as in \citet{AB2017}.",Testing for Quantile Sample Selection,2019-07-17 12:39:39,"Valentina Corradi, Daniel Gutknecht","http://arxiv.org/abs/1907.07412v5, http://arxiv.org/pdf/1907.07412v5",econ.EM 28967,em,"Clustering methods such as k-means have found widespread use in a variety of applications. This paper proposes a formal testing procedure to determine whether a null hypothesis of a single cluster, indicating homogeneity of the data, can be rejected in favor of multiple clusters. The test is simple to implement, valid under relatively mild conditions (including non-normality and heterogeneity of the data in aspects beyond those in the clustering analysis), and applicable in a range of contexts (including clustering when the time series dimension is small, or clustering on parameters other than the mean). We verify that the test has good size control in finite samples, and we illustrate the test in applications to clustering vehicle manufacturers and U.S. mutual funds.",Testing for Unobserved Heterogeneity via k-means Clustering,2019-07-17 18:28:24,"Andrew J. Patton, Brian M. Weller","http://arxiv.org/abs/1907.07582v1, http://arxiv.org/pdf/1907.07582v1",econ.EM 28968,em,"Despite its critical importance, the famous X-model elaborated by Ziel and Steinert (2016) has neither been widely studied nor further developed. And yet, the possibilities to improve the model are as numerous as the fields it can be applied to. The present paper takes advantage of a technique proposed by Coulon et al. (2014) to enhance the X-model. Instead of using the wholesale supply and demand curves as inputs for the model, we rely on the transformed versions of these curves with a perfectly inelastic demand. As a result, the computational requirements of our X-model are reduced and its forecasting power increases substantially. Moreover, our X-model becomes more robust to outliers present in the initial auction curve data.",X-model: further development and possible modifications,2019-07-22 12:59:08,Sergei Kulakov,"http://arxiv.org/abs/1907.09206v1, http://arxiv.org/pdf/1907.09206v1",econ.EM 28969,em,"In their IZA Discussion Paper 10247, Johansson and Lee claim that the main result (Proposition 3) in Abbring and Van den Berg (2003b) does not hold. We show that their claim is incorrect. At a certain point within their line of reasoning, they make a rather basic error while transforming one random variable into another random variable, and this leads them to draw incorrect conclusions. As a result, their paper can be discarded.","Rebuttal of ""On Nonparametric Identification of Treatment Effects in Duration Models""",2019-07-20 12:18:44,"Jaap H. Abbring, Gerard J.
van den Berg","http://arxiv.org/abs/1907.09886v1, http://arxiv.org/pdf/1907.09886v1",econ.EM 28970,em,"This study examines the statistical performance of tests for time-varying properties under misspecified conditional mean and variance. When we test for time-varying properties of the conditional mean in the case in which data have no time-varying mean but have time-varying variance, asymptotic tests have size distortions. This is improved by the use of a bootstrap method. Similarly, when we test for time-varying properties of the conditional variance in the case in which data have time-varying mean but no time-varying variance, asymptotic tests have large size distortions. This is not improved even by the use of bootstrap methods. We show that tests for time-varying properties of the conditional mean by the bootstrap are robust regardless of the time-varying variance model, whereas tests for time-varying properties of the conditional variance do not perform well in the presence of a misspecified time-varying mean.",Testing for time-varying properties under misspecified conditional mean and variance,2019-07-28 19:47:10,"Daiki Maki, Yasushi Ota","http://arxiv.org/abs/1907.12107v2, http://arxiv.org/pdf/1907.12107v2",econ.EM 28971,em,"This study compares statistical properties of ARCH tests that are robust to the presence of the misspecified conditional mean. The approaches employed in this study are based on two nonparametric regressions for the conditional mean. The first is the ARCH test using Nadaraya-Watson kernel regression. The second is the ARCH test using the polynomial approximation regression. The two approaches do not require specification of the conditional mean and can adapt to various nonlinear models, which are unknown a priori. Accordingly, they are robust to misspecified conditional mean models. Simulation results show that ARCH tests based on the polynomial approximation regression approach have better statistical properties than ARCH tests using the Nadaraya-Watson kernel regression approach for various nonlinear models.",Robust tests for ARCH in the presence of the misspecified conditional mean: A comparison of nonparametric approaches,2019-07-30 09:19:18,"Daiki Maki, Yasushi Ota","http://arxiv.org/abs/1907.12752v2, http://arxiv.org/pdf/1907.12752v2",econ.EM 28972,em,"This paper provides a necessary and sufficient instruments condition assuring that two-step generalized method of moments (GMM) based on the forward orthogonal deviations transformation is numerically equivalent to two-step GMM based on the first-difference transformation. The condition also tells us when system GMM, based on differencing, can be computed using forward orthogonal deviations. Additionally, it tells us when forward orthogonal deviations and differencing do not lead to the same GMM estimator. When estimators based on these two transformations differ, Monte Carlo simulations indicate that estimators based on forward orthogonal deviations have better finite sample properties than estimators based on differencing.",A Comparison of First-Difference and Forward Orthogonal Deviations GMM,2019-07-30 16:19:35,Robert F. Phillips,"http://arxiv.org/abs/1907.12880v1, http://arxiv.org/pdf/1907.12880v1",econ.EM 29012,em,"This paper introduces a version of the interdependent value model of Milgrom and Weber (1982), where the signals are given by an index gathering signal shifters observed by the econometrician and private ones specific to each bidder.
The model primitives are shown to be nonparametrically identified from first-price auction bids under a mild, testable rank condition. Identification holds for all possible signal values. This allows one to consider a wide range of counterfactuals for which this is important, such as the expected revenue in a second-price auction. An estimation procedure is briefly discussed.",Nonparametric identification of an interdependent value model with buyer covariates from first-price auction bids,2019-10-23 19:12:17,"Nathalie Gimenes, Emmanuel Guerre","http://arxiv.org/abs/1910.10646v1, http://arxiv.org/pdf/1910.10646v1",econ.EM 28973,em,"Given the unconfoundedness assumption, we propose new nonparametric estimators for the reduced dimensional conditional average treatment effect (CATE) function. In the first stage, the nuisance functions necessary for identifying CATE are estimated by machine learning methods, allowing the number of covariates to be comparable to or larger than the sample size. The second stage consists of a low-dimensional local linear regression, reducing CATE to a function of the covariate(s) of interest. We consider two variants of the estimator depending on whether the nuisance functions are estimated over the full sample or over a hold-out sample. Building on Belloni et al. (2017) and Chernozhukov et al. (2018), we derive functional limit theory for the estimators and provide an easy-to-implement procedure for uniform inference based on the multiplier bootstrap. The empirical application revisits the effect of maternal smoking on a baby's birth weight as a function of the mother's age.",Estimation of Conditional Average Treatment Effects with High-Dimensional Data,2019-08-07 02:40:47,"Qingliang Fan, Yu-Chin Hsu, Robert P. Lieli, Yichong Zhang","http://arxiv.org/abs/1908.02399v5, http://arxiv.org/pdf/1908.02399v5",econ.EM 28974,em,"We consider nonparametric identification of independent private value first-price auction models, in which the analyst only observes winning bids. Our benchmark model assumes an exogenous number of bidders N. We show that, if the bidders observe N, the resulting discontinuities in the winning bid density can be used to identify the distribution of N. The private value distribution can be nonparametrically identified in a second step. This extends, under testable identification conditions, to the case where N is a number of potential buyers, who bid with some unknown probability. Identification also holds in the presence of additive unobserved heterogeneity drawn from some parametric distributions. A last class of extensions deals with cartels which can change size across auctions due to varying bidder cartel membership. Identification still holds if the econometrician observes winner identities and winning bids, provided an (unknown) bidder is always a cartel member. The cartel participation probabilities of other bidders can also be identified. An application to USFS timber auction data illustrates the usefulness of discontinuities to analyze bidder participation.",Nonparametric Identification of First-Price Auction with Unobserved Competition: A Density Discontinuity Framework,2019-08-15 13:06:05,"Emmanuel Guerre, Yao Luo","http://arxiv.org/abs/1908.05476v2, http://arxiv.org/pdf/1908.05476v2",econ.EM 28975,em,"Establishing that a demand mapping is injective is a core first step for a variety of methodologies. When a version of the law of demand holds, global injectivity can be checked by seeing whether the demand mapping is constant over any line segments.
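The two-stage CATE estimator summarized above (machine-learning nuisance estimates followed by a low-dimensional local linear regression) can be sketched as follows. This is a generic cross-fitted AIPW construction with a Gaussian-kernel local linear second stage, under assumed function names and sklearn learners; the paper's exact estimator, bandwidth choices, and multiplier-bootstrap inference are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def dr_pseudo_outcomes(y, d, X, n_splits=2, seed=0):
    """Cross-fitted doubly robust (AIPW) pseudo-outcomes for the treatment effect."""
    psi = np.empty(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        ps = GradientBoostingClassifier().fit(X[train], d[train]).predict_proba(X[test])[:, 1]
        ps = np.clip(ps, 0.01, 0.99)                       # guard against extreme propensities
        mu1 = GradientBoostingRegressor().fit(X[train][d[train] == 1], y[train][d[train] == 1]).predict(X[test])
        mu0 = GradientBoostingRegressor().fit(X[train][d[train] == 0], y[train][d[train] == 0]).predict(X[test])
        psi[test] = (mu1 - mu0
                     + d[test] * (y[test] - mu1) / ps
                     - (1 - d[test]) * (y[test] - mu0) / (1 - ps))
    return psi

def local_linear_cate(psi, z, grid, bandwidth):
    """Second stage: local linear regression of pseudo-outcomes on a scalar covariate z."""
    estimates = []
    for z0 in grid:
        w = np.sqrt(np.exp(-0.5 * ((z - z0) / bandwidth) ** 2))   # Gaussian kernel weights
        Zd = np.column_stack([np.ones_like(z), z - z0])           # intercept = CATE at z0
        beta, *_ = np.linalg.lstsq(Zd * w[:, None], psi * w, rcond=None)
        estimates.append(beta[0])
    return np.array(estimates)
```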
When we add the assumption of differentiability, we obtain necessary and sufficient conditions for injectivity that generalize classical \cite{gale1965jacobian} conditions for quasi-definite Jacobians.",Injectivity and the Law of Demand,2019-08-15 22:13:43,Roy Allen,"http://arxiv.org/abs/1908.05714v1, http://arxiv.org/pdf/1908.05714v1",econ.EM 28976,em,"Policy evaluation is central to economic data analysis, but economists mostly work with observational data in view of limited opportunities to carry out controlled experiments. In the potential outcome framework, the panel data approach (Hsiao, Ching and Wan, 2012) constructs the counterfactual by exploiting the correlation between cross-sectional units in panel data. The choice of cross-sectional control units, a key step in its implementation, is nevertheless unresolved in a data-rich environment where many possible controls are at the researcher's disposal. We propose the forward selection method to choose control units, and establish validity of the post-selection inference. Our asymptotic framework allows the number of possible controls to grow much faster than the time dimension. The easy-to-implement algorithms and their theoretical guarantee extend the panel data approach to big data settings.",Forward-Selected Panel Data Approach for Program Evaluation,2019-08-16 12:00:57,"Zhentao Shi, Jingyi Huang","http://arxiv.org/abs/1908.05894v3, http://arxiv.org/pdf/1908.05894v3",econ.EM 28977,em,"A family of models of individual discrete choice is constructed by means of statistical averaging of choices made by a subject in a reinforcement learning process, where the subject has a short, k-term memory span. The choice probabilities in these models combine in a non-trivial, non-linear way the initial learning bias and the experience gained through learning. The properties of such models are discussed and, in particular, it is shown that probabilities deviate from Luce's Choice Axiom, even if the initial bias adheres to it. Moreover, we show that the latter property is recovered as the memory span becomes large. Two applications in utility theory are considered. In the first, we use the discrete choice model to generate a binary preference relation on simple lotteries. We show that the preferences violate transitivity and independence axioms of expected utility theory. Furthermore, we establish the dependence of the preferences on frames, with risk aversion for gains, and risk seeking for losses. Based on these findings we next propose a parametric model of choice based on the probability maximization principle, as a model for deviations from the expected utility principle. To illustrate the approach we apply it to the classical problem of demand for insurance.",A model of discrete choice based on reinforcement learning under short-term memory,2019-08-16 22:15:33,Misha Perepelitsa,"http://arxiv.org/abs/1908.06133v1, http://arxiv.org/pdf/1908.06133v1",econ.EM 28978,em,"We propose a new finite sample corrected variance estimator for the linear generalized method of moments (GMM) including the one-step, two-step, and iterated estimators. Our formula additionally corrects for the over-identification bias in variance estimation on top of the commonly used finite sample correction of Windmeijer (2005), which corrects for the bias from estimating the efficient weight matrix, and so is doubly corrected. An important feature of the proposed double correction is that it automatically provides robustness to misspecification of the moment condition.
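A minimal sketch of the forward-selection idea for choosing control units in the panel data approach described above: greedily add the control series that most reduces the pre-treatment sum of squared residuals. The stopping rule, post-selection inference, and asymptotic theory of the paper are not implemented; function and variable names are hypothetical.

```python
import numpy as np

def forward_select_controls(y_pre, X_pre, max_controls):
    """Greedy forward selection of control units for counterfactual construction.

    y_pre : (T0,) pre-treatment outcomes of the treated unit
    X_pre : (T0, J) pre-treatment outcomes of the J candidate control units
    Returns the indices of the selected controls, in the order chosen.
    """
    selected = []
    remaining = list(range(X_pre.shape[1]))
    for _ in range(max_controls):
        best_j, best_ssr = None, np.inf
        for j in remaining:
            # regression of the treated unit on an intercept and the candidate set
            Z = np.column_stack([np.ones(len(y_pre))] + [X_pre[:, k] for k in selected + [j]])
            resid = y_pre - Z @ np.linalg.lstsq(Z, y_pre, rcond=None)[0]
            ssr = resid @ resid
            if ssr < best_ssr:
                best_j, best_ssr = j, ssr
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```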
In contrast, the conventional variance estimator and the Windmeijer correction are inconsistent under misspecification. That is, the proposed double correction formula provides a convenient way to obtain improved inference under correct specification and robustness against misspecification at the same time.",A Doubly Corrected Robust Variance Estimator for Linear GMM,2019-08-21 15:41:08,"Jungbin Hwang, Byunghoon Kang, Seojeong Lee","http://arxiv.org/abs/1908.07821v2, http://arxiv.org/pdf/1908.07821v2",econ.EM 29013,em,"This paper deals with time-varying high-dimensional covariance matrix estimation. We propose two covariance matrix estimators corresponding to a time-varying approximate factor model and a time-varying approximate characteristic-based factor model, respectively. The models allow the factor loadings, factor covariance matrix, and error covariance matrix to change smoothly over time. We study the rate of convergence of each estimator. Our simulation and empirical study indicate that time-varying covariance matrix estimators generally perform better than time-invariant covariance matrix estimators. Also, if characteristics are available that genuinely explain true loadings, the characteristics can be used to estimate loadings more precisely in finite samples; their helpfulness increases when loadings rapidly change.",Estimating a Large Covariance Matrix in Time-varying Factor Models,2019-10-26 03:08:24,Jaeheon Jung,"http://arxiv.org/abs/1910.11965v1, http://arxiv.org/pdf/1910.11965v1",econ.EM 28979,em,"This paper considers the practically important case of nonparametrically estimating heterogeneous average treatment effects that vary with a limited number of discrete and continuous covariates in a selection-on-observables framework where the number of possible confounders is very large. We propose a two-step estimator for which the first step is estimated by machine learning. We show that this estimator has desirable statistical properties like consistency, asymptotic normality and rate double robustness. In particular, we derive the coupled convergence conditions between the nonparametric and the machine learning steps. We also show that estimating population average treatment effects by averaging the estimated heterogeneous effects is semi-parametrically efficient. The new estimator is illustrated with an empirical example on the effects of mothers' smoking during pregnancy on the resulting birth weight.",Nonparametric estimation of causal heterogeneity under high-dimensional confounding,2019-08-23 15:18:37,"Michael Zimmert, Michael Lechner","http://arxiv.org/abs/1908.08779v1, http://arxiv.org/pdf/1908.08779v1",econ.EM 28980,em,"The literature on stochastic programming typically restricts attention to problems that fulfill constraint qualifications. The literature on estimation and inference under partial identification frequently restricts the geometry of identified sets with diverse high-level assumptions. These superficially appear to be different approaches to closely related problems. We extensively analyze their relation. Among other things, we show that for partial identification through pure moment inequalities, numerous assumptions from the literature essentially coincide with the Mangasarian-Fromowitz constraint qualification.
This clarifies the relation between well-known contributions, including within econometrics, and elucidates the stringency, as well as ease of verification, of some high-level assumptions in seminal papers.",Constraint Qualifications in Partial Identification,2019-08-24 10:34:43,"Hiroaki Kaido, Francesca Molinari, Jörg Stoye","http://dx.doi.org/10.1017/S0266466621000207, http://arxiv.org/abs/1908.09103v4, http://arxiv.org/pdf/1908.09103v4",econ.EM 28981,em,"We develop a new extreme value theory for repeated cross-sectional and panel data to construct asymptotically valid confidence intervals (CIs) for conditional extremal quantiles from a fixed number $k$ of nearest-neighbor tail observations. As a by-product, we also construct CIs for extremal quantiles of coefficients in linear random coefficient models. For any fixed $k$, the CIs are uniformly valid without parametric assumptions over a set of nonparametric data generating processes associated with various tail indices. Simulation studies show that our CIs exhibit superior small-sample coverage and length properties relative to alternative nonparametric methods based on asymptotic normality. Applying the proposed method to Natality Vital Statistics, we study factors of extremely low birth weights. We find that signs of major effects are the same as those found in preceding studies based on parametric models, but with different magnitudes.",Fixed-k Inference for Conditional Extremal Quantiles,2019-09-01 01:39:33,"Yuya Sasaki, Yulong Wang","http://arxiv.org/abs/1909.00294v3, http://arxiv.org/pdf/1909.00294v3",econ.EM 28982,em,"We study the incidental parameter problem for the ``three-way'' Poisson Pseudo-Maximum Likelihood (``PPML'') estimator recently recommended for identifying the effects of trade policies and in other panel data gravity settings. Despite the number and variety of fixed effects involved, we confirm PPML is consistent for fixed $T$ and we show it is in fact the only estimator among a wide range of PML gravity estimators that is generally consistent in this context when $T$ is fixed. At the same time, asymptotic confidence intervals in fixed-$T$ panels are not correctly centered at the true point estimates, and cluster-robust variance estimates used to construct standard errors are generally biased as well. We characterize each of these biases analytically and show both numerically and empirically that they are salient even for real-data settings with a large number of countries. We also offer practical remedies that can be used to obtain more reliable inferences of the effects of trade policies and other time-varying gravity variables, which we make available via an accompanying Stata package called ppml_fe_bias.",Bias and Consistency in Three-way Gravity Models,2019-09-03 20:54:06,"Martin Weidner, Thomas Zylkin","http://arxiv.org/abs/1909.01327v6, http://arxiv.org/pdf/1909.01327v6",econ.EM 28983,em,"We analyze the challenges for inference in difference-in-differences (DID) when there is spatial correlation. We present novel theoretical insights and empirical evidence on the settings in which ignoring spatial correlation should lead to larger or smaller distortions in DID applications. We show that details such as the time frame used in the estimation, the choice of the treated and control groups, and the choice of the estimator, are key determinants of distortions due to spatial correlation. We also analyze the feasibility and trade-offs involved in a series of alternatives to take spatial correlation into account.
Given that, we provide relevant recommendations for applied researchers on how to mitigate and assess the possibility of inference distortions due to spatial correlation.",Inference in Difference-in-Differences: How Much Should We Trust in Independent Clusters?,2019-09-04 16:19:25,Bruno Ferman,"http://arxiv.org/abs/1909.01782v7, http://arxiv.org/pdf/1909.01782v7",econ.EM 28984,em,"This paper explores the estimation of a panel data model with cross-sectional interaction that is flexible both in its approach to specifying the network of connections between cross-sectional units, and in controlling for unobserved heterogeneity. It is assumed that there are different sources of information available on a network, which can be represented in the form of multiple weights matrices. These matrices may reflect observed links, different measures of connectivity, groupings or other network structures, and the number of matrices may be increasing with sample size. A penalised quasi-maximum likelihood estimator is proposed which aims to alleviate the risk of network misspecification by shrinking the coefficients of irrelevant weights matrices to exactly zero. Moreover, controlling for unobserved factors in estimation provides a safeguard against the misspecification that might arise from unobserved heterogeneity. The asymptotic properties of the estimator are derived in a framework where the true value of each parameter remains fixed as the total number of parameters increases. A Monte Carlo simulation is used to assess finite sample performance, and in an empirical application the method is applied to study the prevalence of network spillovers in determining growth rates across countries.",Shrinkage Estimation of Network Spillovers with Factor Structured Errors,2019-09-06 14:28:41,"Ayden Higgins, Federico Martellosio","http://arxiv.org/abs/1909.02823v4, http://arxiv.org/pdf/1909.02823v4",econ.EM 29254,em,"We consider the problem of inference in Difference-in-Differences (DID) when there are few treated units and errors are spatially correlated. We first show that, when there is a single treated unit, some existing inference methods designed for settings with few treated and many control units remain asymptotically valid when errors are weakly dependent. However, these methods may be invalid with more than one treated unit. We propose alternatives that are asymptotically valid in this setting, even when the relevant distance metric across units is unavailable.",Inference in Difference-in-Differences with Few Treated Units and Spatial Correlation,2020-06-30 20:58:43,"Luis Alvarez, Bruno Ferman","http://arxiv.org/abs/2006.16997v7, http://arxiv.org/pdf/2006.16997v7",econ.EM 28985,em,"The Economy Watcher Survey, which is a market survey published by the Japanese government, contains \emph{assessments of current and future economic conditions} by people from various fields. Although this survey provides insights regarding economic policy for policymakers, a clear definition of the word ""future"" in future economic conditions is not provided. Hence, the assessments respondents provide in the survey are simply based on their interpretations of the meaning of ""future."" This motivated us to reveal the different interpretations of the future in their judgments of future economic conditions by applying weakly supervised learning and text mining. 
In our research, we separate the assessments of future economic conditions into economic conditions of the near and distant future using learning from positive and unlabeled data (PU learning). Because the dataset includes data from several periods, we devised a new architecture, based on the idea of multi-task learning, that enables neural networks to conduct PU learning and efficiently learn a classifier. Our empirical analysis confirmed that the proposed method could separate the future economic conditions, and we interpreted the classification results to obtain intuitions for policymaking.",Identifying Different Definitions of Future in the Assessment of Future Economic Conditions: Application of PU Learning and Text Mining,2019-09-08 02:13:46,Masahiro Kato,"http://arxiv.org/abs/1909.03348v3, http://arxiv.org/pdf/1909.03348v3",econ.EM 28986,em,"This paper investigates double/debiased machine learning (DML) under multiway clustered sampling environments. We propose a novel multiway cross fitting algorithm and a multiway DML estimator based on this algorithm. We also develop a multiway cluster robust standard error formula. Simulations indicate that the proposed procedure has favorable finite sample performance. Applying the proposed method to market share data for demand analysis, we obtain larger two-way cluster robust standard errors than non-robust ones.",Multiway Cluster Robust Double/Debiased Machine Learning,2019-09-08 19:03:37,"Harold D. Chiang, Kengo Kato, Yukun Ma, Yuya Sasaki","http://arxiv.org/abs/1909.03489v3, http://arxiv.org/pdf/1909.03489v3",econ.EM 28987,em,"A desire to understand the decision of the UK to leave the European Union, Brexit, in the referendum of June 2016 has continued to occupy academics, the media and politicians. Using the topological data analysis ball mapper, we extract information from multi-dimensional datasets gathered on Brexit voting and regional socio-economic characteristics. While we find broad patterns consistent with extant empirical work, we also find evidence that support for Leave drew from a far more homogeneous demographic than Remain. Obtaining votes from this concise set was more straightforward for Leave campaigners than was Remain's task of mobilising a diverse group to oppose Brexit.",An Economic Topology of the Brexit vote,2019-09-08 19:05:40,"Pawel Dlotko, Lucy Minford, Simon Rudkin, Wanling Qiu","http://arxiv.org/abs/1909.03490v2, http://arxiv.org/pdf/1909.03490v2",econ.EM 28988,em,"We recast synthetic controls for evaluating policies as a counterfactual prediction problem and replace the linear regression with a nonparametric model inspired by machine learning. The proposed method enables us to achieve accurate counterfactual predictions and we provide theoretical guarantees. We apply our method to a highly debated policy: the relocation of the US embassy to Jerusalem. In Israel and Palestine, we find that the average number of weekly conflicts has increased by roughly 103\% over 48 weeks since the relocation was announced on December 6, 2017. By using conformal inference and placebo tests, we justify our model and find the increase to be statistically significant.",Tree-based Synthetic Control Methods: Consequences of moving the US Embassy,2019-09-09 19:15:03,"Nicolaj Søndergaard Mühlbach, Mikkel Slot Nielsen","http://arxiv.org/abs/1909.03968v3, http://arxiv.org/pdf/1909.03968v3",econ.EM 28989,em,"We analyze the properties of matching estimators when there are few treated, but many control observations.
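The "synthetic control as counterfactual prediction" idea described above can be illustrated with a small sketch: fit a nonparametric learner that maps control-unit outcomes to the treated unit's outcome in the pre-treatment window, then extrapolate to the post-treatment window. A random forest stands in for the paper's tree-based model; conformal inference and placebo tests are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_counterfactual(y_treated, Y_controls, T0):
    """Counterfactual prediction view of synthetic control with a tree ensemble.

    y_treated  : (T,)  outcome path of the treated unit
    Y_controls : (T, J) outcome paths of the control units
    T0         : number of pre-treatment periods
    Returns the predicted no-treatment path for t >= T0 and the estimated effects.
    """
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(Y_controls[:T0], y_treated[:T0])          # learn treated ~ controls pre-treatment
    y_hat = model.predict(Y_controls[T0:])              # extrapolate to the post period
    effect = y_treated[T0:] - y_hat                     # gap = estimated treatment effect
    return y_hat, effect
```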
We show that, under standard assumptions, the nearest neighbor matching estimator for the average treatment effect on the treated is asymptotically unbiased in this framework. However, when the number of treated observations is fixed, the estimator is not consistent, and it is generally not asymptotically normal. Since standard inference methods are inadequate, we propose alternative inference methods, based on the theory of randomization tests under approximate symmetry, that are asymptotically valid in this framework. We show that these tests are valid under relatively strong assumptions when the number of treated observations is fixed, and under weaker assumptions when the number of treated observations increases, but at a lower rate relative to the number of control observations.",Matching Estimators with Few Treated and Many Control Observations,2019-09-11 17:49:03,Bruno Ferman,"http://arxiv.org/abs/1909.05093v4, http://arxiv.org/pdf/1909.05093v4",econ.EM 28990,em,"The paper proposes a quantile-regression inference framework for first-price auctions with symmetric risk-neutral bidders under the independent private-value paradigm. It is first shown that a private-value quantile regression generates a quantile regression for the bids. The private-value quantile regression can be easily estimated from the bid quantile regression and its derivative with respect to the quantile level. This also allows one to test various specification or exogeneity null hypotheses using the observed bids in a simple way. A new local polynomial technique is proposed to estimate the latter over the whole quantile level interval. Plug-in estimation of functionals is also considered, as needed for the expected revenue or the case of CRRA risk-averse bidders, which is amenable to our framework. A quantile-regression analysis of USFS timber auctions is found to be more appropriate than the homogenized-bid methodology and illustrates the contribution of each explanatory variable to the private-value distribution. Linear interactive sieve extensions are proposed and studied in the Appendices.",Quantile regression methods for first-price auctions,2019-09-12 13:05:37,"Nathalie Gimenes, Emmanuel Guerre","http://arxiv.org/abs/1909.05542v2, http://arxiv.org/pdf/1909.05542v2",econ.EM 28991,em,"This paper develops a consistent series-based specification test for semiparametric panel data models with fixed effects. The test statistic resembles the Lagrange Multiplier (LM) test statistic in parametric models and is based on a quadratic form in the restricted model residuals. The use of series methods facilitates both estimation of the null model and computation of the test statistic. The asymptotic distribution of the test statistic is standard normal, so that appropriate critical values can easily be computed. The projection property of series estimators allows me to develop a degrees of freedom correction. This correction makes it possible to account for the estimation variance and obtain refined asymptotic results. It also substantially improves the finite sample performance of the test.",A Consistent LM Type Specification Test for Semiparametric Panel Data Models,2019-09-12 16:42:16,Ivan Korolev,"http://arxiv.org/abs/1909.05649v1, http://arxiv.org/pdf/1909.05649v1",econ.EM 28992,em,"One simple, and often very effective, way to attenuate the impact of nuisance parameters on maximum likelihood estimation of a parameter of interest is to recenter the profile score for that parameter.
We apply this general principle to the quasi-maximum likelihood estimator (QMLE) of the autoregressive parameter $\lambda$ in a spatial autoregression. The resulting estimator for $\lambda$ has better finite sample properties compared to the QMLE for $\lambda$, especially in the presence of a large number of covariates. It can also solve the incidental parameter problem that arises, for example, in social interaction models with network fixed effects, or in spatial panel models with individual or time fixed effects. However, spatial autoregressions present specific challenges for this type of adjustment, because recentering the profile score may cause the adjusted estimate to be outside the usual parameter space for $\lambda$. Conditions for this to happen are given, and implications are discussed. For inference, we propose confidence intervals based on a Lugannani--Rice approximation to the distribution of the adjusted QMLE of $\lambda$. Based on our simulations, the coverage properties of these intervals are excellent even in models with a large number of covariates.",Adjusted QMLE for the spatial autoregressive parameter,2019-09-18 02:23:50,"Federico Martellosio, Grant Hillier","http://arxiv.org/abs/1909.08141v1, http://arxiv.org/pdf/1909.08141v1",econ.EM 28993,em,"This paper investigates and extends the computationally attractive nonparametric random coefficients estimator of Fox, Kim, Ryan, and Bajari (2011). We show that their estimator is a special case of the nonnegative LASSO, explaining its sparse nature observed in many applications. Recognizing this link, we extend the estimator, transforming it to a special case of the nonnegative elastic net. The extension improves the estimator's recovery of the true support and allows for more accurate estimates of the random coefficients' distribution. Our estimator is a generalization of the original estimator and is therefore guaranteed to have a model fit at least as good as the original one. A theoretical analysis of both estimators' properties shows that, under conditions, our generalized estimator approximates the true distribution more accurately. Two Monte Carlo experiments and an application to a travel mode data set illustrate the improved performance of the generalized estimator.",Nonparametric Estimation of the Random Coefficients Model: An Elastic Net Approach,2019-09-18 16:22:28,"Florian Heiss, Stephan Hetzenecker, Maximilian Osterhaus","http://arxiv.org/abs/1909.08434v2, http://arxiv.org/pdf/1909.08434v2",econ.EM 28994,em,"In this paper, we study a statistical model for panel data with unobservable grouped factor structures that are correlated with the regressors, where the group membership can be unknown. The factor loadings are assumed to lie in different subspaces, and subspace clustering of the factor loadings is considered. A method called the least squares subspace clustering estimate (LSSC) is proposed to estimate the model parameters by minimizing the least-squares criterion and to perform the subspace clustering simultaneously. The consistency of the proposed subspace clustering is proved and the asymptotic properties of the estimation procedure are studied under certain conditions. A Monte Carlo simulation study is used to illustrate the advantages of the proposed method. Further considerations for situations in which the number of subspaces for the factors, the dimension of the factors and the dimension of the subspaces are unknown are also discussed.
For illustrative purposes, the proposed method is applied to study the linkage between income and democracy across countries while subspace patterns of unobserved factors and factor loadings are allowed.",Subspace Clustering for Panel Data with Interactive Effects,2019-09-22 04:51:11,"Jiangtao Duan, Wei Gao, Hao Qu, Hon Keung Tony","http://arxiv.org/abs/1909.09928v2, http://arxiv.org/pdf/1909.09928v2",econ.EM 28995,em,"We show that moment inequalities in a wide variety of economic applications have a particular linear conditional structure. We use this structure to construct uniformly valid confidence sets that remain computationally tractable even in settings with nuisance parameters. We first introduce least favorable critical values which deliver non-conservative tests if all moments are binding. Next, we introduce a novel conditional inference approach which ensures a strong form of insensitivity to slack moments. Our recommended approach is a hybrid technique which combines desirable aspects of the least favorable and conditional methods. The hybrid approach performs well in simulations calibrated to Wollmann (2018), with favorable power and computational time comparisons relative to existing alternatives.",Inference for Linear Conditional Moment Inequalities,2019-09-22 21:24:09,"Isaiah Andrews, Jonathan Roth, Ariel Pakes","http://arxiv.org/abs/1909.10062v5, http://arxiv.org/pdf/1909.10062v5",econ.EM 28996,em,"There are many environments in econometrics which require nonseparable modeling of a structural disturbance. In a nonseparable model with endogenous regressors, key conditions are validity of instrumental variables and monotonicity of the model in a scalar unobservable variable. Under these conditions the nonseparable model is equivalent to an instrumental quantile regression model. A failure of the key conditions, however, makes instrumental quantile regression potentially inconsistent. This paper develops a methodology for testing the hypothesis that the instrumental quantile regression model is correctly specified. Our test statistic is asymptotically normally distributed under correct specification and consistent against any alternative model. In addition, test statistics to justify the model simplification are established. Finite sample properties are examined in a Monte Carlo study and an empirical illustration is provided.",Specification Testing in Nonparametric Instrumental Quantile Regression,2019-09-23 05:41:14,Christoph Breunig,"http://dx.doi.org/10.1017/S0266466619000288, http://arxiv.org/abs/1909.10129v1, http://arxiv.org/pdf/1909.10129v1",econ.EM 28997,em,"This paper proposes several tests of restricted specification in nonparametric instrumental regression. Based on series estimators, test statistics are established that allow for tests of the general model against a parametric or nonparametric specification as well as a test of exogeneity of the vector of regressors. The tests' asymptotic distributions under correct specification are derived and their consistency against any alternative model is shown. Under a sequence of local alternative hypotheses, the asymptotic distributions of the tests are derived. Moreover, uniform consistency is established over a class of alternatives whose distance to the null hypothesis shrinks appropriately as the sample size increases.
A Monte Carlo study examines the finite sample performance of the test statistics.",Goodness-of-Fit Tests based on Series Estimators in Nonparametric Instrumental Regression,2019-09-23 05:55:22,Christoph Breunig,"http://dx.doi.org/10.1016/j.jeconom.2014.09.006, http://arxiv.org/abs/1909.10133v1, http://arxiv.org/pdf/1909.10133v1",econ.EM 28998,em,"Nonparametric series regression often involves specification search over the tuning parameter, i.e., evaluating estimates and confidence intervals with a different number of series terms. This paper develops pointwise and uniform inferences for conditional mean functions in nonparametric series estimations that are uniform in the number of series terms. As a result, this paper constructs confidence intervals and confidence bands with possibly data-dependent series terms that have valid asymptotic coverage probabilities. This paper also considers a partially linear model setup and develops inference methods for the parametric part uniform in the number of series terms. The finite sample performance of the proposed methods is investigated in various simulation setups as well as in an illustrative example, i.e., the nonparametric estimation of the wage elasticity of the expected labor supply from Blomquist and Newey (2002).",Inference in Nonparametric Series Estimation with Specification Searches for the Number of Series Terms,2019-09-26 17:45:13,Byunghoon Kang,"http://arxiv.org/abs/1909.12162v2, http://arxiv.org/pdf/1909.12162v2",econ.EM 28999,em,"In this study, we investigate estimation and inference on a low-dimensional causal parameter in the presence of high-dimensional controls in an instrumental variable quantile regression. Our proposed econometric procedure builds on the Neyman-type orthogonal moment conditions of Chernozhukov, Hansen and Wuthrich (2018) and is thus relatively insensitive to the estimation of the nuisance parameters. The Monte Carlo experiments show that the estimator copes well with high-dimensional controls. We also apply the procedure to empirically reinvestigate the quantile treatment effect of 401(k) participation on accumulated wealth.",Debiased/Double Machine Learning for Instrumental Variable Quantile Regressions,2019-09-27 13:11:18,"Jau-er Chen, Chien-Hsun Huang, Jia-Jyun Tien","http://arxiv.org/abs/1909.12592v3, http://arxiv.org/pdf/1909.12592v3",econ.EM 29000,em,"Price indexes in time and space are a most relevant topic in statistical analysis from both the methodological and the application side. In this paper a price index providing a novel and effective solution to price indexes over several periods and among several countries, that is, in both a multi-period and a multilateral framework, is devised. The reference basket of the devised index is the union of the intersections of the baskets of all periods/countries in pairs. As such, it provides a broader coverage than usual indexes. Index closed-form expressions and updating formulas are provided and properties investigated. Last, applications with real and simulated data provide evidence of the performance of the proposed index.",An econometric analysis of the Italian cultural supply,2019-09-30 22:58:41,"Consuelo Nava, Maria Grazia Zoia","http://arxiv.org/abs/1910.00073v3, http://arxiv.org/pdf/1910.00073v3",econ.EM 29001,em,"We study the informational content of factor structures in discrete triangular systems.
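To make the cross-fitting logic behind debiased/double machine learning concrete, here is a generic partialling-out sketch with Lasso nuisances. It targets a mean regression coefficient rather than the IVQR moment conditions used in the abstract above, so it only illustrates the Neyman-orthogonality and sample-splitting mechanics; the function name and learners are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

def dml_partialling_out(y, d, X, n_splits=5, seed=0):
    """Cross-fitted partialling-out estimate of the coefficient on d with many controls X.

    Residualize y and d on X with Lasso fitted on held-out folds, then regress residual
    on residual. Generic Neyman-orthogonal recipe, not the paper's IVQR procedure.
    """
    n = len(y)
    y_res, d_res = np.empty(n), np.empty(n)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        y_res[test] = y[test] - LassoCV(cv=5).fit(X[train], y[train]).predict(X[test])
        d_res[test] = d[test] - LassoCV(cv=5).fit(X[train], d[train]).predict(X[test])
    theta = (d_res @ y_res) / (d_res @ d_res)            # final residual-on-residual OLS
    eps = y_res - theta * d_res
    var = np.mean(eps ** 2 * d_res ** 2) / np.mean(d_res ** 2) ** 2 / n
    return theta, np.sqrt(var)
```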
Factor structures have been employed in a variety of settings in cross sectional and panel data models, and in this paper we formally quantify their identifying power in a bivariate system often employed in the treatment effects literature. Our main findings are that imposing a factor structure yields point identification of parameters of interest, such as the coefficient associated with the endogenous regressor in the outcome equation, under weaker assumptions than usually required in these models. In particular, we show that a ""non-standard"" exclusion restriction that requires an explanatory variable in the outcome equation to be excluded from the treatment equation is no longer necessary for identification, even in cases where all of the regressors from the outcome equation are discrete. We also establish identification of the coefficient of the endogenous regressor in models with more general factor structures, in situations where one has access to at least two continuous measurements of the common factor.",Informational Content of Factor Structures in Simultaneous Binary Response Models,2019-10-03 09:29:40,"Shakeeb Khan, Arnaud Maurel, Yichong Zhang","http://arxiv.org/abs/1910.01318v3, http://arxiv.org/pdf/1910.01318v3",econ.EM 29002,em,"This paper analyzes identifiability properties of structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. It is well known that SVARMA models driven by Gaussian errors are not identified without imposing further identifying restrictions on the parameters. Even in reduced form and assuming stability and invertibility, vector autoregressive moving average models are in general not identified without requiring certain parameter matrices to be non-singular. Independence and non-Gaussianity of the shocks are used to show that they are identified up to permutations and scalings. In this way, typically imposed identifying restrictions are made testable. Furthermore, we introduce a maximum-likelihood estimator of the non-Gaussian SVARMA model which is consistent and asymptotically normally distributed.",Identification and Estimation of SVARMA models with Independent and Non-Gaussian Inputs,2019-10-09 19:06:46,Bernd Funovits,"http://arxiv.org/abs/1910.04087v1, http://arxiv.org/pdf/1910.04087v1",econ.EM 29003,em,"We generalize well-known results on structural identifiability of vector autoregressive models (VAR) to the case where the innovation covariance matrix has reduced rank. Structural singular VAR models appear, for example, as solutions of rational expectation models where the number of shocks is usually smaller than the number of endogenous variables, and as an essential building block in dynamic factor models. We show that order conditions for identifiability are misleading in the singular case and provide a rank condition for identifiability of the noise parameters. Since the Yule-Walker equations may have multiple solutions, we analyze the effect of restrictions on the system parameters on over- and underidentification in detail and provide easily verifiable conditions.",Identifiability of Structural Singular Vector Autoregressive Models,2019-10-09 19:18:57,"Bernd Funovits, Alexander Braumann","http://dx.doi.org/10.1111/jtsa.12576, http://arxiv.org/abs/1910.04096v2, http://arxiv.org/pdf/1910.04096v2",econ.EM 29014,em,"This paper studies inter-trade durations in the NASDAQ limit order market and finds that inter-trade durations in ultra-high frequency have two modes.
One mode is on the order of approximately 10^{-4} seconds, and the other is on the order of 1 second. This phenomenon and other empirical evidence suggest that there are two regimes associated with the dynamics of inter-trade durations, and that the regime switches are driven by high-frequency traders (HFTs) alternating between providing and taking liquidity. To find how the two modes depend on information in the limit order book (LOB), we propose a two-state multifactor regime-switching (MF-RSD) model for inter-trade durations, in which the transition probability matrices are time-varying and depend on some lagged LOB factors. The MF-RSD model has good in-sample fit and superior out-of-sample performance compared with some benchmark duration models. Our findings on the effects of LOB factors on inter-trade durations help us understand more about high-frequency market microstructure.",A multifactor regime-switching model for inter-trade durations in the limit order market,2019-12-02 16:30:42,"Zhicheng Li, Haipeng Xing, Xinyun Chen","http://arxiv.org/abs/1912.00764v1, http://arxiv.org/pdf/1912.00764v1",econ.EM 29004,em,"This paper proposes averaging estimation methods to improve the finite-sample efficiency of instrumental variables quantile regression (IVQR) estimation. First, I apply Cheng, Liao, and Shi's (2019) averaging GMM framework to the IVQR model. I propose using the usual quantile regression moments for averaging to take advantage of cases when endogeneity is not too strong. I also propose using two-stage least squares slope moments to take advantage of cases when heterogeneity is not too strong. The empirical optimal weight formula of Cheng et al. (2019) helps optimize the bias-variance tradeoff, ensuring uniformly better (asymptotic) risk of the averaging estimator over the standard IVQR estimator under certain conditions. My implementation involves many computational considerations and builds on recent developments in the quantile literature. Second, I propose a bootstrap method that directly averages among IVQR, quantile regression, and two-stage least squares estimators. More specifically, I find the optimal weights in the bootstrap world and then apply the bootstrap-optimal weights to the original sample. The bootstrap method is simpler to compute and generally performs better in simulations, but it lacks the formal uniform dominance results of Cheng et al. (2019). Simulation results demonstrate that in the multiple-regressors/instruments case, both the GMM averaging and bootstrap estimators have uniformly smaller risk than the IVQR estimator across data-generating processes (DGPs) with all kinds of combinations of different endogeneity levels and heterogeneity levels. In DGPs with a single endogenous regressor and instrument, where averaging estimation is known to have the least opportunity for improvement, the proposed averaging estimators outperform the IVQR estimator in some cases but not others.",Averaging estimation for instrumental variables quantile regression,2019-10-09 23:48:58,Xin Liu,"http://arxiv.org/abs/1910.04245v1, http://arxiv.org/pdf/1910.04245v1",econ.EM 29005,em,"This paper proposes an imputation procedure that uses the factors estimated from a tall block along with the re-rotated loadings estimated from a wide block to impute missing values in a panel of data.
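A stylized sketch of the tall-block/wide-block imputation idea just described: extract principal-component factors from the fully observed columns, back out loadings for the remaining series from the fully observed rows, and fill the missing block with the estimated common component. This simplified variant replaces the paper's re-rotation step with a least-squares projection on the tall-block factors, and all names are illustrative.

```python
import numpy as np

def impute_missing_block(X, T_obs, N_obs, r):
    """Fill the missing lower-right block of a T x N panel with a factor model.

    Columns 0..N_obs-1 form the 'tall' block (observed in every period);
    rows 0..T_obs-1 form the 'wide' block (observed for every series).
    Simplified two-step: PCA factors from the tall block, loadings for the
    remaining series from the wide-block rows, then impute with F @ Lambda'.
    """
    T, N = X.shape
    tall = X[:, :N_obs]
    # factors for all T periods: top-r principal components of the tall block
    eigval, eigvec = np.linalg.eigh(tall @ tall.T / (T * N_obs))
    F = np.sqrt(T) * eigvec[:, -r:]                          # T x r, normalized F'F/T = I
    # loadings of the series that are only observed in the wide block
    Lam = np.linalg.lstsq(F[:T_obs], X[:T_obs, N_obs:], rcond=None)[0].T
    # fill the missing block with the estimated common component
    X_imputed = X.copy()
    X_imputed[T_obs:, N_obs:] = F[T_obs:] @ Lam.T
    return X_imputed
```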
Assuming that a strong factor structure holds for the full panel of data and its sub-blocks, it is shown that the common component can be consistently estimated at four different rates of convergence without requiring regularization or iteration. An asymptotic analysis of the estimation error is obtained. An application of our analysis is estimation of counterfactuals when potential outcomes have a factor structure. We study the estimation of average and individual treatment effects on the treated and establish a normal distribution theory that can be useful for hypothesis testing.","Matrix Completion, Counterfactuals, and Factor Analysis of Missing Data",2019-10-15 15:18:35,"Jushan Bai, Serena Ng","http://arxiv.org/abs/1910.06677v5, http://arxiv.org/pdf/1910.06677v5",econ.EM 29006,em,"This paper develops a new standard-error estimator for linear panel data models. The proposed estimator is robust to heteroskedasticity, serial correlation, and cross-sectional correlation of unknown forms. The serial correlation is controlled by the Newey-West method. To control for cross-sectional correlations, we propose to use the thresholding method, without assuming the clusters to be known. We establish the consistency of the proposed estimator. Monte Carlo simulations show the method works well. An empirical application is considered.",Standard Errors for Panel Data Models with Unknown Clusters,2019-10-16 18:21:36,"Jushan Bai, Sung Hoon Choi, Yuan Liao","http://arxiv.org/abs/1910.07406v2, http://arxiv.org/pdf/1910.07406v2",econ.EM 29007,em,"This article provides a selective review on the recent literature on econometric models of network formation. The survey starts with a brief exposition on basic concepts and tools for the statistical description of networks. I then offer a review of dyadic models, focussing on statistical models on pairs of nodes and describe several developments of interest to the econometrics literature. The article also presents a discussion of non-dyadic models where link formation might be influenced by the presence or absence of additional links, which themselves are subject to similar influences. This is related to the statistical literature on conditionally specified models and the econometrics of game theoretical models. I close with a (non-exhaustive) discussion of potential areas for further development.",Econometric Models of Network Formation,2019-10-17 12:18:59,Aureo de Paula,"http://arxiv.org/abs/1910.07781v2, http://arxiv.org/pdf/1910.07781v2",econ.EM 29008,em,"Long memory in the sense of slowly decaying autocorrelations is a stylized fact in many time series from economics and finance. The fractionally integrated process is the workhorse model for the analysis of these time series. Nevertheless, there is mixed evidence in the literature concerning its usefulness for forecasting and how forecasting based on it should be implemented. Employing pseudo-out-of-sample forecasting on inflation and realized volatility time series and simulations we show that methods based on fractional integration clearly are superior to alternative methods not accounting for long memory, including autoregressions and exponential smoothing. Our proposal of choosing a fixed fractional integration parameter of $d=0.5$ a priori yields the best results overall, capturing long memory behavior, but overcoming the deficiencies of methods using an estimated parameter. 
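The forecasting recipe with a fixed fractional integration parameter described in the long-memory abstract above can be sketched as: fractionally difference the series with d = 0.5 using the truncated binomial filter, fit a simple short-memory model to the differenced series, and invert the filter to obtain a level forecast. The AR-by-OLS step and all function names are assumptions; the paper's comparisons of Whittle-type estimators of d are not reproduced.

```python
import numpy as np

def fracdiff_weights(d, n):
    """Coefficients of (1 - L)^d up to lag n-1: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def forecast_fixed_d(y, d=0.5, ar_lag=1):
    """One-step-ahead forecast of y, treating it as fractionally integrated with fixed d."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    w = fracdiff_weights(d, T + 1)
    # truncated fractional difference: x_t = sum_k pi_k y_{t-k}
    x = np.array([w[:t + 1] @ y[t::-1] for t in range(T)])
    # simple AR(ar_lag) for the (approximately short-memory) differenced series, by OLS
    Z = np.column_stack([np.ones(T - ar_lag)] +
                        [x[ar_lag - j - 1:T - j - 1] for j in range(ar_lag)])
    beta, *_ = np.linalg.lstsq(Z, x[ar_lag:], rcond=None)
    x_next = beta[0] + beta[1:] @ x[:-ar_lag - 1:-1]
    # invert the filter: y_{T+1} = x_{T+1} - sum_{k>=1} pi_k y_{T+1-k}
    return x_next - w[1:T + 1] @ y[::-1]
```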
Regarding the implementation of forecasting methods based on fractional integration, we use simulations to compare local and global semiparametric and parametric estimators of the long memory parameter from the Whittle family and provide asymptotic theory backed up by simulations to compare different mean estimators. Both of these analyses lead to new results, which are also of interest outside the realm of forecasting.",Forecasting under Long Memory and Nonstationarity,2019-10-18 02:57:34,"Uwe Hassler, Marc-Oliver Pohle","http://dx.doi.org/10.1093/jjfinec/nbab017, http://arxiv.org/abs/1910.08202v1, http://arxiv.org/pdf/1910.08202v1",econ.EM 29009,em,"This paper develops the inferential theory for latent factor models estimated from large dimensional panel data with missing observations. We propose an easy-to-use all-purpose estimator for a latent factor model by applying principal component analysis to an adjusted covariance matrix estimated from partially observed panel data. We derive the asymptotic distribution for the estimated factors, loadings and the imputed values under an approximate factor model and general missing patterns. The key application is to estimate counterfactual outcomes in causal inference from panel data. The unobserved control group is modeled as missing values, which are inferred from the latent factor model. The inferential theory for the imputed values allows us to test for individual treatment effects at any time under general adoption patterns where the units can be affected by unobserved factors.",Large Dimensional Latent Factor Modeling with Missing Observations and Applications to Causal Inference,2019-10-18 08:38:04,"Ruoxuan Xiong, Markus Pelger","http://arxiv.org/abs/1910.08273v6, http://arxiv.org/pdf/1910.08273v6",econ.EM 29017,em,"We discuss the issue of estimating large-scale vector autoregressive (VAR) models with stochastic volatility in real-time situations where data are sampled at different frequencies. In the case of a large VAR with stochastic volatility, the mixed-frequency data warrant an additional step in the already computationally challenging Markov Chain Monte Carlo algorithm used to sample from the posterior distribution of the parameters. We suggest the use of a factor stochastic volatility model to capture a time-varying error covariance structure. Because the factor stochastic volatility model renders the equations of the VAR conditionally independent, settling for this particular stochastic volatility model comes with major computational benefits. First, we are able to improve upon the mixed-frequency simulation smoothing step by leveraging a univariate and adaptive filtering algorithm. Second, the regression parameters can be sampled equation-by-equation in parallel. These computational features of the model alleviate the computational burden and make it possible to move the mixed-frequency VAR to the high-dimensional regime. We illustrate the model by an application to US data using our mixed-frequency VAR with 20, 34 and 119 variables.",Estimating Large Mixed-Frequency Bayesian VAR Models,2019-12-04 22:59:03,"Sebastian Ankargren, Paulina Jonéus","http://arxiv.org/abs/1912.02231v1, http://arxiv.org/pdf/1912.02231v1",econ.EM 29018,em,"We introduce a synthetic control methodology to study policies with staggered adoption. Many policies, such as the board gender quota, are replicated by other policy setters at different time frames. 
Our method estimates the dynamic average treatment effects on the treated using variation introduced by the staggered adoption of policies. Our method gives asymptotically unbiased estimators of many interesting quantities and delivers asymptotically valid inference. By using the proposed method and national labor data in Europe, we find evidence that quota regulation on board diversity leads to a decrease in part-time employment, and an increase in full-time employment for female professionals.",Synthetic Control Inference for Staggered Adoption: Estimating the Dynamic Effects of Board Gender Diversity Policies,2019-12-13 07:29:19,"Jianfei Cao, Shirley Lu","http://arxiv.org/abs/1912.06320v1, http://arxiv.org/pdf/1912.06320v1",econ.EM 29019,em,"Haavelmo (1944) proposed a probabilistic structure for econometric modeling, aiming to make econometrics useful for decision making. His fundamental contribution has become thoroughly embedded in subsequent econometric research, yet it could not answer all the deep issues that the author raised. Notably, Haavelmo struggled to formalize the implications for decision making of the fact that models can at most approximate actuality. In the same period, Wald (1939, 1945) initiated his own seminal development of statistical decision theory. Haavelmo favorably cited Wald, but econometrics did not embrace statistical decision theory. Instead, it focused on study of identification, estimation, and statistical inference. This paper proposes statistical decision theory as a framework for evaluation of the performance of models in decision making. I particularly consider the common practice of as-if optimization: specification of a model, point estimation of its parameters, and use of the point estimate to make a decision that would be optimal if the estimate were accurate. A central theme is that one should evaluate as-if optimization or any other model-based decision rule by its performance across the state space, listing all states of nature that one believes feasible, not across the model space. I apply the theme to prediction and treatment choice. Statistical decision theory is conceptually simple, but application is often challenging. Advancement of computation is the primary task to continue building the foundations sketched by Haavelmo and Wald.",Econometrics For Decision Making: Building Foundations Sketched By Haavelmo And Wald,2019-12-17 21:47:30,Charles F. Manski,"http://arxiv.org/abs/1912.08726v4, http://arxiv.org/pdf/1912.08726v4",econ.EM 29020,em,"We analyze different types of simulations that applied researchers may use to assess their inference methods. We show that different types of simulations vary in many dimensions when considered as inference assessments. Moreover, we show that natural ways of running simulations may lead to misleading conclusions, and we propose alternatives. We then provide evidence that even some simple assessments can detect problems in many different settings. Alternative assessments that potentially better approximate the true data generating process may detect problems that simpler assessments would not detect. However, they are not uniformly dominant in this dimension, and may imply some costs.",Assessing Inference Methods,2019-12-18 21:09:57,Bruno Ferman,"http://arxiv.org/abs/1912.08772v13, http://arxiv.org/pdf/1912.08772v13",econ.EM 29021,em,"Learning about cause and effect is arguably the main goal in applied econometrics. 
In practice, the validity of these causal inferences is contingent on a number of critical assumptions regarding the type of data that has been collected and the substantive knowledge that is available. For instance, unobserved confounding factors threaten the internal validity of estimates, data availability is often limited to non-random, selection-biased samples, causal effects need to be learned from surrogate experiments with imperfect compliance, and causal knowledge has to be extrapolated across structurally heterogeneous populations. A powerful causal inference framework is required to tackle these challenges, which plague most data analysis to varying degrees. Building on the structural approach to causality introduced by Haavelmo (1943) and the graph-theoretic framework proposed by Pearl (1995), the artificial intelligence (AI) literature has developed a wide array of techniques for causal learning that allow to leverage information from various imperfect, heterogeneous, and biased data sources (Bareinboim and Pearl, 2016). In this paper, we discuss recent advances in this literature that have the potential to contribute to econometric methodology along three dimensions. First, they provide a unified and comprehensive framework for causal inference, in which the aforementioned problems can be addressed in full generality. Second, due to their origin in AI, they come together with sound, efficient, and complete algorithmic criteria for automatization of the corresponding identification task. And third, because of the nonparametric description of structural models that graph-theoretic approaches build on, they combine the strengths of both structural econometrics as well as the potential outcomes framework, and thus offer an effective middle ground between these two literature streams.",Causal Inference and Data Fusion in Econometrics,2019-12-19 13:24:04,"Paul Hünermund, Elias Bareinboim","http://arxiv.org/abs/1912.09104v4, http://arxiv.org/pdf/1912.09104v4",econ.EM 29022,em,"We study the use of Temporal-Difference learning for estimating the structural parameters in dynamic discrete choice models. Our algorithms are based on the conditional choice probability approach but use functional approximations to estimate various terms in the pseudo-likelihood function. We suggest two approaches: The first - linear semi-gradient - provides approximations to the recursive terms using basis functions. The second - Approximate Value Iteration - builds a sequence of approximations to the recursive terms by solving non-parametric estimation problems. Our approaches are fast and naturally allow for continuous and/or high-dimensional state spaces. Furthermore, they do not require specification of transition densities. In dynamic games, they avoid integrating over other players' actions, further heightening the computational advantage. Our proposals can be paired with popular existing methods such as pseudo-maximum-likelihood, and we propose locally robust corrections for the latter to achieve parametric rates of convergence. Monte Carlo simulations confirm the properties of our algorithms in practice.",Temporal-Difference estimation of dynamic discrete choice models,2019-12-19 22:21:49,"Karun Adusumilli, Dita Eckardt","http://arxiv.org/abs/1912.09509v2, http://arxiv.org/pdf/1912.09509v2",econ.EM 29034,em,"Researchers increasingly wish to estimate time-varying parameter (TVP) regressions which involve a large number of explanatory variables. 
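As background for the Temporal-Difference estimation abstract above, the following sketch shows the generic linear semi-gradient TD(0) update for approximating a value function from observed transitions. It is only the textbook reinforcement-learning building block, not the paper's conditional-choice-probability pseudo-likelihood estimator; the feature map and toy data are hypothetical.

```python
import numpy as np

def td0_linear(transitions, phi, n_features, alpha=0.05, gamma=0.95, n_sweeps=50):
    """Linear semi-gradient TD(0) approximation of a value function.

    transitions : list of (state, reward, next_state) tuples from observed data
    phi         : function mapping a state to an n_features basis vector
    Returns weights w such that V(s) is approximated by phi(s) @ w.
    """
    w = np.zeros(n_features)
    for _ in range(n_sweeps):
        for s, r, s_next in transitions:
            td_error = r + gamma * phi(s_next) @ w - phi(s) @ w
            w += alpha * td_error * phi(s)               # semi-gradient update
    return w

# toy usage: two states with one-hot basis functions
phi = lambda s: np.eye(2)[s]
data = [(0, 1.0, 1), (1, 0.0, 0)] * 200
print(td0_linear(data, phi, n_features=2))
```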
Including prior information to mitigate over-parameterization concerns has led many to use Bayesian methods. However, Bayesian Markov Chain Monte Carlo (MCMC) methods can be very computationally demanding. In this paper, we develop computationally efficient Bayesian methods for estimating TVP models using an integrated rotated Gaussian approximation (IRGA). This exploits the fact that whereas constant coefficients on regressors are often important, most of the TVPs are often unimportant. Since Gaussian distributions are invariant to rotations, we can split the posterior into two parts: one involving the constant coefficients, the other involving the TVPs. Approximate methods are used on the latter and, conditional on these, the former are estimated with precision using MCMC methods. In empirical exercises involving artificial data and a large macroeconomic data set, we show the accuracy and computational benefits of IRGA methods.",Bayesian Inference in High-Dimensional Time-varying Parameter Models using Integrated Rotated Gaussian Approximations,2020-02-24 17:07:50,"Florian Huber, Gary Koop, Michael Pfarrhofer","http://arxiv.org/abs/2002.10274v1, http://arxiv.org/pdf/2002.10274v1",econ.EM 29023,em,"Dynamic treatment regimes are treatment allocations tailored to heterogeneous individuals. The optimal dynamic treatment regime is a regime that maximizes counterfactual welfare. We introduce a framework in which we can partially learn the optimal dynamic regime from observational data, relaxing the sequential randomization assumption commonly employed in the literature but instead using (binary) instrumental variables. We propose the notion of sharp partial ordering of counterfactual welfares with respect to dynamic regimes and establish a mapping from the data to the partial ordering via a set of linear programs. We then characterize the identified set of the optimal regime as the set of maximal elements associated with the partial ordering. We relate the notion of partial ordering with a more conventional notion of partial identification using topological sorts. Practically, topological sorts can serve as a policy benchmark for a policymaker. We apply our method to understand returns to schooling and post-school training as a sequence of treatments by combining data from multiple sources. The framework of this paper can be used beyond the current context, e.g., in establishing rankings of multiple treatments or policies across different counterfactual scenarios.",Optimal Dynamic Treatment Regimes and Partial Welfare Ordering,2019-12-20 21:43:01,Sukjin Han,"http://arxiv.org/abs/1912.10014v4, http://arxiv.org/pdf/1912.10014v4",econ.EM 29024,em,"This paper presents a novel deep learning-based travel behaviour choice model. Our proposed Residual Logit (ResLogit) model formulation seamlessly integrates a Deep Neural Network (DNN) architecture into a multinomial logit model. Recently, DNN models such as the Multi-layer Perceptron (MLP) and the Recurrent Neural Network (RNN) have shown remarkable success in modelling complex and noisy behavioural data.
However, econometric studies have argued that machine learning techniques are a `black-box' and difficult to interpret for use in choice analysis. We develop a data-driven choice model that extends the systematic utility function to incorporate non-linear cross-effects using a series of residual layers and using skipped connections to handle model identifiability in estimating a large number of parameters. The model structure accounts for cross-effects and choice heterogeneity arising from substitution, interactions with non-chosen alternatives and other effects in a non-linear manner. We describe the formulation, model estimation, interpretability and examine the relative performance and econometric implications of our proposed model. We present an illustrative example of the model on a classic red/blue bus choice scenario. For a real-world application, we use a travel mode choice dataset to analyze the model characteristics compared to traditional neural networks and Logit formulations. Our findings show that our ResLogit approach significantly outperforms MLP models while providing interpretability similar to that of a Multinomial Logit model.",ResLogit: A residual neural network logit model for data-driven choice modelling,2019-12-20 22:02:58,"Melvin Wong, Bilal Farooq","http://arxiv.org/abs/1912.10058v2, http://arxiv.org/pdf/1912.10058v2",econ.EM 29025,em,"We propose a new sequential Efficient Pseudo-Likelihood (k-EPL) estimator for dynamic discrete choice games of incomplete information. k-EPL considers the joint behavior of multiple players simultaneously, as opposed to individual responses to other agents' equilibrium play. This, in addition to reframing the problem from conditional choice probability (CCP) space to value function space, yields a computationally tractable, stable, and efficient estimator. We show that each iteration in the k-EPL sequence is consistent and asymptotically efficient, so the first-order asymptotic properties do not vary across iterations. Furthermore, we show the sequence achieves higher-order equivalence to the finite-sample maximum likelihood estimator with iteration and that the sequence of estimators converges almost surely to the maximum likelihood estimator at a nearly-superlinear rate when the data are generated by any regular Markov perfect equilibrium, including equilibria that lead to inconsistency of other sequential estimators. When utility is linear in parameters, k-EPL iterations are computationally simple, only requiring that the researcher solve linear systems of equations to generate pseudo-regressors which are used in a static logit/probit regression. Monte Carlo simulations demonstrate the theoretical results and show k-EPL's good performance in finite samples in both small- and large-scale games, even when the game admits spurious equilibria in addition to one that generated the data. We apply the estimator to study the role of competition in the U.S. wholesale club industry.",Efficient and Convergent Sequential Pseudo-Likelihood Estimation of Dynamic Discrete Games,2019-12-22 20:34:23,"Adam Dearing, Jason R. Blevins","http://arxiv.org/abs/1912.10488v5, http://arxiv.org/pdf/1912.10488v5",econ.EM 29026,em,"We propose an optimal-transport-based matching method to nonparametrically estimate linear models with independent latent variables. The method consists in generating pseudo-observations from the latent variables, so that the Euclidean distance between the model's predictions and their matched counterparts in the data is minimized.
We show that our nonparametric estimator is consistent, and we document that it performs well in simulated data. We apply this method to study the cyclicality of permanent and transitory income shocks in the Panel Study of Income Dynamics. We find that the dispersion of income shocks is approximately acyclical, whereas the skewness of permanent shocks is procyclical. By comparison, we find that the dispersion and skewness of shocks to hourly wages vary little with the business cycle.",Recovering Latent Variables by Matching,2019-12-30 23:49:27,"Manuel Arellano, Stephane Bonhomme","http://arxiv.org/abs/1912.13081v1, http://arxiv.org/pdf/1912.13081v1",econ.EM 29027,em,"Markov switching models are a popular family of models that introduces time-variation in the parameters in the form of their state- or regime-specific values. Importantly, this time-variation is governed by a discrete-valued latent stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period (hence the Markov property) and by the transition matrix. The latter characterizes the properties of the Markov process by determining with what probability each of the states can be visited next period, given the state in the current period. This setup gives rise to the two main advantages of Markov switching models: the estimation of the probability of state occurrences in each of the sample periods by using filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility for improved interpretations of the parameters associated with specific regimes combined with the corresponding regime probabilities, as well as for improved forecasting performance based on persistent regimes and parameters characterizing them.",Markov Switching,2020-02-10 11:29:23,"Yong Song, Tomasz Woźniak","http://dx.doi.org/10.1093/acrefore/9780190625979.013.174, http://arxiv.org/abs/2002.03598v1, http://arxiv.org/pdf/2002.03598v1",econ.EM 29028,em,"Given the extreme dependence of agriculture on weather conditions, this paper analyses the effect of climatic variations on this economic sector, by considering both a huge dataset and a flexible spatio-temporal model specification. In particular, we study the response of N-fertilizer application to abnormal weather conditions, while accounting for other relevant control variables. The dataset consists of gridded data spanning over 21 years (1993-2013), while the methodological strategy makes use of a spatial dynamic panel data (SDPD) model that accounts for both space and time fixed effects, besides dealing with both space and time dependences. Time-invariant short and long term effects, as well as time-varying marginal effects are also properly defined, revealing interesting results on the impact of both GDP and weather conditions on fertilizer utilizations. The analysis considers four macro-regions -- Europe, South America, South-East Asia and Africa -- to allow for comparisons among different socio-economic societies.
In addition to finding both spatial (in the form of knowledge spillover effects) and temporal dependences as well as good support for the existence of an environmental Kuznets curve for fertilizer application, the paper shows peculiar responses of N-fertilization to deviations from normal weather conditions of moisture for each selected region, calling for ad hoc policy interventions.",The Effect of Weather Conditions on Fertilizer Applications: A Spatial Dynamic Panel Data Analysis,2020-02-10 19:31:15,"Anna Gloria Billè, Marco Rogna","http://arxiv.org/abs/2002.03922v2, http://arxiv.org/pdf/2002.03922v2",econ.EM 29029,em,"This article deals with parameterisation, identifiability, and maximum likelihood (ML) estimation of possibly non-invertible structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. In contrast to previous literature, the novel representation of the MA polynomial matrix using the Wiener-Hopf factorisation (WHF) focuses on the multivariate nature of the model, generates insights into its structure, and uses this structure for devising optimisation algorithms. In particular, it allows one to parameterise the location of determinantal zeros inside and outside the unit circle, and it allows for MA zeros at zero, which can be interpreted as informational delays. This is highly relevant for data-driven evaluation of Dynamic Stochastic General Equilibrium (DSGE) models. Typically imposed identifying restrictions on the shock transmission matrix as well as on the determinantal root location are made testable. Furthermore, we provide low-level conditions for asymptotic normality of the ML estimator and analytic expressions for the score and the information matrix. As an application, we estimate the Blanchard and Quah model and show that our method provides further insights regarding non-invertibility using a standard macroeconometric model. These and further analyses are implemented in a well-documented R package.",Identifiability and Estimation of Possibly Non-Invertible SVARMA Models: A New Parametrisation,2020-02-11 15:35:14,Bernd Funovits,"http://arxiv.org/abs/2002.04346v2, http://arxiv.org/pdf/2002.04346v2",econ.EM 29030,em,"This paper analyses the number of free parameters and solutions of the structural difference equation obtained from a linear multivariate rational expectations model. First, it is shown that the number of free parameters depends on the structure of the zeros at zero of a certain matrix polynomial of the structural difference equation and the number of inputs of the rational expectations model. Second, the implications of requiring that some components of the endogenous variables be predetermined are analysed. Third, a condition for existence and uniqueness of a causal stationary solution is given.",The Dimension of the Set of Causal Solutions of Linear Multivariate Rational Expectations Models,2020-02-11 16:33:04,Bernd Funovits,"http://arxiv.org/abs/2002.04369v1, http://arxiv.org/pdf/2002.04369v1",econ.EM 29031,em,"We construct long-term prediction intervals for time-aggregated future values of univariate economic time series. We propose computational adjustments of the existing methods to improve coverage probability under a small sample constraint. A pseudo-out-of-sample evaluation shows that our methods perform at least as well as selected alternative methods based on model-implied Bayesian approaches and bootstrapping.
Our most successful method yields prediction intervals for eight macroeconomic indicators over a horizon spanning several decades.",Long-term prediction intervals of economic time series,2020-02-13 11:11:18,"Marek Chudy, Sayar Karmakar, Wei Biao Wu","http://arxiv.org/abs/2002.05384v1, http://arxiv.org/pdf/2002.05384v1",econ.EM 29032,em,"Conjugate priors allow for fast inference in large dimensional vector autoregressive (VAR) models but, at the same time, introduce the restriction that each equation features the same set of explanatory variables. This paper proposes a straightforward means of post-processing posterior estimates of a conjugate Bayesian VAR to effectively perform equation-specific covariate selection. Compared to existing techniques using shrinkage alone, our approach combines shrinkage and sparsity in both the VAR coefficients and the error variance-covariance matrices, greatly reducing estimation uncertainty in large dimensions while maintaining computational tractability. We illustrate our approach by means of two applications. The first application uses synthetic data to investigate the properties of the model across different data-generating processes, while the second analyzes the predictive gains from sparsification in a forecasting exercise for US data.",Combining Shrinkage and Sparsity in Conjugate Vector Autoregressive Models,2020-02-20 17:45:38,"Niko Hauzenberger, Florian Huber, Luca Onorante","http://arxiv.org/abs/2002.08760v2, http://arxiv.org/pdf/2002.08760v2",econ.EM 29033,em,"This paper considers estimation and inference about tail features when the observations beyond some threshold are censored. We first show that ignoring such tail censoring could lead to substantial bias and size distortion, even if the censored probability is tiny. Second, we propose a new maximum likelihood estimator (MLE) based on the Pareto tail approximation and derive its asymptotic properties. Third, we provide a small sample modification to the MLE by resorting to Extreme Value theory. The MLE with this modification delivers excellent small sample performance, as shown by Monte Carlo simulations. We illustrate its empirical relevance by estimating (i) the tail index and the extreme quantiles of the US individual earnings with the Current Population Survey dataset and (ii) the tail index of the distribution of macroeconomic disasters and the coefficient of risk aversion using the dataset collected by Barro and Urs{\'u}a (2008). Our new empirical findings are substantially different from the existing literature.",Estimation and Inference about Tail Features with Tail Censored Data,2020-02-23 23:43:24,"Yulong Wang, Zhijie Xiao","http://arxiv.org/abs/2002.09982v1, http://arxiv.org/pdf/2002.09982v1",econ.EM 29290,em,"Discrete Choice Experiments (DCE) have been widely used in health economics, environmental valuation, and other disciplines. However, there is a lack of resources disclosing the whole procedure of carrying out a DCE. This document aims to assist anyone wishing to use the power of DCEs to understand people's behavior by providing a comprehensive guide to the procedure.
This guide contains all the code needed to design, implement, and analyze a DCE using only free software.","A step-by-step guide to design, implement, and analyze a discrete choice experiment",2020-09-23 19:13:10,Daniel Pérez-Troncoso,"http://arxiv.org/abs/2009.11235v1, http://arxiv.org/pdf/2009.11235v1",econ.EM 29035,em,"This paper studies the identification, estimation, and hypothesis testing problem in complete and incomplete economic models with testable assumptions. Testable assumptions ($A$) give strong and interpretable empirical content to the models but they also carry the possibility that some distribution of observed outcomes may reject these assumptions. A natural way to avoid this is to find a set of relaxed assumptions ($\tilde{A}$) that cannot be rejected by any distribution of observed outcome and the identified set of the parameter of interest is not changed when the original assumption is not rejected. The main contribution of this paper is to characterize the properties of such a relaxed assumption $\tilde{A}$ using a generalized definition of refutability and confirmability. I also propose a general method to construct such $\tilde{A}$. A general estimation and inference procedure is proposed and can be applied to most incomplete economic models. I apply my methodology to the instrument monotonicity assumption in Local Average Treatment Effect (LATE) estimation and to the sector selection assumption in a binary outcome Roy model of employment sector choice. In the LATE application, I use my general method to construct a set of relaxed assumptions $\tilde{A}$ that can never be rejected, and the identified set of LATE is the same as imposing $A$ when $A$ is not rejected. LATE is point identified under my extension $\tilde{A}$ in the LATE application. In the binary outcome Roy model, I use my method of incomplete models to relax Roy's sector selection assumption and characterize the identified set of the binary potential outcome as a polyhedron.",Estimating Economic Models with Testable Assumptions: Theory and Applications,2020-02-24 20:58:41,Moyu Liao,"http://arxiv.org/abs/2002.10415v3, http://arxiv.org/pdf/2002.10415v3",econ.EM 29036,em,"We examine the impact of annual hours worked on annual earnings by decomposing changes in the real annual earnings distribution into composition, structural and hours effects. We do so via a nonseparable simultaneous model of hours, wages and earnings. Using the Current Population Survey for the survey years 1976--2019, we find that changes in the female distribution of annual hours of work are important in explaining movements in inequality in female annual earnings. This captures the substantial changes in their employment behavior over this period. Movements in the male hours distribution only affect the lower part of their earnings distribution and reflect the sensitivity of these workers' annual hours of work to cyclical factors.",Hours Worked and the U.S. Distribution of Real Annual Earnings 1976-2019,2020-02-26 01:55:07,"Iván Fernández-Val, Franco Peracchi, Aico van Vuuren, Francis Vella","http://arxiv.org/abs/2002.11211v3, http://arxiv.org/pdf/2002.11211v3",econ.EM 29037,em,"This paper combines causal mediation analysis with double machine learning to control for observed confounders in a data-driven way under a selection-on-observables assumption in a high-dimensional setting. 
We consider the average indirect effect of a binary treatment operating through an intermediate variable (or mediator) on the causal path between the treatment and the outcome, as well as the unmediated direct effect. Estimation is based on efficient score functions, which possess a multiple robustness property w.r.t. misspecifications of the outcome, mediator, and treatment models. This property is key for selecting these models by double machine learning, which is combined with data splitting to prevent overfitting in the estimation of the effects of interest. We demonstrate that the direct and indirect effect estimators are asymptotically normal and root-n consistent under specific regularity conditions and investigate the finite sample properties of the suggested methods in a simulation study when considering lasso as machine learner. We also provide an empirical application to the U.S. National Longitudinal Survey of Youth, assessing the indirect effect of health insurance coverage on general health operating via routine checkups as mediator, as well as the direct effect. We find a moderate short term effect of health insurance coverage on general health which is, however, not mediated by routine checkups.",Causal mediation analysis with double machine learning,2020-02-28 16:39:49,"Helmut Farbmacher, Martin Huber, Lukáš Lafférs, Henrika Langen, Martin Spindler","http://arxiv.org/abs/2002.12710v6, http://arxiv.org/pdf/2002.12710v6",econ.EM 29038,em,"Alternative data sets are widely used for macroeconomic nowcasting together with machine learning--based tools. The latter are often applied without a complete picture of their theoretical nowcasting properties. Against this background, this paper proposes a theoretically grounded nowcasting methodology that allows researchers to incorporate alternative Google Search Data (GSD) among the predictors and that combines targeted preselection, Ridge regularization, and Generalized Cross Validation. Breaking with most existing literature, which focuses on asymptotic in-sample theoretical properties, we establish the theoretical out-of-sample properties of our methodology and support them by Monte-Carlo simulations. We apply our methodology to GSD to nowcast GDP growth rate of several countries during various economic periods. Our empirical findings support the idea that GSD tend to increase nowcasting accuracy, even after controlling for official variables, but that the gain differs between periods of recessions and of macroeconomic stability.",When are Google data useful to nowcast GDP? An approach via pre-selection and shrinkage,2020-07-01 09:58:00,"Laurent Ferrara, Anna Simoni","http://dx.doi.org/10.1080/07350015.2022.2116025, http://arxiv.org/abs/2007.00273v3, http://arxiv.org/pdf/2007.00273v3",econ.EM 29039,em,"In this paper, we estimate and leverage latent constant group structure to generate the point, set, and density forecasts for short dynamic panel data. We implement a nonparametric Bayesian approach to simultaneously identify coefficients and group membership in the random effects which are heterogeneous across groups but fixed within a group. This method allows us to flexibly incorporate subjective prior knowledge on the group structure that potentially improves the predictive accuracy. In Monte Carlo experiments, we demonstrate that our Bayesian grouped random effects (BGRE) estimators produce accurate estimates and score predictive gains over standard panel data estimators. 
With a data-driven group structure, the BGRE estimators exhibit clustering accuracy comparable to that of the Kmeans algorithm and outperform a two-step Bayesian grouped estimator whose group structure relies on Kmeans. In the empirical analysis, we apply our method to forecast the investment rate across a broad range of firms and illustrate that the estimated latent group structure improves forecasts relative to standard panel data estimators.",Forecasting with Bayesian Grouped Random Effects in Panel Data,2020-07-05 22:48:27,Boyuan Zhang,"http://arxiv.org/abs/2007.02435v8, http://arxiv.org/pdf/2007.02435v8",econ.EM 29170,em,"We develop a Stata command xthenreg to implement the first-differenced GMM estimation of the dynamic panel threshold model, which Seo and Shin (2016, Journal of Econometrics 195: 169-186) have proposed. Furthermore, we derive the asymptotic variance formula for a kink-constrained GMM estimator of the dynamic threshold model and include an estimation algorithm. We also propose a fast bootstrap algorithm to implement the bootstrap for the linearity test. The use of the command is illustrated through a Monte Carlo simulation and an economic application.",Estimation of Dynamic Panel Threshold Model using Stata,2019-02-27 06:19:33,"Myung Hwan Seo, Sueyoul Kim, Young-Joo Kim","http://dx.doi.org/10.1177/1536867X19874243, http://arxiv.org/abs/1902.10318v1, http://arxiv.org/pdf/1902.10318v1",econ.EM 29040,em,"This paper presents a novel estimator of orthogonal GARCH models, which combines (eigenvalue and -vector) targeting estimation with stepwise (univariate) estimation. We denote this the spectral targeting estimator. This two-step estimator is consistent under finite second order moments, while asymptotic normality holds under finite fourth order moments. The estimator is especially well suited for modelling larger portfolios: we compare the empirical performance of the spectral targeting estimator to that of the quasi maximum likelihood estimator for five portfolios of 25 assets. The spectral targeting estimator dominates in terms of computational complexity, being up to 57 times faster in estimation, while both estimators produce similar out-of-sample forecasts, indicating that the spectral targeting estimator is well suited for high-dimensional empirical applications.",Spectral Targeting Estimation of $λ$-GARCH models,2020-07-06 11:53:59,Simon Hetland,"http://arxiv.org/abs/2007.02588v1, http://arxiv.org/pdf/2007.02588v1",econ.EM 29041,em,"We study the effects of counterfactual teacher-to-classroom assignments on average student achievement in elementary and middle schools in the US. We use the Measures of Effective Teaching (MET) experiment to semiparametrically identify the average reallocation effects (AREs) of such assignments. Our findings suggest that changes in within-district teacher assignments could have appreciable effects on student achievement. Unlike policies which require hiring additional teachers (e.g., class-size reduction measures), or those aimed at changing the stock of teachers (e.g., VAM-guided teacher tenure policies), alternative teacher-to-classroom assignments are resource neutral; they raise student achievement through a more efficient deployment of existing teachers.",Teacher-to-classroom assignment and student achievement,2020-07-06 14:20:59,"Bryan S.
Graham, Geert Ridder, Petra Thiemann, Gema Zamarro","http://arxiv.org/abs/2007.02653v2, http://arxiv.org/pdf/2007.02653v2",econ.EM 29042,em,"This paper studies optimal decision rules, including estimators and tests, for weakly identified GMM models. We derive the limit experiment for weakly identified GMM, and propose a theoretically-motivated class of priors which give rise to quasi-Bayes decision rules as a limiting case. Together with results in the previous literature, this establishes desirable properties for the quasi-Bayes approach regardless of model identification status, and we recommend quasi-Bayes for settings where identification is a concern. We further propose weighted average power-optimal identification-robust frequentist tests and confidence sets, and prove a Bernstein-von Mises-type result for the quasi-Bayes posterior under weak identification.",Optimal Decision Rules for Weak GMM,2020-07-08 14:48:10,"Isaiah Andrews, Anna Mikusheva","http://arxiv.org/abs/2007.04050v7, http://arxiv.org/pdf/2007.04050v7",econ.EM 29043,em,"In this paper, we test the contribution of foreign management to firms' competitiveness. We use a novel dataset on the careers of 165,084 managers employed by 13,106 companies in the United Kingdom in the period 2009-2017. We find that domestic manufacturing firms become, on average, between 7% and 12% more productive after hiring the first foreign managers, whereas foreign-owned firms register no significant improvement. In particular, we test that previous industry-specific experience is the primary driver of productivity gains in domestic firms (15.6%), in a way that allows the latter to catch up with foreign-owned firms. Managers from the European Union are highly valuable, as they represent about half of the recruits in our data. Our identification strategy combines matching techniques, difference-in-differences, and pre-recruitment trends to challenge reverse causality. Results are robust to placebo tests and to different estimators of Total Factor Productivity. Finally, we argue that upcoming limits to the mobility of foreign talents after the Brexit event can hamper the allocation of productive managerial resources.",Talents from Abroad. Foreign Managers and Productivity in the United Kingdom,2020-07-08 15:07:13,"Dimitrios Exadaktylos, Massimo Riccaboni, Armando Rungi","http://arxiv.org/abs/2007.04055v1, http://arxiv.org/pdf/2007.04055v1",econ.EM 29044,em,"We study treatment-effect estimation using panel data. The treatment may be non-binary, non-absorbing, and the outcome may be affected by treatment lags. We make a parallel-trends assumption, and propose event-study estimators of the effect of being exposed to a weakly higher treatment dose for $\ell$ periods. We also propose normalized estimators that estimate a weighted average of the effects of the current treatment and its lags. We also analyze commonly-used two-way-fixed-effects regressions. Unlike our estimators, they can be biased in the presence of heterogeneous treatment effects. A local-projection version of those regressions is biased even with homogeneous effects.",Difference-in-Differences Estimators of Intertemporal Treatment Effects,2020-07-08 20:01:22,"Clément de Chaisemartin, Xavier D'Haultfoeuille","http://arxiv.org/abs/2007.04267v12, http://arxiv.org/pdf/2007.04267v12",econ.EM 29045,em,"This paper develops an empirical balancing approach for the estimation of treatment effects under two-sided noncompliance using a binary conditionally independent instrumental variable.
The method weights both treatment and outcome information with inverse probabilities to produce exact finite sample balance across instrument-level groups. It is free of functional form assumptions on the outcome or the treatment selection step. By tailoring the loss function for the instrument propensity scores, the resulting treatment effect estimates exhibit both low bias and a reduced variance in finite samples compared to conventional inverse probability weighting methods. The estimator is automatically weight normalized and has similar bias properties compared to conventional two-stage least squares estimation under constant causal effects for the compliers. We provide conditions for asymptotic normality and semiparametric efficiency and demonstrate how to utilize additional information about the treatment selection step for bias reduction in finite samples. The method can be easily combined with regularization or other statistical learning approaches to deal with a high-dimensional set of observed confounding variables. Monte Carlo simulations suggest that the theoretical advantages translate well to finite samples. The method is illustrated in an empirical example.",Efficient Covariate Balancing for the Local Average Treatment Effect,2020-07-08 21:04:46,Phillip Heiler,"http://arxiv.org/abs/2007.04346v1, http://arxiv.org/pdf/2007.04346v1",econ.EM 29053,em,"This paper considers estimation and inference for heterogeneous counterfactual effects with high-dimensional data. We propose a novel robust score for debiased estimation of the unconditional quantile regression (Firpo, Fortin, and Lemieux, 2009) as a measure of heterogeneous counterfactual marginal effects. We propose a multiplier bootstrap inference and develop asymptotic theories to guarantee size control in large samples. Simulation studies support our theories. Applying the proposed method to Job Corps survey data, we find that a policy which counterfactually extends the duration of exposures to the Job Corps training program will be effective especially for the targeted subpopulations of lower potential wage earners.",Unconditional Quantile Regression with High Dimensional Data,2020-07-27 19:13:41,"Yuya Sasaki, Takuya Ura, Yichong Zhang","http://arxiv.org/abs/2007.13659v4, http://arxiv.org/pdf/2007.13659v4",econ.EM 29046,em,"This paper analyzes a semiparametric model of network formation in the presence of unobserved agent-specific heterogeneity. The objective is to identify and estimate the preference parameters associated with homophily on observed attributes when the distributions of the unobserved factors are not parametrically specified. This paper offers two main contributions to the literature on network formation. First, it establishes a new point identification result for the vector of parameters that relies on the existence of a special regressor. The identification proof is constructive and characterizes a closed-form expression for the parameter of interest. Second, it introduces a simple two-step semiparametric estimator for the vector of parameters with a first-step kernel estimator. The estimator is computationally tractable and can be applied to both dense and sparse networks. Moreover, I show that the estimator is consistent and has a limiting normal distribution as the number of individuals in the network increases.
Monte Carlo experiments demonstrate that the estimator performs well in finite samples and in networks with different levels of sparsity.",A Semiparametric Network Formation Model with Unobserved Linear Heterogeneity,2020-07-10 17:09:41,Luis E. Candelaria,"http://arxiv.org/abs/2007.05403v2, http://arxiv.org/pdf/2007.05403v2",econ.EM 29047,em,"This paper characterises dynamic linkages arising from shocks with heterogeneous degrees of persistence. Using frequency domain techniques, we introduce measures that identify smoothly varying links of a transitory and persistent nature. Our approach allows us to test for statistical differences in such dynamic links. We document substantial differences in transitory and persistent linkages among US financial industry volatilities, argue that they track heterogeneously persistent sources of systemic risk, and thus may serve as a useful tool for market participants.",Persistence in Financial Connectedness and Systemic Risk,2020-07-14 18:45:33,"Jozef Barunik, Michael Ellington","http://arxiv.org/abs/2007.07842v4, http://arxiv.org/pdf/2007.07842v4",econ.EM 29048,em,"This paper studies the latent index representation of the conditional LATE model, making explicit the role of covariates in treatment selection. We find that if the directions of the monotonicity condition are the same across all values of the conditioning covariate, which is often assumed in the literature, then the treatment choice equation has to satisfy a separability condition between the instrument and the covariate. This global representation result establishes testable restrictions imposed on the way covariates enter the treatment choice equation. We later extend the representation theorem to incorporate multiple ordered levels of treatment.",Global Representation of the Conditional LATE Model: A Separability Result,2020-07-16 07:30:59,"Yu-Chang Chen, Haitian Xie","http://dx.doi.org/10.1111/obes.12476, http://arxiv.org/abs/2007.08106v3, http://arxiv.org/pdf/2007.08106v3",econ.EM 29049,em,"I devise a novel approach to evaluate the effectiveness of fiscal policy in the short run with multi-category treatment effects and inverse probability weighting based on the potential outcome framework. This study's main contribution to the literature is the proposed modified conditional independence assumption to improve the evaluation of fiscal policy. Using this approach, I analyze the effects of government spending on the US economy from 1992 to 2019. The empirical study indicates that large fiscal contraction generates a negative effect on the economic growth rate, and small and large fiscal expansions realize a positive effect. However, these effects are not significant in the traditional multiple regression approach. I conclude that this new approach significantly improves the evaluation of fiscal policy.",Government spending and multi-category treatment effects:The modified conditional independence assumption,2020-07-16 18:16:35,Koiti Yano,"http://arxiv.org/abs/2007.08396v3, http://arxiv.org/pdf/2007.08396v3",econ.EM 29050,em,"We propose using a permutation test to detect discontinuities in an underlying economic model at a known cutoff point. Relative to the existing literature, we show that this test is well suited for event studies based on time-series data. The test statistic measures the distance between the empirical distribution functions of observed data in two local subsamples on the two sides of the cutoff. Critical values are computed via a standard permutation algorithm. 
Under a high-level condition that the observed data can be coupled by a collection of conditionally independent variables, we establish the asymptotic validity of the permutation test, allowing the sizes of the local subsamples to either be fixed or grow to infinity. In the latter case, we also establish that the permutation test is consistent. We demonstrate that our high-level condition can be verified in a broad range of problems in the infill asymptotic time-series setting, which justifies using the permutation test to detect jumps in economic variables such as volatility, trading activity, and liquidity. These potential applications are illustrated in an empirical case study for selected FOMC announcements during the ongoing COVID-19 pandemic.",Permutation-based tests for discontinuities in event studies,2020-07-20 05:12:52,"Federico A. Bugni, Jia Li, Qiyuan Li","http://arxiv.org/abs/2007.09837v4, http://arxiv.org/pdf/2007.09837v4",econ.EM 29051,em,"Mean, median, and mode are three essential measures of the centrality of probability distributions. In program evaluation, the average treatment effect (mean) and the quantile treatment effect (median) have been intensively studied in the past decades. The mode treatment effect, however, has long been neglected in program evaluation. This paper fills the gap by discussing both the estimation and inference of the mode treatment effect. I propose both traditional kernel and machine learning methods to estimate the mode treatment effect. I also derive the asymptotic properties of the proposed estimators and find that both estimators are asymptotically normal, but with a rate of convergence slower than the regular $\sqrt{N}$ rate, which is different from the rates of the classical average and quantile treatment effect estimators.",The Mode Treatment Effect,2020-07-22 21:05:56,Neng-Chieh Chang,"http://arxiv.org/abs/2007.11606v1, http://arxiv.org/pdf/2007.11606v1",econ.EM 29052,em,"The multinomial probit model is a popular tool for analyzing choice behaviour as it allows for correlation between choice alternatives. Because current model specifications employ a full covariance matrix of the latent utilities for the choice alternatives, they are not scalable to a large number of choice alternatives. This paper proposes a factor structure on the covariance matrix, which makes the model scalable to large choice sets. The main challenge in estimating this structure is that the model parameters require identifying restrictions. We identify the parameters by a trace-restriction on the covariance matrix, which is imposed through a reparametrization of the factor structure. We specify interpretable prior distributions on the model parameters and develop an MCMC sampler for parameter estimation. The proposed approach significantly improves performance in large choice sets relative to existing multinomial probit specifications. Applications to purchase data show the economic importance of including a large number of choice alternatives in consumer choice analysis.",Scalable Bayesian estimation in the multinomial probit model,2020-07-27 02:38:14,"Ruben Loaiza-Maya, Didier Nibbering","http://arxiv.org/abs/2007.13247v2, http://arxiv.org/pdf/2007.13247v2",econ.EM 29054,em,"Applied macroeconomists often compute confidence intervals for impulse responses using local projections, i.e., direct linear regressions of future outcomes on current covariates.
This paper proves that local projection inference robustly handles two issues that commonly arise in applications: highly persistent data and the estimation of impulse responses at long horizons. We consider local projections that control for lags of the variables in the regression. We show that lag-augmented local projections with normal critical values are asymptotically valid uniformly over (i) both stationary and non-stationary data, and also over (ii) a wide range of response horizons. Moreover, lag augmentation obviates the need to correct standard errors for serial correlation in the regression residuals. Hence, local projection inference is arguably both simpler than previously thought and more robust than standard autoregressive inference, whose validity is known to depend sensitively on the persistence of the data and on the length of the horizon.",Local Projection Inference is Simpler and More Robust Than You Think,2020-07-28 01:03:23,"José Luis Montiel Olea, Mikkel Plagborg-Møller","http://dx.doi.org/10.3982/ECTA18756, http://arxiv.org/abs/2007.13888v3, http://arxiv.org/pdf/2007.13888v3",econ.EM 29055,em,"Commonly used methods of production function and markup estimation assume that a firm's output quantity can be observed as data, but typical datasets contain only revenue, not output quantity. We examine the nonparametric identification of production function and markup from revenue data when a firm faces a general nonparametric demand function under imperfect competition. Under standard assumptions, we provide the constructive nonparametric identification of various firm-level objects: gross production function, total factor productivity, price markups over marginal costs, output prices, output quantities, a demand system, and a representative consumer's utility function.","Nonparametric Identification of Production Function, Total Factor Productivity, and Markup from Revenue Data",2020-10-31 02:34:40,"Hiroyuki Kasahara, Yoichi Sugita","http://arxiv.org/abs/2011.00143v1, http://arxiv.org/pdf/2011.00143v1",econ.EM 29056,em,"Macroeconomists increasingly use external sources of exogenous variation for causal inference. However, unless such external instruments (proxies) capture the underlying shock without measurement error, existing methods are silent on the importance of that shock for macroeconomic fluctuations. We show that, in a general moving average model with external instruments, variance decompositions for the instrumented shock are interval-identified, with informative bounds. Various additional restrictions guarantee point identification of both variance and historical decompositions. Unlike SVAR analysis, our methods do not require invertibility. Applied to U.S. data, they give a tight upper bound on the importance of monetary shocks for inflation dynamics.",Instrumental Variable Identification of Dynamic Variance Decompositions,2020-11-03 02:32:44,"Mikkel Plagborg-Møller, Christian K. Wolf","http://arxiv.org/abs/2011.01380v2, http://arxiv.org/pdf/2011.01380v2",econ.EM 29057,em,"Forecasters often use common information and hence make common mistakes. We propose a new approach, Factor Graphical Model (FGM), to forecast combinations that separates idiosyncratic forecast errors from the common errors. FGM exploits the factor structure of forecast errors and the sparsity of the precision matrix of the idiosyncratic errors.
We prove the consistency of forecast combination weights and mean squared forecast error estimated using FGM, supporting the results with extensive simulations. Empirical applications to forecasting macroeconomic series show that forecast combination using FGM outperforms combined forecasts using equal weights and graphical models without incorporating the factor structure of forecast errors.",Learning from Forecast Errors: A New Approach to Forecast Combinations,2020-11-04 03:16:16,"Tae-Hwy Lee, Ekaterina Seregina","http://arxiv.org/abs/2011.02077v2, http://arxiv.org/pdf/2011.02077v2",econ.EM 29058,em,"We use a decision-theoretic framework to study the problem of forecasting discrete outcomes when the forecaster is unable to discriminate among a set of plausible forecast distributions because of partial identification or concerns about model misspecification or structural breaks. We derive ""robust"" forecasts which minimize maximum risk or regret over the set of forecast distributions. We show that for a large class of models including semiparametric panel data models for dynamic discrete choice, the robust forecasts depend in a natural way on a small number of convex optimization problems which can be simplified using duality methods. Finally, we derive ""efficient robust"" forecasts to deal with the problem of first having to estimate the set of forecast distributions and develop a suitable asymptotic efficiency theory. Forecasts obtained by replacing nuisance parameters that characterize the set of forecast distributions with efficient first-stage estimators can be strictly dominated by our efficient robust forecasts.",Robust Forecasting,2020-11-06 04:17:22,"Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide","http://arxiv.org/abs/2011.03153v4, http://arxiv.org/pdf/2011.03153v4",econ.EM 29059,em,"Following in the footsteps of the literature on empirical welfare maximization, this paper contributes by stressing the policymaker's perspective via a practical illustration of an optimal policy assignment problem. More specifically, by focusing on the class of threshold-based policies, we first set up the theoretical underpinnings of the policymaker selection problem, and then offer a practical solution to this problem via an empirical illustration using the popular LaLonde (1986) training program dataset. The paper proposes an implementation protocol for the optimal solution that is straightforward to apply and easy to program with standard statistical software.",Optimal Policy Learning: From Theory to Practice,2020-11-10 12:25:33,Giovanni Cerulli,"http://arxiv.org/abs/2011.04993v1, http://arxiv.org/pdf/2011.04993v1",econ.EM 29060,em,"This paper studies identification of the effect of a mis-classified, binary, endogenous regressor when a discrete-valued instrumental variable is available. We begin by showing that the only existing point identification result for this model is incorrect. We go on to derive the sharp identified set under mean independence assumptions for the instrument and measurement error. The resulting bounds are novel and informative, but fail to point identify the effect of interest. This motivates us to consider alternative and slightly stronger assumptions: we show that adding second and third moment independence assumptions suffices to identify the model.","Identifying the effect of a mis-classified, binary, endogenous regressor",2020-11-14 14:35:13,"Francis J.
DiTraglia, Camilo Garcia-Jimeno","http://dx.doi.org/10.1016/j.jeconom.2019.01.007, http://arxiv.org/abs/2011.07272v1, http://arxiv.org/pdf/2011.07272v1",econ.EM 29848,em,"This study considers the treatment choice problem when outcome variables are binary. We focus on statistical treatment rules that plug in fitted values based on nonparametric kernel regression and show that optimizing two parameters enables the calculation of the maximum regret. Using this result, we propose a novel bandwidth selection method based on the minimax regret criterion. Finally, we perform a numerical analysis to compare the optimal bandwidth choices for the binary and normally distributed outcomes.",Bandwidth Selection for Treatment Choice with Binary Outcomes,2023-08-28 10:46:05,Takuya Ishihara,"http://arxiv.org/abs/2308.14375v2, http://arxiv.org/pdf/2308.14375v2",econ.EM 29061,em,"To estimate causal effects from observational data, an applied researcher must impose beliefs. The instrumental variables exclusion restriction, for example, represents the belief that the instrument has no direct effect on the outcome of interest. Yet beliefs about instrument validity do not exist in isolation. Applied researchers often discuss the likely direction of selection and the potential for measurement error in their articles but lack formal tools for incorporating this information into their analyses. Failing to use all relevant information not only leaves money on the table; it also runs the risk of leading to a contradiction in which one holds mutually incompatible beliefs about the problem at hand. To address these issues, we first characterize the joint restrictions relating instrument invalidity, treatment endogeneity, and non-differential measurement error in a workhorse linear model, showing how beliefs over these three dimensions are mutually constrained by each other and the data. Using this information, we propose a Bayesian framework to help researchers elicit their beliefs, incorporate them into estimation, and ensure their mutual coherence. We conclude by illustrating our framework in a number of examples drawn from the empirical microeconomics literature.","A Framework for Eliciting, Incorporating, and Disciplining Identification Beliefs in Linear Models",2020-11-14 14:43:44,"Francis J. DiTraglia, Camilo Garcia-Jimeno","http://dx.doi.org/10.1080/07350015.2020.1753528, http://arxiv.org/abs/2011.07276v1, http://arxiv.org/pdf/2011.07276v1",econ.EM 29062,em,"In this paper we propose a semi-parametric Bayesian Generalized Least Squares estimator. In a generic setting where each error is a vector, the parametric Generalized Least Squares estimator maintains the assumption that each error vector has the same distributional parameters. In reality, however, errors are likely to be heterogeneous regarding their distributions. To cope with such heterogeneity, a Dirichlet process prior is introduced for the distributional parameters of the errors, leading to the error distribution being a mixture of a variable number of normal distributions. Our method lets the number of normal components be data driven. Semi-parametric Bayesian estimators for two specific cases are then presented: the Seemingly Unrelated Regression for equation systems and the Random Effects Model for panel data. We design a series of simulation experiments to explore the performance of our estimators.
The results demonstrate that our estimators obtain smaller posterior standard deviations and mean squared errors than the Bayesian estimators using a parametric mixture of normal distributions or a normal distribution. We then apply our semi-parametric Bayesian estimators for equation systems and panel data models to empirical data.",A Semi-Parametric Bayesian Generalized Least Squares Estimator,2020-11-20 10:50:15,"Ruochen Wu, Melvyn Weeks","http://arxiv.org/abs/2011.10252v2, http://arxiv.org/pdf/2011.10252v2",econ.EM 29063,em,"This paper proposes a new class of M-estimators that double weight for the twin problems of nonrandom treatment assignment and missing outcomes, both of which are common issues in the treatment effects literature. The proposed class is characterized by a `robustness' property, which makes it resilient to parametric misspecification in either a conditional model of interest (for example, mean or quantile function) or the two weighting functions. As leading applications, the paper discusses estimation of two specific causal parameters: average and quantile treatment effects (ATE, QTEs), which can be expressed as functions of the doubly weighted estimator, under misspecification of the framework's parametric components. With respect to the ATE, this paper shows that the proposed estimator is doubly robust even in the presence of missing outcomes. Finally, to demonstrate the estimator's viability in empirical settings, it is applied to Calonico and Smith (2017)'s reconstructed sample from the National Supported Work training program.",Doubly weighted M-estimation for nonrandom assignment and missing outcomes,2020-11-23 18:48:39,Akanksha Negi,"http://arxiv.org/abs/2011.11485v1, http://arxiv.org/pdf/2011.11485v1",econ.EM 29064,em,"This paper develops a first-stage linear regression representation for the instrumental variables (IV) quantile regression (QR) model. The quantile first-stage is analogous to the least squares case, i.e., a linear projection of the endogenous variables on the instruments and other exogenous covariates, with the difference that the QR case is a weighted projection. The weights are given by the conditional density function of the innovation term in the QR structural model, conditional on the endogenous and exogenous covariates, and the instruments as well, at a given quantile. We also show that the required Jacobian identification conditions for IVQR models are embedded in the quantile first-stage. We then suggest inference procedures to evaluate the adequacy of instruments by evaluating their statistical significance using the first-stage result. The test is developed in an over-identification context, since consistent estimation of the weights for implementation of the first-stage requires at least one valid instrument to be available. Monte Carlo experiments provide numerical evidence that the proposed tests work as expected in terms of empirical size and power in finite samples. An empirical application illustrates that checking for the statistical significance of the instruments at different quantiles is important. The proposed procedures may be especially useful in QR since the instruments may be relevant at some quantiles but not at others.",A first-stage representation for instrumental variables quantile regression,2021-02-02 01:26:54,"Javier Alejo, Antonio F. Galvao, Gabriel Montes-Rojas","http://arxiv.org/abs/2102.01212v4, http://arxiv.org/pdf/2102.01212v4",econ.EM 29065,em,"How much do individuals contribute to team output?
I propose an econometric framework to quantify individual contributions when only the output of their teams is observed. The identification strategy relies on following individuals who work in different teams over time. I consider two production technologies. For a production function that is additive in worker inputs, I propose a regression estimator and show how to obtain unbiased estimates of variance components that measure the contributions of heterogeneity and sorting. To estimate nonlinear models with complementarity, I propose a mixture approach under the assumption that individual types are discrete, and rely on a mean-field variational approximation for estimation. To illustrate the methods, I estimate the impact of economists on their research output, and the contributions of inventors to the quality of their patents.","Teams: Heterogeneity, Sorting, and Complementarity",2021-02-03 02:52:12,Stephane Bonhomme,"http://arxiv.org/abs/2102.01802v1, http://arxiv.org/pdf/2102.01802v1",econ.EM 29160,em,"This paper studies the joint inference on conditional volatility parameters and the innovation moments by means of bootstrap to test for the existence of moments for GARCH(p,q) processes. We propose a residual bootstrap to mimic the joint distribution of the quasi-maximum likelihood estimators and the empirical moments of the residuals and also prove its validity. A bootstrap-based test for the existence of moments is proposed, which provides asymptotically correctly-sized tests without losing its consistency property. It is simple to implement and extends to other GARCH-type settings. A simulation study demonstrates the test's size and power properties in finite samples and an empirical application illustrates the testing approach.",A Bootstrap Test for the Existence of Moments for GARCH Processes,2019-02-05 20:32:20,Alexander Heinemann,"http://arxiv.org/abs/1902.01808v3, http://arxiv.org/pdf/1902.01808v3",econ.EM 29066,em,"We study discrete panel data methods where unobserved heterogeneity is revealed in a first step, in environments where population heterogeneity is not discrete. We focus on two-step grouped fixed-effects (GFE) estimators, where individuals are first classified into groups using kmeans clustering, and the model is then estimated allowing for group-specific heterogeneity. Our framework relies on two key properties: heterogeneity is a function - possibly nonlinear and time-varying - of a low-dimensional continuous latent type, and informative moments are available for classification. We illustrate the method in a model of wages and labor market participation, and in a probit model with time-varying heterogeneity. We derive asymptotic expansions of two-step GFE estimators as the number of groups grows with the two dimensions of the panel. We propose a data-driven rule for the number of groups, and discuss bias reduction and inference.",Discretizing Unobserved Heterogeneity,2021-02-03 19:03:19,Stéphane Bonhomme Thibaut Lamadon Elena Manresa,"http://arxiv.org/abs/2102.02124v1, http://arxiv.org/pdf/2102.02124v1",econ.EM 29067,em,"We present a class of one-to-one matching models with perfectly transferable utility. 
We discuss identification and inference in these separable models, and we show how their comparative statics are readily analyzed.",The Econometrics and Some Properties of Separable Matching Models,2021-02-04 14:55:10,"Alfred Galichon, Bernard Salanié","http://dx.doi.org/10.1257/aer.p20171113, http://arxiv.org/abs/2102.02564v1, http://arxiv.org/pdf/2102.02564v1",econ.EM 29068,em,"The notion of hypothetical bias (HB) constitutes, arguably, the most fundamental issue in relation to the use of hypothetical survey methods. Whether or to what extent choices of survey participants and subsequent inferred estimates translate to real-world settings continues to be debated. While HB has been extensively studied in the broader context of contingent valuation, it is much less understood in relation to choice experiments (CE). This paper reviews the empirical evidence for HB in CE in various fields of applied economics and presents an integrative framework for how HB relates to external validity. Results suggest mixed evidence on the prevalence, extent and direction of HB as well as considerable context and measurement dependency. While HB is found to be an undeniable issue when conducting CEs, the empirical evidence on HB does not render CEs unable to represent real-world preferences. While health-related choice experiments often find negligible degrees of HB, experiments in consumer behaviour and transport domains suggest that significant degrees of HB are ubiquitous. Assessments of bias in environmental valuation studies provide mixed evidence. Also, across these disciplines many studies display HB in their total willingness to pay estimates and opt-in rates but not in their hypothetical marginal rates of substitution (subject to scale correction). Further, recent findings in psychology and brain imaging studies suggest neurocognitive mechanisms underlying HB that may explain some of the discrepancies and unexpected findings in the mainstream CE literature. The review also observes how the variety of operational definitions of HB prohibits consistent measurement of HB in CE. The paper further identifies major sources of HB and possible moderating factors. Finally, it explains how HB represents one component of the wider concept of external validity.",Hypothetical bias in stated choice experiments: Part I. Integrative synthesis of empirical evidence and conceptualisation of external validity,2021-02-05 03:45:50,"Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar","http://dx.doi.org/10.1016/j.jocm.2021.100309, http://arxiv.org/abs/2102.02940v1, http://arxiv.org/pdf/2102.02940v1",econ.EM 29069,em,"This paper reviews methods of hypothetical bias (HB) mitigation in choice experiments (CEs). It presents a bibliometric analysis and summary of empirical evidence of their effectiveness. The paper follows the review of empirical evidence on the existence of HB presented in Part I of this study. While the number of CE studies has rapidly increased since 2010, the critical issue of HB has been studied in only a small fraction of CE studies. The present review includes both ex-ante and ex-post bias mitigation methods. Ex-ante bias mitigation methods include cheap talk, real talk, consequentiality scripts, solemn oath scripts, opt-out reminders, budget reminders, honesty priming, induced truth telling, indirect questioning, time to think and pivot designs. 
Ex-post methods include follow-up certainty calibration scales, respondent perceived consequentiality scales, and revealed-preference-assisted estimation. It is observed that the use of mitigation methods markedly varies across different sectors of applied economics. The existing empirical evidence points to their overall effectiveness in reducing HB, although there is some variation. The paper further discusses how each mitigation method can counter a certain subset of HB sources. Considering the prevalence of HB in CEs and the effectiveness of bias mitigation methods, it is recommended that implementation of at least one bias mitigation method (or a suitable combination where possible) becomes standard practice in conducting CEs. Mitigation method(s) suited to the particular application should be implemented to ensure that inferences and subsequent policy decisions are as free of HB as possible.",Hypothetical bias in stated choice experiments: Part II. Macro-scale analysis of literature and effectiveness of bias mitigation methods,2021-02-05 03:53:21,"Milad Haghani, Michiel C. J. Bliemer, John M. Rose, Harmen Oppewal, Emily Lancsar","http://dx.doi.org/10.1016/j.jocm.2021.100322, http://arxiv.org/abs/2102.02945v1, http://arxiv.org/pdf/2102.02945v1",econ.EM 29070,em,"We provide a geometric formulation of the problem of identification of the matching surplus function and we show how the estimation problem can be solved by the introduction of a generalized entropy function over the set of matchings.",Identification of Matching Complementarities: A Geometric Viewpoint,2021-02-07 21:31:54,Alfred Galichon,"http://dx.doi.org/10.1108/S0731-9053(2013)0000032005, http://arxiv.org/abs/2102.03875v1, http://arxiv.org/pdf/2102.03875v1",econ.EM 29071,em,"This paper studies inference in a randomized controlled trial (RCT) with covariate-adaptive randomization (CAR) and imperfect compliance of a binary treatment. In this context, we study inference on the LATE. As in Bugni et al. (2018,2019), CAR refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve ""balance"" within each stratum. In contrast to these papers, however, we allow participants of the RCT to endogenously decide to comply or not with the assigned treatment status. We study the properties of an estimator of the LATE derived from a ""fully saturated"" IV linear regression, i.e., a linear regression of the outcome on all indicators for all strata and their interaction with the treatment decision, with the latter instrumented with the treatment assignment. We show that the proposed LATE estimator is asymptotically normal, and we characterize its asymptotic variance in terms of primitives of the problem. We provide consistent estimators of the standard errors and asymptotically exact hypothesis tests. In the special case when the target proportion of units assigned to each treatment does not vary across strata, we can also consider two other estimators of the LATE, including the one based on the ""strata fixed effects"" IV linear regression, i.e., a linear regression of the outcome on indicators for all strata and the treatment decision, with the latter instrumented with the treatment assignment. Our characterization of the asymptotic variance of the LATE estimators allows us to understand the influence of the parameters of the RCT. We use this to propose strategies to minimize their asymptotic variance in a hypothetical RCT based on data from a pilot study. 
We illustrate the practical relevance of these results using a simulation study and an empirical application based on Dupas et al. (2018).",Inference under Covariate-Adaptive Randomization with Imperfect Compliance,2021-02-08 01:36:26,"Federico A. Bugni, Mengsi Gao","http://arxiv.org/abs/2102.03937v3, http://arxiv.org/pdf/2102.03937v3",econ.EM 29079,em,"Using results from convex analysis, we investigate a novel approach to identification and estimation of discrete choice models which we call the Mass Transport Approach (MTA). We show that the conditional choice probabilities and the choice-specific payoffs in these models are related in the sense of conjugate duality, and that the identification problem is a mass transport problem. Based on this, we propose a new two-step estimator for these models; interestingly, the first step of our estimator involves solving a linear program which is identical to the classic assignment (two-sided matching) game of Shapley and Shubik (1971). The application of convex-analytic tools to dynamic discrete choice models, and the connection with two-sided matching models, is new in the literature.",Duality in dynamic discrete-choice models,2021-02-08 18:50:03,"Khai Xiang Chiong, Alfred Galichon, Matt Shum","http://dx.doi.org/10.3982/QE436, http://arxiv.org/abs/2102.06076v2, http://arxiv.org/pdf/2102.06076v2",econ.EM 29072,em,"In a landmark contribution to the structural vector autoregression (SVAR) literature, Rubio-Ramirez, Waggoner, and Zha (2010, `Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference,' Review of Economic Studies) show a necessary and sufficient condition for equality restrictions to globally identify the structural parameters of an SVAR. The simplest form of the necessary and sufficient condition shown in Theorem 7 of Rubio-Ramirez et al. (2010) checks the number of zero restrictions and the ranks of particular matrices without requiring knowledge of the true value of the structural or reduced-form parameters. However, this note shows by counterexample that this condition is not sufficient for global identification. Analytical investigation of the counterexample clarifies why their sufficiency claim breaks down. The problem with the rank condition is that it allows for the possibility that restrictions are redundant, in the sense that one or more restrictions may be implied by other restrictions, in which case the implied restriction contains no identifying information. We derive a modified necessary and sufficient condition for SVAR global identification and clarify how it can be assessed in practice.",A note on global identification in structural vector autoregressions,2021-02-08 11:14:27,"Emanuele Bacchiocchi, Toru Kitagawa","http://arxiv.org/abs/2102.04048v2, http://arxiv.org/pdf/2102.04048v2",econ.EM 29073,em,"We propose an easily implementable test of the validity of a set of theoretical restrictions on the relationship between economic variables, which do not necessarily identify the data generating process. The restrictions can be derived from any model of interactions, allowing censoring and multiple equilibria. When the restrictions are parameterized, the test can be inverted to yield confidence regions for partially identified parameters, thereby complementing other proposals, primarily Chernozhukov et al. [Chernozhukov, V., Hong, H., Tamer, E., 2007. Estimation and confidence regions for parameter sets in econometric models. 
Econometrica 75, 1243-1285].",A test of non-identifying restrictions and confidence regions for partially identified parameters,2021-02-08 15:01:13,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1016/j.jeconom.2009.01.010, http://arxiv.org/abs/2102.04151v1, http://arxiv.org/pdf/2102.04151v1",econ.EM 29074,em,"A general framework is given to analyze the falsifiability of economic models based on a sample of their observable components. It is shown that, when the restrictions implied by the economic theory are insufficient to identify the unknown quantities of the structure, the duality of optimal transportation with zero-one cost function delivers interpretable and operational formulations of the hypothesis of specification correctness from which tests can be constructed to falsify the model.",Optimal transportation and the falsifiability of incompletely specified economic models,2021-02-08 15:25:46,"Ivar Ekeland, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1007/s00199-008-0432-y, http://arxiv.org/abs/2102.04162v2, http://arxiv.org/pdf/2102.04162v2",econ.EM 29075,em,"Despite their popularity, machine learning predictions are sensitive to potential unobserved predictors. This paper proposes a general algorithm that assesses how the omission of an unobserved variable with high explanatory power could affect the predictions of the model. Moreover, the algorithm extends the usage of machine learning from pointwise predictions to inference and sensitivity analysis. In the application, we show how the framework can be applied to data with inherent uncertainty, such as students' scores in a standardized assessment on financial literacy. First, using Bayesian Additive Regression Trees (BART), we predict students' financial literacy scores (FLS) for a subgroup of students with missing FLS. Then, we assess the sensitivity of predictions by comparing the predictions and performance of models with and without a highly explanatory synthetic predictor. We find no significant difference in the predictions and performances of the augmented (i.e., the model with the synthetic predictor) and the original model. This evidence sheds light on the stability of the predictive model used in the application. The proposed methodology can be used, above and beyond our motivating empirical example, in a wide range of machine learning applications in social and health sciences.",Assessing Sensitivity of Machine Learning Predictions. A Novel Toolbox with an Application to Financial Literacy,2021-02-08 20:42:10,"Falco J. Bargagli Stoffi, Kenneth De Beckker, Joana E. Maldonado, Kristof De Witte","http://arxiv.org/abs/2102.04382v1, http://arxiv.org/pdf/2102.04382v1",econ.EM 29076,em,"We propose a methodology for constructing confidence regions with partially identified models of general form. The region is obtained by inverting a test of internal consistency of the econometric structure. We develop a dilation bootstrap methodology to deal with sampling uncertainty without reference to the hypothesized economic structure. It requires bootstrapping the quantile process for univariate data and a novel generalization of the latter to higher dimensions. 
Once the dilation is chosen to control the confidence level, the unknown true distribution of the observed data can be replaced by the known empirical distribution and confidence regions can then be obtained as in Galichon and Henry (2011) and Beresteanu, Molchanov and Molinari (2011).",Dilation bootstrap,2021-02-08 17:13:37,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1016/j.jeconom.2013.07.001, http://arxiv.org/abs/2102.04457v1, http://arxiv.org/pdf/2102.04457v1",econ.EM 29077,em,"This article proposes a generalized notion of extreme multivariate dependence between two random vectors which relies on the extremality of the cross-covariance matrix between these two vectors. Using a partial ordering on the cross-covariance matrices, we also generalize the notion of positive upper dependence. We then propose a means to quantify the strength of the dependence between two given multivariate series and to increase this strength while preserving the marginal distributions. This allows for the design of stress-tests of the dependence between two sets of financial variables, which can be useful in portfolio management or derivatives pricing.",Extreme dependence for multivariate data,2021-02-08 17:57:13,"Damien Bosc, Alfred Galichon","http://dx.doi.org/10.1080/14697688.2014.886777, http://arxiv.org/abs/2102.04461v1, http://arxiv.org/pdf/2102.04461v1",econ.EM 29078,em,"Responding to the U.S. opioid crisis requires a holistic approach supported by evidence from linking and analyzing multiple data sources. This paper discusses how 20 available resources can be combined to answer pressing public health questions related to the crisis. It presents a network view based on U.S. geographical units and other standard concepts, crosswalked to communicate the coverage and interlinkage of these resources. These opioid-related datasets can be grouped by four themes: (1) drug prescriptions, (2) opioid related harms, (3) opioid treatment workforce, jobs, and training, and (4) drug policy. An interactive network visualization was created and is freely available online; it lets users explore key metadata, relevant scholarly works, and data interlinkages in support of informed decision making through data analysis.","Interactive Network Visualization of Opioid Crisis Related Data- Policy, Pharmaceutical, Training, and More",2021-02-10 20:51:48,"Olga Scrivner, Elizabeth McAvoy, Thuy Nguyen, Tenzin Choeden, Kosali Simon, Katy Börner","http://arxiv.org/abs/2102.05596v1, http://arxiv.org/pdf/2102.05596v1",econ.EM 29081,em,"We consider structural vector autoregressions subject to 'narrative restrictions', which are inequality restrictions on functions of the structural shocks in specific periods. These restrictions raise novel problems related to identification and inference, and there is currently no frequentist procedure for conducting inference in these models. We propose a solution that is valid from both Bayesian and frequentist perspectives by: 1) formalizing the identification problem under narrative restrictions; 2) correcting a feature of the existing (single-prior) Bayesian approach that can distort inference; 3) proposing a robust (multiple-prior) Bayesian approach that is useful for assessing and eliminating the posterior sensitivity that arises in these models due to the likelihood having flat regions; and 4) showing that the robust Bayesian approach has asymptotic frequentist validity. 
We illustrate our methods by estimating the effects of US monetary policy under a variety of narrative restrictions.",Identification and Inference Under Narrative Restrictions,2021-02-12 14:38:55,"Raffaella Giacomini, Toru Kitagawa, Matthew Read","http://arxiv.org/abs/2102.06456v1, http://arxiv.org/pdf/2102.06456v1",econ.EM 29082,em,"Weak instruments present a major setback to empirical work. This paper introduces an estimator that admits weak, uncorrelated, or mean-independent instruments that are non-independent of endogenous covariates. Relative to conventional instrumental variable methods, the proposed estimator weakens the relevance condition considerably without imposing a stronger exclusion restriction. Identification mainly rests on (1) a weak conditional median exclusion restriction imposed on pairwise differences in disturbances and (2) non-independence between covariates and instruments. Under mild conditions, the estimator is consistent and asymptotically normal. Monte Carlo experiments showcase an excellent performance of the estimator, and two empirical examples illustrate its practical utility.",A Distance Covariance-based Estimator,2021-02-14 00:55:09,"Emmanuel Selorm Tsyawo, Abdul-Nasah Soale","http://arxiv.org/abs/2102.07008v1, http://arxiv.org/pdf/2102.07008v1",econ.EM 29083,em,"This paper contributes to the literature on hedonic models in two ways. First, it makes use of Queyranne's reformulation of a hedonic model in the discrete case as a network flow problem in order to provide a proof of existence and integrality of a hedonic equilibrium and efficient computation of hedonic prices. Second, elaborating on entropic methods developed in Galichon and Salanié (2014), this paper proposes a new identification strategy for hedonic models in a single market. This methodology allows one to introduce heterogeneities in both consumers' and producers' attributes and to recover producers' profits and consumers' utilities based on the observation of production and consumption patterns and the set of hedonic prices.",Entropy methods for identifying hedonic models,2021-02-15 14:49:21,"Arnaud Dupuy, Alfred Galichon, Marc Henry","http://dx.doi.org/10.1007/s11579-014-0125-1, http://arxiv.org/abs/2102.07491v1, http://arxiv.org/pdf/2102.07491v1",econ.EM 29084,em,"Unlike other techniques of causality inference, the use of valid instrumental variables can deal with unobserved sources of variable errors, variable omissions, and sampling bias, and still arrive at consistent estimates of average treatment effects. The only problem is to find the valid instruments. Using the definition of Pearl (2009) of valid instrumental variables, a formal condition for validity can be stated for variables in generalized linear causal models. The condition can be applied in two different ways: as a tool for constructing valid instruments, or as a foundation for testing whether an instrument is valid. When perfectly valid instruments are not found, the squared bias of the IV-estimator induced by an imperfectly valid instrument -- estimated with bootstrapping -- can be added to its empirical variance in a mean-square-error-like reliability measure.",Constructing valid instrumental variables in generalized linear causal models from directed acyclic graphs,2021-02-16 13:09:15,Øyvind Hoveid,"http://arxiv.org/abs/2102.08056v1, http://arxiv.org/pdf/2102.08056v1",econ.EM 29085,em,"We propose a general framework for the specification testing of continuous treatment effect models. 
We assume a general residual function, which includes the average and quantile treatment effect models as special cases. The null models are identified under the unconfoundedness condition and contain a nonparametric weighting function. We propose a test statistic for the null model in which the weighting function is estimated by solving an expanding set of moment equations. We establish the asymptotic distributions of our test statistic under the null hypothesis and under fixed and local alternatives. The proposed test statistic is shown to be more efficient than that constructed from the true weighting function and can detect local alternatives deviated from the null models at the rate of $O(N^{-1/2})$. A simulation method is provided to approximate the null distribution of the test statistic. Monte-Carlo simulations show that our test exhibits a satisfactory finite-sample performance, and an application shows its practical value.",A Unified Framework for Specification Tests of Continuous Treatment Effect Models,2021-02-16 13:18:52,"Wei Huang, Oliver Linton, Zheng Zhang","http://arxiv.org/abs/2102.08063v2, http://arxiv.org/pdf/2102.08063v2",econ.EM 29086,em,"In light of the increasing interest to transform the fixed-route public transit (FRT) services into on-demand transit (ODT) services, there exists a strong need for a comprehensive evaluation of the effects of this shift on the users. Such an analysis can help the municipalities and service providers to design and operate more convenient, attractive, and sustainable transit solutions. To understand the user preferences, we developed three hybrid choice models: integrated choice and latent variable (ICLV), latent class (LC), and latent class integrated choice and latent variable (LC-ICLV) models. We used these models to analyze the public transit user's preferences in Belleville, Ontario, Canada. Hybrid choice models were estimated using a rich dataset that combined the actual level of service attributes obtained from Belleville's ODT service and self-reported usage behaviour obtained from a revealed preference survey of the ODT users. The latent class models divided the users into two groups with different travel behaviour and preferences. The results showed that the captive user's preference for ODT service was significantly affected by the number of unassigned trips, in-vehicle time, and main travel mode before the ODT service started. On the other hand, the non-captive user's service preference was significantly affected by the Time Sensitivity and the Online Service Satisfaction latent variables, as well as the performance of the ODT service and trip purpose. This study attaches importance to improving the reliability and performance of the ODT service and outlines directions for reducing operational costs by updating the required fleet size and assigning more vehicles for work-related trips.",On-Demand Transit User Preference Analysis using Hybrid Choice Models,2021-02-16 19:27:50,"Nael Alsaleh, Bilal Farooq, Yixue Zhang, Steven Farber","http://arxiv.org/abs/2102.08256v2, http://arxiv.org/pdf/2102.08256v2",econ.EM 29087,em,"This article discusses tests for nonlinear cointegration in the presence of variance breaks. We build on cointegration test approaches under heteroskedasticity (Cavaliere and Taylor, 2006, Journal of Time Series Analysis) and for nonlinearity (Choi and Saikkonen, 2010, Econometric Theory) to propose a bootstrap test and prove its consistency. 
A Monte Carlo study shows the approach to have good finite sample properties. We provide an empirical application to the environmental Kuznets curve (EKC), finding that the cointegration test provides little evidence for the EKC hypothesis. Additionally, we examine the nonlinear relation between US money and the interest rate, finding that our test does not reject the null of a smooth transition cointegrating relation.",Testing for Nonlinear Cointegration under Heteroskedasticity,2021-02-17 18:14:19,"Christoph Hanck, Till Massing","http://arxiv.org/abs/2102.08809v2, http://arxiv.org/pdf/2102.08809v2",econ.EM 29088,em,"This paper provides a user's guide to the general theory of approximate randomization tests developed in Canay, Romano, and Shaikh (2017) when specialized to linear regressions with clustered data. An important feature of the methodology is that it applies to settings in which the number of clusters is small -- even as small as five. We provide a step-by-step algorithmic description of how to implement the test and construct confidence intervals for the parameter of interest. In doing so, we additionally present three novel results concerning the methodology: we show that the method admits an equivalent implementation based on weighted scores; we show the test and confidence intervals are invariant to whether the test statistic is studentized or not; and we prove convexity of the confidence intervals for scalar parameters. We also articulate the main requirements underlying the test, emphasizing in particular common pitfalls that researchers may encounter. Finally, we illustrate the use of the methodology with two applications that further illuminate these points. The companion R and Stata packages facilitate the implementation of the methodology and the replication of the empirical exercises.",On the implementation of Approximate Randomization Tests in Linear Models with a Small Number of Clusters,2021-02-18 01:32:52,"Yong Cai, Ivan A. Canay, Deborah Kim, Azeem M. Shaikh","http://arxiv.org/abs/2102.09058v4, http://arxiv.org/pdf/2102.09058v4",econ.EM 29089,em,"We propose a novel structural estimation framework in which we train a surrogate of an economic model with deep neural networks. Our methodology alleviates the curse of dimensionality and speeds up the evaluation and parameter estimation by orders of magnitude, which significantly enhances one's ability to conduct analyses that require frequent parameter re-estimation. As an empirical application, we compare two popular option pricing models (the Heston and the Bates model with double-exponential jumps) against a non-parametric random forest model. 
We document that: a) the Bates model produces better out-of-sample pricing on average, but both structural models fail to outperform random forest for large areas of the volatility surface; b) random forest is more competitive at short horizons (e.g., 1-day), for short-dated options (with less than 7 days to maturity), and on days with poor liquidity; c) both structural models outperform random forest in out-of-sample delta hedging; d) the Heston model's relative performance has deteriorated significantly after the 2008 financial crisis.",Deep Structural Estimation: With an Application to Option Pricing,2021-02-18 11:15:47,"Hui Chen, Antoine Didisheim, Simon Scheidegger","http://arxiv.org/abs/2102.09209v1, http://arxiv.org/pdf/2102.09209v1",econ.EM 29090,em,"We propose a method for constructing confidence intervals that account for many forms of spatial correlation. The interval has the familiar `estimator plus and minus a standard error times a critical value' form, but we propose new methods for constructing the standard error and the critical value. The standard error is constructed using population principal components from a given `worst-case' spatial covariance model. The critical value is chosen to ensure coverage in a benchmark parametric model for the spatial correlations. The method is shown to control coverage in large samples whenever the spatial correlation is weak, i.e., with average pairwise correlations that vanish as the sample size gets large. We also provide results on correct coverage in a restricted but nonparametric class of strong spatial correlations, as well as on the efficiency of the method. In a design calibrated to match economic activity in U.S. states the method outperforms previous suggestions for spatially robust inference about the population mean.",Spatial Correlation Robust Inference,2021-02-18 17:04:43,"Ulrich K. Müller, Mark W. Watson","http://arxiv.org/abs/2102.09353v1, http://arxiv.org/pdf/2102.09353v1",econ.EM 29091,em,"This paper aims to provide reliable estimates for the COVID-19 contact rate of a Susceptible-Infected-Recovered (SIR) model. From observable data on confirmed, recovered, and deceased cases, a noisy measurement for the contact rate can be constructed. To filter out measurement errors and seasonality, a novel unobserved components (UC) model is set up. It specifies the log contact rate as a latent, fractionally integrated process of unknown integration order. The fractional specification reflects key characteristics of aggregate social behavior such as strong persistence and gradual adjustments to new information. A computationally simple modification of the Kalman filter is introduced and is termed the fractional filter. It allows to estimate UC models with richer long-run dynamics, and provides a closed-form expression for the prediction error of UC models. Based on the latter, a conditional-sum-of-squares (CSS) estimator for the model parameters is set up that is shown to be consistent and asymptotically normally distributed. The resulting contact rate estimates for several countries are well in line with the chronology of the pandemic, and allow to identify different contact regimes generated by policy interventions. 
As the fractional filter is shown to provide precise contact rate estimates at the end of the sample, it bears great potential for monitoring the pandemic in real time.",Monitoring the pandemic: A fractional filter for the COVID-19 contact rate,2021-02-19 20:55:45,Tobias Hartl,"http://arxiv.org/abs/2102.10067v1, http://arxiv.org/pdf/2102.10067v1",econ.EM 29092,em,"A novel approach to price indices, leading to an innovative solution in both a multi-period and a multilateral framework, is presented. The index turns out to be the generalized least squares solution of a regression model linking values and quantities of the commodities. The index reference basket, which is the union of the intersections of the baskets of all countries/periods taken in pairs, has broader coverage than extant indices. The properties of the index are investigated and updating formulas established. Applications to both real and simulated data provide evidence of the index's better performance in comparison with extant alternatives.",A Novel Multi-Period and Multilateral Price Index,2021-02-21 09:44:18,"Consuelo Rubina Nava, Maria Grazia Zoia","http://arxiv.org/abs/2102.10528v1, http://arxiv.org/pdf/2102.10528v1",econ.EM 29094,em,"Here, we have analysed a GARCH(1,1) model with the aim of fitting higher order moments for different companies' stock prices. When we assume a Gaussian conditional distribution, we fail to capture any empirical data when fitting the first three even moments of financial time series. We show instead that a double Gaussian conditional probability distribution better captures the higher order moments of the data. To demonstrate this point, we construct regions (phase diagrams), in the fourth and sixth order standardised moment space, where a GARCH(1,1) model can be used to fit these moments and compare them with the corresponding moments from empirical data for different sectors of the economy. We found that the ability of the GARCH model with a double Gaussian conditional distribution to fit higher order moments is dictated by the time window our data spans. We can only fit data collected within specific time window lengths and only with certain parameters of the conditional double Gaussian distribution. In order to incorporate the non-stationarity of financial series, we assume that the parameters of the GARCH model have time dependence.",Non-stationary GARCH modelling for fitting higher order moments of financial series within moving time windows,2021-02-23 14:05:23,"Luke De Clerk, Sergey Savel'ev","http://arxiv.org/abs/2102.11627v4, http://arxiv.org/pdf/2102.11627v4",econ.EM 29095,em,"We propose a computationally feasible way of deriving the identified features of models with multiple equilibria in pure or mixed strategies. It is shown that in the case of Shapley regular normal form games, the identified set is characterized by the inclusion of the true data distribution within the core of a Choquet capacity, which is interpreted as the generalized likelihood of the model. In turn, this inclusion is characterized by a finite set of inequalities, and efficient and easily implementable combinatorial methods are described to check them. In all normal form games, the identified set is characterized in terms of the value of a submodular or convex optimization program. Efficient algorithms are then given and compared to check inclusion of a parameter in this identified set. 
The latter are illustrated with family bargaining games and oligopoly entry games.",Set Identification in Models with Multiple Equilibria,2021-02-24 15:20:11,"Alfred Galichon, Marc Henry","http://dx.doi.org/10.1093/restud/rdr008, http://arxiv.org/abs/2102.12249v1, http://arxiv.org/pdf/2102.12249v1",econ.EM 29096,em,"We provide a test for the specification of a structural model without identifying assumptions. We show the equivalence of several natural formulations of correct specification, which we take as our null hypothesis. From a natural empirical version of the latter, we derive a Kolmogorov-Smirnov statistic for Choquet capacity functionals, which we use to construct our test. We derive the limiting distribution of our test statistic under the null, and show that our test is consistent against certain classes of alternatives. When the model is given in parametric form, the test can be inverted to yield confidence regions for the identified parameter set. The approach can be applied to the estimation of models with sample selection, censored observables and to games with multiple equilibria.",Inference in Incomplete Models,2021-02-24 15:39:52,"Alfred Galichon, Marc Henry","http://arxiv.org/abs/2102.12257v1, http://arxiv.org/pdf/2102.12257v1",econ.EM 29097,em,"This paper estimates the break point for large-dimensional factor models with a single structural break in factor loadings at a common unknown date. First, we propose a quasi-maximum likelihood (QML) estimator of the change point based on the second moments of factors, which are estimated by principal component analysis. We show that the QML estimator performs consistently when the covariance matrix of the pre- or post-break factor loading, or both, is singular. When the loading matrix undergoes a rotational type of change while the number of factors remains constant over time, the QML estimator incurs a stochastically bounded estimation error. In this case, we establish an asymptotic distribution of the QML estimator. The simulation results validate the feasibility of this estimator when used in finite samples. In addition, we demonstrate empirical applications of the proposed method by applying it to estimate the break points in a U.S. macroeconomic dataset and a stock return dataset.",Quasi-maximum likelihood estimation of break point in high-dimensional factor models,2021-02-25 06:43:18,"Jiangtao Duan, Jushan Bai, Xu Han","http://arxiv.org/abs/2102.12666v3, http://arxiv.org/pdf/2102.12666v3",econ.EM 29098,em,"We propose a new control function (CF) method to estimate a binary response model in a triangular system with multiple unobserved heterogeneities. The CFs are the expected values of the heterogeneity terms in the reduced form equations conditional on the histories of the endogenous and the exogenous variables. The method requires weaker restrictions compared to CF methods with similar imposed structures. If the support of endogenous regressors is large, average partial effects are point-identified even when instruments are discrete. Bounds are provided when the support assumption is violated. 
An application and Monte Carlo experiments compare several alternative methods with ours.",A Control Function Approach to Estimate Panel Data Binary Response Model,2021-02-25 18:26:41,Amaresh K Tiwari,"http://dx.doi.org/10.1080/07474938.2021.1983328, http://arxiv.org/abs/2102.12927v2, http://arxiv.org/pdf/2102.12927v2",econ.EM 29099,em,"This paper proposes an empirical method to implement the recentered influence function (RIF) regression of Firpo, Fortin and Lemieux (2009), a relevant method to study the effect of covariates on many statistics beyond the mean. In empirically relevant situations where the influence function is not available or difficult to compute, we suggest using the sensitivity curve (Tukey, 1977) as a feasible alternative. This may be computationally cumbersome when the sample size is large. The relevance of the proposed strategy derives from the fact that, under general conditions, the sensitivity curve converges in probability to the influence function. In order to save computational time we propose using a non-parametric cubic spline method for a random subsample and then interpolating to the rest of the cases where it was not computed. Monte Carlo simulations show good finite sample properties. We illustrate the proposed estimator with an application to the polarization index of Duclos, Esteban and Ray (2004).",RIF Regression via Sensitivity Curves,2021-12-02 20:24:43,"Javier Alejo, Gabriel Montes-Rojas, Walter Sosa-Escudero","http://arxiv.org/abs/2112.01435v1, http://arxiv.org/pdf/2112.01435v1",econ.EM 29119,em,"Startups have become in less than 50 years a major component of innovation and economic growth. An important feature of the startup phenomenon has been the wealth created through equity in startups to all stakeholders. These include the startup founders, the investors, and also the employees through the stock-option mechanism and universities through licenses of intellectual property. In the employee group, the allocation to important managers like the chief executive, vice-presidents and other officers, and independent board members is also analyzed. This report analyzes how equity was allocated in more than 400 startups, most of which had filed for an initial public offering. The author has the ambition of informing a general audience about best practice in equity split, in particular in Silicon Valley, the central place for startup innovation.",Equity in Startups,2017-11-02 12:33:44,Hervé Lebret,"http://arxiv.org/abs/1711.00661v1, http://arxiv.org/pdf/1711.00661v1",econ.EM 29100,em,"We study estimation of factor models in a fixed-T panel data setting and significantly relax the common correlated effects (CCE) assumptions pioneered by Pesaran (2006) and used in dozens of papers since. In the simplest case, we model the unobserved factors as functions of the cross-sectional averages of the explanatory variables and show that this is implied by Pesaran's assumptions when the number of factors does not exceed the number of explanatory variables. Our approach allows discrete explanatory variables and flexible functional forms in the covariates. Plus, it extends to a framework that easily incorporates general functions of cross-sectional moments, in addition to heterogeneous intercepts and time trends. Our proposed estimators include Pesaran's pooled correlated common effects (CCEP) estimator as a special case. 
We also show that in the presence of heterogeneous slopes our estimator is consistent under assumptions much weaker than those previously used. We derive the fixed-T asymptotic normality of a general estimator and show how to adjust for estimation of the population moments in the factor loading equation.",Simple Alternatives to the Common Correlated Effects Model,2021-12-02 21:37:52,"Nicholas L. Brown, Peter Schmidt, Jeffrey M. Wooldridge","http://dx.doi.org/10.13140/RG.2.2.12655.76969/1, http://arxiv.org/abs/2112.01486v1, http://arxiv.org/pdf/2112.01486v1",econ.EM 29101,em,"Until recently, there has been a consensus that clinicians should condition patient risk assessments on all observed patient covariates with predictive power. The broad idea is that knowing more about patients enables more accurate predictions of their health risks and, hence, better clinical decisions. This consensus has recently unraveled with respect to a specific covariate, namely race. There have been increasing calls for race-free risk assessment, arguing that using race to predict patient outcomes contributes to racial disparities and inequities in health care. Writers calling for race-free risk assessment have not studied how it would affect the quality of clinical decisions. Considering the matter from the patient-centered perspective of medical economics yields a disturbing conclusion: Race-free risk assessment would harm patients of all races.",Patient-Centered Appraisal of Race-Free Clinical Risk Assessment,2021-12-03 02:37:07,Charles F. Manski,"http://arxiv.org/abs/2112.01639v2, http://arxiv.org/pdf/2112.01639v2",econ.EM 29102,em,"We develop a non-parametric multivariate time series model that remains agnostic on the precise relationship between a (possibly) large set of macroeconomic time series and their lagged values. The main building block of our model is a Gaussian process prior on the functional relationship that determines the conditional mean of the model, hence the name of Gaussian process vector autoregression (GP-VAR). A flexible stochastic volatility specification is used to provide additional flexibility and control for heteroskedasticity. Markov chain Monte Carlo (MCMC) estimation is carried out through an efficient and scalable algorithm which can handle large models. The GP-VAR is illustrated by means of simulated data and in a forecasting exercise with US data. Moreover, we use the GP-VAR to analyze the effects of macroeconomic uncertainty, with a particular emphasis on time variation and asymmetries in the transmission mechanisms.",Gaussian Process Vector Autoregressions and Macroeconomic Uncertainty,2021-12-03 19:16:10,"Niko Hauzenberger, Florian Huber, Massimiliano Marcellino, Nico Petz","http://arxiv.org/abs/2112.01995v3, http://arxiv.org/pdf/2112.01995v3",econ.EM 29103,em,"Despite the widespread use of graphs in empirical research, little is known about readers' ability to process the statistical information they are meant to convey (""visual inference""). We study visual inference within the context of regression discontinuity (RD) designs by measuring how accurately readers identify discontinuities in graphs produced from data generating processes calibrated on 11 published papers from leading economics journals. First, we assess the effects of different graphical representation methods on visual inference using randomized experiments. 
We find that bin widths and fit lines have the largest impacts on whether participants correctly perceive the presence or absence of a discontinuity. Our experimental results allow us to make evidence-based recommendations to practitioners, and we suggest using small bins with no fit lines as a starting point to construct RD graphs. Second, we compare visual inference on graphs constructed using our preferred method with widely used econometric inference procedures. We find that visual inference achieves similar or lower type I error (false positive) rates and complements econometric inference.",Visual Inference and Graphical Representation in Regression Discontinuity Designs,2021-12-06 18:02:14,"Christina Korting, Carl Lieberman, Jordan Matsudaira, Zhuan Pei, Yi Shen","http://arxiv.org/abs/2112.03096v2, http://arxiv.org/pdf/2112.03096v2",econ.EM 29104,em,"The `paradox of progress' is an empirical regularity that associates more education with larger income inequality. Two driving and competing factors behind this phenomenon are the convexity of the `Mincer equation' (that links wages and education) and the heterogeneity in its returns, as captured by quantile regressions. We propose a joint least-squares and quantile regression statistical framework to derive a decomposition in order to evaluate the relative contribution of each explanation. The estimators are based on the `functional derivative' approach. We apply the proposed decomposition strategy to the case of Argentina 1992 to 2015.",A decomposition method to evaluate the `paradox of progress' with evidence for Argentina,2021-12-07 20:20:26,"Javier Alejo, Leonardo Gasparini, Gabriel Montes-Rojas, Walter Sosa-Escudero","http://arxiv.org/abs/2112.03836v1, http://arxiv.org/pdf/2112.03836v1",econ.EM 29105,em,"Linear regressions with period and group fixed effects are widely used to estimate policies' effects: 26 of the 100 most cited papers published by the American Economic Review from 2015 to 2019 estimate such regressions. It has recently been shown that those regressions may produce misleading estimates, if the policy's effect is heterogeneous between groups or over time, as is often the case. This survey reviews a fast-growing literature that documents this issue, and that proposes alternative estimators robust to heterogeneous effects. We use those alternative estimators to revisit Wolfers (2006).",Two-Way Fixed Effects and Differences-in-Differences with Heterogeneous Treatment Effects: A Survey,2021-12-08 23:14:26,"Clément de Chaisemartin, Xavier D'Haultfœuille","http://arxiv.org/abs/2112.04565v6, http://arxiv.org/pdf/2112.04565v6",econ.EM 29126,em,"Some empirical results are more likely to be published than others. Such selective publication leads to biased estimates and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. 
We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.",Identification of and correction for publication bias,2017-11-28 22:45:36,"Isaiah Andrews, Maximilian Kasy","http://arxiv.org/abs/1711.10527v1, http://arxiv.org/pdf/1711.10527v1",econ.EM 29106,em,"I suggest an enhancement of the procedure of Chiong, Hsieh, and Shum (2017) for calculating bounds on counterfactual demand in semiparametric discrete choice models. Their algorithm relies on a system of inequalities indexed by cycles of a large number $M$ of observed markets and hence seems to require computationally infeasible enumeration of all such cycles. I show that such enumeration is unnecessary because solving the ""fully efficient"" inequality system exploiting cycles of all possible lengths $K=1,\dots,M$ can be reduced to finding the length of the shortest path between every pair of vertices in a complete bidirected weighted graph on $M$ vertices. The latter problem can be solved using the Floyd--Warshall algorithm with computational complexity $O\left(M^3\right)$, which takes only seconds to run even for thousands of markets. Monte Carlo simulations illustrate the efficiency gain from using cycles of all lengths, which turns out to be positive, but small.","Efficient counterfactual estimation in semiparametric discrete choice models: a note on Chiong, Hsieh, and Shum (2017)",2021-12-09 03:49:56,Grigory Franguridi,"http://arxiv.org/abs/2112.04637v1, http://arxiv.org/pdf/2112.04637v1",econ.EM 29107,em,"This study addresses house price prediction model selection in Tehran City based on the area between the Lorenz curve (LC) and the concentration curve (CC) of predicted prices, using 206,556 observed transactions over the period from March 21, 2018, to February 19, 2021. Several methods, such as generalized linear models (GLM), recursive partitioning and regression trees (RPART), random forest (RF) regression models, and neural network (NN) models, were examined for house price prediction. We used a randomly chosen 90% of the data sample to estimate the parameters of the pricing models and the remaining 10% to test the accuracy of prediction. Results showed that the area between the LC and CC curves (known as the ABC criterion) of real and predicted prices in the test sample was smaller for the random forest regression model than for the other models under study. The comparison of the calculated ABC criteria leads us to conclude that nonlinear regression models such as RF regression models give accurate predictions of house prices in Tehran City.",Housing Price Prediction Model Selection Based on Lorenz and Concentration Curves: Empirical Evidence from Tehran Housing Market,2021-12-12 12:44:28,Mohammad Mirbagherijam,"http://arxiv.org/abs/2112.06192v1, http://arxiv.org/pdf/2112.06192v1",econ.EM 29108,em,"A new Stata command, ldvqreg, is developed to estimate quantile regression models for the cases of censored (with lower and/or upper censoring) and binary dependent variables. The estimators are implemented using a smoothed version of the quantile regression objective function. Simulation exercises show that it correctly estimates the parameters and that it should be implemented instead of the available quantile regression methods when censoring is present. 
An empirical application to women's labor supply in Uruguay is considered.",Quantile Regression under Limited Dependent Variable,2021-12-13 20:33:54,"Javier Alejo, Gabriel Montes-Rojas","http://arxiv.org/abs/2112.06822v1, http://arxiv.org/pdf/2112.06822v1",econ.EM 29109,em,"This article presents identification results for the marginal treatment effect (MTE) when there is sample selection. We show that the MTE is partially identified for individuals who are always observed regardless of treatment, and derive uniformly sharp bounds on this parameter under three increasingly restrictive sets of assumptions. The first result imposes standard MTE assumptions with an unrestricted sample selection mechanism. The second set of conditions imposes monotonicity of the sample selection variable with respect to treatment, considerably shrinking the identified set. Finally, we incorporate a stochastic dominance assumption which tightens the lower bound for the MTE. Our analysis extends to discrete instruments. The results rely on a mixture reformulation of the problem where the mixture weights are identified, extending Lee's (2009) trimming procedure to the MTE context. We propose estimators for the bounds derived and use data made available by Deb, Munking and Trivedi (2006) to empirically illustrate the usefulness of our approach.",Identifying Marginal Treatment Effects in the Presence of Sample Selection,2021-12-14 00:08:49,"Otávio Bartalotti, Désiré Kédagni, Vitor Possebom","http://arxiv.org/abs/2112.07014v1, http://arxiv.org/pdf/2112.07014v1",econ.EM 29110,em,"We develop a novel test of the instrumental variable identifying assumptions for heterogeneous treatment effect models with conditioning covariates. We assume semiparametric dependence between potential outcomes and conditioning covariates. This allows us to obtain testable equality and inequality restrictions among the subdensities of estimable partial residuals. We propose jointly testing these restrictions. To improve power, we introduce distillation, where a trimmed sample is used to test the inequality restrictions. In Monte Carlo exercises we find gains in finite sample power from testing restrictions jointly and distillation. We apply our test procedure to three instruments and reject the null for one.",Testing Instrument Validity with Covariates,2021-12-15 16:06:22,"Thomas Carr, Toru Kitagawa","http://arxiv.org/abs/2112.08092v2, http://arxiv.org/pdf/2112.08092v2",econ.EM 29111,em,"This paper examines the local linear regression (LLR) estimate of the conditional distribution function $F(y|x)$. We derive three uniform convergence results: the uniform bias expansion, the uniform convergence rate, and the uniform asymptotic linear representation. The uniformity in the above results is with respect to both $x$ and $y$ and therefore has not previously been addressed in the literature on local polynomial regression. Such uniform convergence results are especially useful when the conditional distribution estimator is the first stage of a semiparametric estimator. 
We demonstrate the usefulness of these uniform results with two examples: the stochastic equicontinuity condition in $y$, and the estimation of the integrated conditional distribution function.",Uniform Convergence Results for the Local Linear Regression Estimation of the Conditional Distribution,2021-12-16 04:04:23,Haitian Xie,"http://arxiv.org/abs/2112.08546v2, http://arxiv.org/pdf/2112.08546v2",econ.EM 29112,em,"We consider a two-stage estimation method for linear regression that uses the lasso in Tibshirani (1996) to screen variables and re-estimate the coefficients using the least-squares boosting method in Friedman (2001) on every set of selected variables. Based on the large-scale simulation experiment in Hastie et al. (2020), the performance of lassoed boosting is found to be as competitive as the relaxed lasso in Meinshausen (2007) and can yield a sparser model under certain scenarios. An application to predict equity returns also shows that lassoed boosting can give the smallest mean square prediction error among all methods under consideration.",Lassoed Boosting and Linear Prediction in Equities Market,2021-12-16 18:00:37,Xiao Huang,"http://arxiv.org/abs/2112.08934v2, http://arxiv.org/pdf/2112.08934v2",econ.EM 29113,em,"This paper studies the robustness of estimated policy effects to changes in the distribution of covariates. Robustness to covariate shifts is important, for example, when evaluating the external validity of quasi-experimental results, which are often used as a benchmark for evidence-based policy-making. I propose a novel scalar robustness metric. This metric measures the magnitude of the smallest covariate shift needed to invalidate a claim on the policy effect (for example, $ATE \geq 0$) supported by the quasi-experimental evidence. My metric links the heterogeneity of policy effects and robustness in a flexible, nonparametric way and does not require functional form assumptions. I cast the estimation of the robustness metric as a de-biased GMM problem. This approach guarantees a parametric convergence rate for the robustness metric while allowing for machine learning-based estimators of policy effect heterogeneity (for example, lasso, random forest, boosting, neural nets). I apply my procedure to the Oregon Health Insurance experiment. I study the robustness of policy effects estimates of health-care utilization and financial strain outcomes, relative to a shift in the distribution of context-specific covariates. Such covariates are likely to differ across US states, making quantification of robustness an important exercise for adoption of the insurance policy in states other than Oregon. I find that the effect on outpatient visits is the most robust among the metrics of health-care utilization considered.","Robustness, Heterogeneous Treatment Effects and Covariate Shifts",2021-12-17 02:53:42,Pietro Emilio Spini,"http://arxiv.org/abs/2112.09259v1, http://arxiv.org/pdf/2112.09259v1",econ.EM 29114,em,"Aims: To re-introduce the Heckman model as a valid empirical technique in alcohol studies. Design: To estimate the determinants of problem drinking using a Heckman and a two-part estimation model. Psychological and neuro-scientific studies justify my underlying estimation assumptions and covariate exclusion restrictions. Higher order tests checking for multicollinearity validate the use of Heckman over the use of two-part estimation models. I discuss the generalizability of the two models in applied research. 
Settings and Participants: Two pooled national population surveys from 2016 and 2017 were used: the Behavioral Risk Factor Surveillance Survey (BRFS), and the National Survey of Drug Use and Health (NSDUH). Measurements: Participation in problem drinking and meeting the criteria for problem drinking. Findings: Both U.S. national surveys perform well with the Heckman model and pass all higher order tests. The Heckman model corrects for selection bias and reveals the direction of bias, where the two-part model does not. For example, the coefficients on age are upward biased and unemployment is downward biased in the two-part model, whereas the Heckman model does not have a selection bias. Covariate exclusion restrictions are sensitive to survey conditions and are contextually generalizable. Conclusions: The Heckman model can be used for alcohol studies (and smoking studies as well) if the underlying estimation specification passes higher order tests for multicollinearity and the exclusion restrictions are justified with integrity for the data used. Its use is merit-worthy because it corrects for and reveals the direction and the magnitude of selection bias where the two-part model does not.",Heckman-Selection or Two-Part models for alcohol studies? Depends,2021-12-20 17:08:35,Reka Sundaram-Stukel,"http://arxiv.org/abs/2112.10542v2, http://arxiv.org/pdf/2112.10542v2",econ.EM 29115,em,"We study the Stigler model of citation flows among journals, adapting the pairwise comparison model of Bradley and Terry to do ranking and selection of journal influence based on nonparametric empirical Bayes procedures. Comparisons with several other rankings are made.",Ranking and Selection from Pairwise Comparisons: Empirical Bayes Methods for Citation Analysis,2021-12-21 12:46:29,"Jiaying Gu, Roger Koenker","http://arxiv.org/abs/2112.11064v1, http://arxiv.org/pdf/2112.11064v1",econ.EM 29116,em,"We ask if there are alternative contest models that minimize error or information loss from misspecification and outperform the Pythagorean model. This article aims to use simulated data to select the optimal expected win percentage model among the choice of relevant alternatives. The choices include the traditional Pythagorean model and the difference-form contest success function (CSF). Method. We simulate 1,000 iterations of the 2014 MLB season for the purpose of estimating and analyzing alternative models of expected win percentage (team quality). We use the open-source Strategic Baseball Simulator and develop an AutoHotKey script that programmatically executes the SBS application, chooses the correct settings for the 2014 season, enters a unique ID for the simulation data file, and iterates these steps 1,000 times. We estimate expected win percentage using the traditional Pythagorean model, as well as the difference-form CSF model that is used in game theory and public choice economics. Each model is estimated while accounting for fixed (team) effects. We find that the difference-form CSF model outperforms the traditional Pythagorean model in terms of explanatory power and in terms of misspecification-based information loss as estimated by the Akaike Information Criterion. Through parametric estimation, we further confirm that the simulator yields realistic statistical outcomes. The simulation methodology offers the advantage of greatly improved sample size. As the season is held constant, our simulation-based statistical inference also allows for estimation and model comparison without the (time series) issue of non-stationarity. 
The results suggest that improved win (productivity) estimation can be achieved through alternative CSF specifications.",An Analysis of an Alternative Pythagorean Expected Win Percentage Model: Applications Using Major League Baseball Team Quality Simulations,2021-12-30 01:08:24,"Justin Ehrlich, Christopher Boudreaux, James Boudreau, Shane Sanders","http://arxiv.org/abs/2112.14846v1, http://arxiv.org/pdf/2112.14846v1",econ.EM 29117,em,"In this paper we examine the relation between market returns and volatility measures through machine learning methods in a high-frequency environment. We implement a minute-by-minute rolling window intraday estimation method using two nonlinear models: Long Short-Term Memory (LSTM) neural networks and Random Forests (RF). Our estimations show that the CBOE Volatility Index (VIX) is the strongest candidate predictor for intraday market returns in our analysis, especially when implemented through the LSTM model. This model also significantly improves the performance of the lagged market return as a predictive variable. Finally, intraday RF estimation outputs indicate that there is no performance improvement with this method, and it may even worsen the results in some cases.",Modeling and Forecasting Intraday Market Returns: a Machine Learning Approach,2021-12-30 19:05:17,"Iuri H. Ferreira, Marcelo C. Medeiros","http://arxiv.org/abs/2112.15108v1, http://arxiv.org/pdf/2112.15108v1",econ.EM 29118,em,"Startups have become in less than 50 years a major component of innovation and economic growth. Silicon Valley has been the place where the startup phenomenon was the most obvious and Stanford University was a major component of that success. Companies such as Google, Yahoo, Sun Microsystems, Cisco, and Hewlett Packard had very strong links with Stanford but even these very famous success stories cannot fully describe the richness and diversity of the Stanford entrepreneurial activity. This report explores the dynamics of more than 5000 companies founded by Stanford University alumni and staff, through their value creation, their field of activities, their growth patterns and more. The report also explores some features of the founders of these companies such as their academic background or the number of years between their Stanford experience and their company creation.",Startups and Stanford University,2017-11-02 11:14:26,Hervé Lebret,"http://arxiv.org/abs/1711.00644v1, http://arxiv.org/pdf/1711.00644v1",econ.EM 29120,em,"I propose a treatment selection model that introduces unobserved heterogeneity in both choice sets and preferences to evaluate the average effects of a program offer. I show how to exploit the model structure to define parameters capturing these effects and then computationally characterize their identified sets under instrumental variable variation in choice sets. I illustrate these tools by analyzing the effects of providing an offer to the Head Start preschool program using data from the Head Start Impact Study. I find that such a policy affects a large number of children who take up the offer, and that they subsequently have positive effects on test scores. These effects arise from children who do not have any preschool as an outside option. 
A cost-benefit analysis reveals that the earning benefits associated with the test score gains can be large and outweigh the net costs associated with offer take up.",Identifying the Effects of a Program Offer with an Application to Head Start,2017-11-06 20:55:59,Vishal Kamat,"http://arxiv.org/abs/1711.02048v6, http://arxiv.org/pdf/1711.02048v6",econ.EM 29121,em,"I study identification, estimation and inference for spillover effects in experiments where units' outcomes may depend on the treatment assignments of other units within a group. I show that the commonly-used reduced-form linear-in-means regression identifies a weighted sum of spillover effects with some negative weights, and that the difference in means between treated and controls identifies a combination of direct and spillover effects entering with different signs. I propose nonparametric estimators for average direct and spillover effects that overcome these issues and are consistent and asymptotically normal under a precise relationship between the number of parameters of interest, the total sample size and the treatment assignment mechanism. These findings are illustrated using data from a conditional cash transfer program and with simulations. The empirical results reveal the potential pitfalls of failing to flexibly account for spillover effects in policy evaluation: the estimated difference in means and the reduced-form linear-in-means coefficients are all close to zero and statistically insignificant, whereas the nonparametric estimators I propose reveal large, nonlinear and significant spillover effects.",Identification and Estimation of Spillover Effects in Randomized Experiments,2017-11-08 01:04:44,Gonzalo Vazquez-Bare,"http://arxiv.org/abs/1711.02745v8, http://arxiv.org/pdf/1711.02745v8",econ.EM 29122,em,"Futures market contracts with varying maturities are traded concurrently and the speed at which they process information is of value in understanding the pricing discovery process. Using price discovery measures, including Putnins (2013) information leadership share and intraday data, we quantify the proportional contribution of price discovery between nearby and deferred contracts in the corn and live cattle futures markets. Price discovery is more systematic in the corn than in the live cattle market. On average, nearby contracts lead all deferred contracts in price discovery in the corn market, but have a relatively less dominant role in the live cattle market. In both markets, the nearby contract loses dominance when its relative volume share dips below 50%, which occurs about 2-3 weeks before expiration in corn and 5-6 weeks before expiration in live cattle. Regression results indicate that the share of price discovery is most closely linked to trading volume but is also affected, to far less degree, by time to expiration, backwardation, USDA announcements and market crashes. The effects of these other factors vary between the markets which likely reflect the difference in storability as well as other market-related characteristics.",Measuring Price Discovery between Nearby and Deferred Contracts in Storable and Non-Storable Commodity Futures Markets,2017-11-09 21:12:05,"Zhepeng Hu, Mindy Mallory, Teresa Serra, Philip Garcia","http://arxiv.org/abs/1711.03506v1, http://arxiv.org/pdf/1711.03506v1",econ.EM 29123,em,"Economic complexity reflects the amount of knowledge that is embedded in the productive structure of an economy. 
It rests on the premise of hidden capabilities: fundamental endowments underlying the productive structure. In general, measuring the capabilities behind economic complexity directly is difficult, and indirect measures have been suggested which exploit the fact that the presence of the capabilities is expressed in a country's mix of products. We complement these studies by introducing a probabilistic framework which leverages Bayesian non-parametric techniques to extract the dominant features behind the comparative advantage in exported products. Based on economic evidence and trade data, we place a restricted Indian Buffet Process on the distribution of countries' capability endowment, appealing to a culinary metaphor to model the process of capability acquisition. The approach comes with a unique level of interpretability, as it produces a concise and economically plausible description of the instantiated capabilities.",Economic Complexity Unfolded: Interpretable Model for the Productive Structure of Economies,2017-11-17 17:09:19,"Zoran Utkovski, Melanie F. Pradier, Viktor Stojkoski, Fernando Perez-Cruz, Ljupco Kocarev","http://dx.doi.org/10.1371/journal.pone.0200822, http://arxiv.org/abs/1711.07327v2, http://arxiv.org/pdf/1711.07327v2",econ.EM 29124,em,"This study briefly introduces the development of the Shantou Special Economic Zone under the Reform and Opening-Up Policy from 1980 through 2016, with a focus on policy-making issues and their influence on the local economy. This paper is divided into two parts, 1980 to 1991 and 1992 to 2016, in accordance with the separation of the original Shantou District into three cities: Shantou, Chaozhou and Jieyang at the end of 1991. This study analyzes the policy-making issues in the separation of the original Shantou District, the influence of the policy on Shantou's economy after the separation, the possibility of merging the three cities into one big new economic district in the future, and the reasons that led to the stagnant development of Shantou over the past 20 years. This paper uses statistical longitudinal analysis to analyze economic problems, with applications of non-parametric statistics through generalized additive models and time series forecasting methods. The paper is solely authored by Bowen Cai, a graduate student in the PhD program in Applied and Computational Mathematics and Statistics at the University of Notre Dame with a concentration in big data analysis.",The Research on the Stagnant Development of Shantou Special Economic Zone Under Reform and Opening-Up Policy,2017-11-24 09:34:15,Bowen Cai,"http://arxiv.org/abs/1711.08877v1, http://arxiv.org/pdf/1711.08877v1",econ.EM 29125,em,"This paper presents the identification of heterogeneous elasticities in the Cobb-Douglas production function. The identification is constructive with closed-form formulas for the elasticity with respect to each input for each firm. We propose that the flexible input cost ratio plays the role of a control function under ""non-collinear heterogeneity"" between elasticities with respect to two flexible inputs. The ex ante flexible input cost share can be used to identify the elasticities with respect to flexible inputs for each firm.
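A minimal numerical sketch of this cost-share logic (an illustration only, under assumed price-taking and static profit maximization in the flexible input; it is not the authors' estimator): the first-order condition for a flexible input equates its revenue share to its output elasticity, so firm-level elasticities can be read off observed expenditure shares.
# Illustration: for Y = A * M^m * L^a * K^b, the FOC for the flexible input M gives
# p_M * M = m * p_Y * Y, so the revenue share of M identifies the firm-specific m.
import numpy as np

rng = np.random.default_rng(1)
m_true = rng.uniform(0.2, 0.5, 5)            # heterogeneous flexible-input elasticities
revenue = rng.uniform(1e5, 1e6, 5)           # p_Y * Y for five hypothetical firms
flexible_input_cost = m_true * revenue       # p_M * M implied by the FOC (no noise)

m_hat = flexible_input_cost / revenue        # elasticity recovered from the cost share
print(np.allclose(m_hat, m_true))            # True: the share recovers m firm by firm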
The elasticities with respect to labor and capital can be subsequently identified for each firm under a timing assumption that admits functional independence.",Constructive Identification of Heterogeneous Elasticities in the Cobb-Douglas Production Function,2017-11-28 01:51:57,"Tong Li, Yuya Sasaki","http://arxiv.org/abs/1711.10031v1, http://arxiv.org/pdf/1711.10031v1",econ.EM 29127,em,"Research on growing American political polarization and antipathy primarily studies public institutions and political processes, ignoring private effects including strained family ties. Using anonymized smartphone-location data and precinct-level voting, we show that Thanksgiving dinners attended by opposing-party precinct residents were 30-50 minutes shorter than same-party dinners. This decline from a mean of 257 minutes survives extensive spatial and demographic controls. Dinner reductions in 2016 tripled for travelers from media markets with heavy political advertising, an effect not observed in 2015, implying a relationship to election-related behavior. Effects appear asymmetric: while fewer Democratic-precinct residents traveled in 2016 than 2015, political differences shortened Thanksgiving dinners more among Republican-precinct residents. Nationwide, 34 million person-hours of cross-partisan Thanksgiving discourse were lost in 2016 to partisan effects.",The Effect of Partisanship and Political Advertising on Close Family Ties,2017-11-29 01:58:02,"M. Keith Chen, Ryne Rohla","http://dx.doi.org/10.1126/science.aaq1433, http://arxiv.org/abs/1711.10602v2, http://arxiv.org/pdf/1711.10602v2",econ.EM 29128,em,"The main purpose of this paper is to analyze threshold effects of official development assistance (ODA) on economic growth in WAEMU zone countries. To achieve this, the study is based on OECD and WDI data covering the period 1980-2015 and uses Hansen's Panel Threshold Regression (PTR) model to ""bootstrap"" the aid threshold above which aid becomes effective. The evidence strongly supports the view that the relationship between aid and economic growth is non-linear, with a unique threshold of 12.74% of GDP. Above this value, the marginal effect of aid is 0.69 points, ""all other things being equal"". One of the main contributions of this paper is to show that WAEMU countries need investments that could be covered by foreign aid, which should be considered just a complementary resource. Thus, WAEMU countries should continue to strengthen their efforts in internal resource mobilization in order to fulfil this need.",Aide et Croissance dans les pays de l'Union Economique et Monétaire Ouest Africaine (UEMOA) : retour sur une relation controversée,2018-04-13 16:07:11,Nimonka Bayale,"http://arxiv.org/abs/1805.00435v1, http://arxiv.org/pdf/1805.00435v1",econ.EM 29129,em,"In this paper, I endeavour to construct a new model by extending the classic exogenous economic growth model with a measure that tries to explain and quantify the size of technological innovation (A) endogenously. I do not agree that technology is a ""constant"" exogenous variable, because it is humans who create all technological innovations, and innovation depends on how much human and physical capital is allocated to research. I inspect several possible approaches to do this, and then I test my model against both sample data and real-world evidence.
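A toy simulation of this idea (a sketch with invented functional forms and parameter values, not the paper's model) lets the growth of A depend on the shares of labor and capital devoted to research.
# Toy sketch: technology A grows with the human and physical capital allocated to
# research, while the remainder is used for production (all values hypothetical).
alpha, s, delta, n = 0.3, 0.25, 0.05, 0.01   # output elasticity, saving, depreciation, pop. growth
phi_K, phi_L, gamma = 0.05, 0.05, 0.10       # research shares of K and L, research productivity
K, L, A = 1.0, 1.0, 1.0
for t in range(100):
    Y = A * ((1 - phi_K) * K) ** alpha * ((1 - phi_L) * L) ** (1 - alpha)
    A += gamma * (phi_K * K) ** alpha * (phi_L * L) ** (1 - alpha)   # endogenous technology growth
    K += s * Y - delta * K
    L *= 1 + n
print(round(Y, 3), round(A, 3))              # output and technology after 100 periods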
I call this method ""dynamic"" because it models the details of resource allocation among research, labor and capital, which affect each other interactively. In the end, I point out the new residual and the parts of the economic growth model that can be further improved.",Endogenous growth - A dynamic technology augmentation of the Solow model,2018-05-02 11:23:18,Murad Kasim,"http://arxiv.org/abs/1805.00668v1, http://arxiv.org/pdf/1805.00668v1",econ.EM 29130,em,"This paper studies the identification and estimation of the optimal linear approximation of a structural regression function. The parameter in the linear approximation is called the Optimal Linear Instrumental Variables Approximation (OLIVA). This paper shows that a necessary condition for standard inference on the OLIVA is also sufficient for the existence of an IV estimand in a linear model. The instrument in the IV estimand is unknown and may not be identified. A Two-Step IV (TSIV) estimator based on Tikhonov regularization is proposed, which can be implemented by standard regression routines. We establish the asymptotic normality of the TSIV estimator assuming neither completeness nor identification of the instrument. As an important application of our analysis, we robustify the classical Hausman test for exogeneity against misspecification of the linear structural model. We also discuss extensions to weighted least squares criteria. Monte Carlo simulations suggest an excellent finite sample performance for the proposed inferences. Finally, in an empirical application estimating the elasticity of intertemporal substitution (EIS) with US data, we obtain TSIV estimates that are much larger than their standard IV counterparts, with our robust Hausman test failing to reject the null hypothesis of exogeneity of real interest rates.",Optimal Linear Instrumental Variables Approximations,2018-05-08 23:44:27,"Juan Carlos Escanciano, Wei Li","http://arxiv.org/abs/1805.03275v3, http://arxiv.org/pdf/1805.03275v3",econ.EM 29131,em,"We study the identification and estimation of structural parameters in dynamic panel data logit models where decisions are forward-looking and the joint distribution of unobserved heterogeneity and observable state variables is nonparametric, i.e., a fixed-effects model. We consider models with two endogenous state variables: the lagged decision variable, and the time duration in the last choice. This class of models includes as particular cases important economic applications such as models of market entry-exit, occupational choice, machine replacement, inventory and investment decisions, or dynamic demand of differentiated products. The identification of structural parameters requires a sufficient statistic that controls for unobserved heterogeneity not only in current utility but also in the continuation value of the forward-looking decision problem. We obtain the minimal sufficient statistic and prove identification of some structural parameters using a conditional likelihood approach. We apply this estimator to a machine replacement model.",Sufficient Statistics for Unobserved Heterogeneity in Structural Dynamic Logit Models,2018-05-10 19:27:33,"Victor Aguirregabiria, Jiaying Gu, Yao Luo","http://arxiv.org/abs/1805.04048v1, http://arxiv.org/pdf/1805.04048v1",econ.EM 29132,em,"This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients as well as cross-sectional heteroskedasticity.
The panel considered in this paper features a large cross-sectional dimension N but short time series T. Due to the short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by combining information from the whole panel. Theoretically, I prove that in cross-sectional homoskedastic cases, both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast. Methodologically, I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. Monte Carlo simulations and an empirical application to young firm dynamics demonstrate improvements in density forecasts relative to alternative approaches.",Density Forecasts in Panel Data Models: A Semiparametric Bayesian Perspective,2018-05-10 23:51:01,Laura Liu,"http://arxiv.org/abs/1805.04178v3, http://arxiv.org/pdf/1805.04178v3",econ.EM 29134,em,"This paper contributes to the literature on treatment effects estimation with machine-learning-inspired methods by studying the performance of different estimators based on the Lasso. Building on recent work in the field of high-dimensional statistics, we use the semiparametric efficient score estimation structure to compare different estimators. Alternative weighting schemes are considered and their suitability for the incorporation of machine learning estimators is assessed using theoretical arguments and various Monte Carlo experiments. Additionally, we propose our own estimator based on doubly robust kernel matching, which is argued to be more robust to nuisance parameter misspecification. In the simulation study, we verify theory-based intuition and find good finite sample properties of alternative weighting scheme estimators like the one we propose.",The Finite Sample Performance of Treatment Effects Estimators based on the Lasso,2018-05-14 11:50:54,Michael Zimmert,"http://arxiv.org/abs/1805.05067v1, http://arxiv.org/pdf/1805.05067v1",econ.EM 29135,em,"This paper introduces a method for linking technological improvement rates (i.e. Moore's Law) and technology adoption curves (i.e. S-Curves). There has been considerable research surrounding Moore's Law and the generalized versions applied to the time dependence of performance for other technologies. The prior work has culminated in a methodology for quantitative estimation of technological improvement rates for nearly any technology. This paper examines the implications of such regular time dependence for performance upon the timing of key events in the technological adoption process. We propose a simple crossover point in performance which is based upon the technological improvement rates and current level differences for target and replacement technologies. The timing of the crossover is hypothesized to correspond to the first 'knee' in the technology adoption ""S-curve"" and signals when the market for a given technology will start to be rewarding for innovators.
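A back-of-the-envelope illustration of this crossover idea (a sketch with invented performance levels and improvement rates, not the paper's estimates): assuming exponential improvement, the crossover date follows directly from the two improvement rates and the current performance gap.
# Hypothetical numbers: performance follows P_i(t) = P_i(0) * exp(r_i * t), so the
# entrant overtakes the incumbent when the two exponential paths cross.
import numpy as np

p0_incumbent, r_incumbent = 100.0, 0.05      # current level and yearly improvement rate
p0_entrant, r_entrant = 20.0, 0.35           # entrant starts lower but improves faster

t_cross = np.log(p0_incumbent / p0_entrant) / (r_entrant - r_incumbent)
print(f"crossover after roughly {t_cross:.1f} years")   # about 5.4 years with these numbers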
This is also when potential entrants are likely to intensely experiment with product-market fit and when the competition to achieve a dominant design begins. This conceptual framework is then back-tested by examining two technological changes brought about by the internet, namely music and video transmission. The uncertainty analysis around the cases highlights opportunities for organizations to reduce future technological uncertainty. Overall, the results from the case studies support the reliability and utility of the conceptual framework in strategic business decision-making with the caveat that while technical uncertainty is reduced, it is not eliminated.",Data-Driven Investment Decision-Making: Applying Moore's Law and S-Curves to Business Strategies,2018-05-16 17:09:04,"Christopher L. Benson, Christopher L. Magee","http://arxiv.org/abs/1805.06339v1, http://arxiv.org/pdf/1805.06339v1",econ.EM 29136,em,"Some aspects of the problem of stable marriage are discussed. There are two distinguished marriage plans: the fully transferable case, where money can be transferred between the participants, and the fully non-transferable case, where each participant has his or her own rigid preference list regarding the other gender. We then discuss intermediate, partially transferable cases. Partially transferable plans can be approached either as special cases of cooperative games using the notion of a core, or as a generalization of the cyclical monotonicity property of the fully transferable case (fake promises). We shall introduce these two approaches, and prove the existence of stable marriage for the fully transferable and non-transferable plans.",Happy family of stable marriages,2018-05-17 13:33:04,Gershon Wolansky,"http://arxiv.org/abs/1805.06687v1, http://arxiv.org/pdf/1805.06687v1",econ.EM 29137,em,"This study back-tests a marginal cost of production model proposed to value the digital currency bitcoin. Results from both conventional regression and vector autoregression (VAR) models show that the marginal cost of production plays an important role in explaining bitcoin prices, challenging recent allegations that bitcoins are essentially worthless. Even with markets pricing bitcoin in the thousands of dollars each, the valuation model seems robust. The data show that a price bubble that began in the Fall of 2017 resolved itself in early 2018, converging with the marginal cost model. This suggests that while bubbles may appear in the bitcoin market, prices will tend to this bound and not collapse to zero.",Bitcoin price and its marginal cost of production: support for a fundamental value,2018-05-19 18:30:29,Adam Hayes,"http://arxiv.org/abs/1805.07610v1, http://arxiv.org/pdf/1805.07610v1",econ.EM 29138,em,"The issue of model selection in applied research is of vital importance. Since the true model in such research is not known, which model should be used from among various potential ones is an empirical question. There might exist several competing models. A typical approach to dealing with this is classic hypothesis testing using an arbitrarily chosen significance level based on the underlying assumption that a true null hypothesis exists. In this paper we investigate how successful this approach is in determining the correct model for different data generating processes using time series data.
An alternative approach based on more formal model selection techniques using an information criterion or cross-validation is suggested and evaluated in the time series environment via Monte Carlo experiments. This paper also explores the effectiveness of deciding what type of general relation exists between two variables (e.g. relation in levels or relation in first differences) using various strategies based on hypothesis testing and on information criteria with the presence or absence of unit roots.",Model Selection in Time Series Analysis: Using Information Criteria as an Alternative to Hypothesis Testing,2018-05-23 10:40:53,"R. Scott Hacker, Abdulnasser Hatemi-J","http://arxiv.org/abs/1805.08991v1, http://arxiv.org/pdf/1805.08991v1",econ.EM 29139,em,"This study investigates the dose-response effects of making music on youth development. Identification is based on the conditional independence assumption and estimation is implemented using a recent double machine learning estimator. The study proposes solutions to two highly practically relevant questions that arise for these new methods: (i) How to investigate sensitivity of estimates to tuning parameter choices in the machine learning part? (ii) How to assess covariate balancing in high-dimensional settings? The results show that improvements in objectively measured cognitive skills require at least medium intensity, while improvements in school grades are already observed for low intensity of practice.",A Double Machine Learning Approach to Estimate the Effects of Musical Practice on Student's Skills,2018-05-23 10:58:08,Michael C. Knaus,"http://arxiv.org/abs/1805.10300v2, http://arxiv.org/pdf/1805.10300v2",econ.EM 29932,em,"We propose logit-based IV and augmented logit-based IV estimators that serve as alternatives to the traditionally used 2SLS estimator in the model where both the endogenous treatment variable and the corresponding instrument are binary. Our novel estimators are as easy to compute as the 2SLS estimator but have an advantage over the 2SLS estimator in terms of causal interpretability. In particular, in certain cases where the probability limits of both our estimators and the 2SLS estimator take the form of weighted-average treatment effects, our estimators are guaranteed to yield non-negative weights whereas the 2SLS estimator is not.",Logit-based alternatives to two-stage least squares,2023-12-16 08:47:43,"Denis Chetverikov, Jinyong Hahn, Zhipeng Liao, Shuyang Sheng","http://arxiv.org/abs/2312.10333v1, http://arxiv.org/pdf/2312.10333v1",econ.EM 29140,em,"This article introduces two absolutely continuous global-local shrinkage priors to enable stochastic variable selection in the context of high-dimensional matrix exponential spatial specifications. Existing approaches as a means to dealing with overparameterization problems in spatial autoregressive specifications typically rely on computationally demanding Bayesian model-averaging techniques. The proposed shrinkage priors can be implemented using Markov chain Monte Carlo methods in a flexible and efficient way. A simulation study is conducted to evaluate the performance of each of the shrinkage priors. Results suggest that they perform particularly well in high-dimensional environments, especially when the number of parameters to estimate exceeds the number of observations. 
For an empirical illustration we use pan-European regional economic growth data.",Flexible shrinkage in high-dimensional Bayesian spatial autoregressive models,2018-05-28 12:01:55,"Michael Pfarrhofer, Philipp Piribauer","http://dx.doi.org/10.1016/j.spasta.2018.10.004, http://arxiv.org/abs/1805.10822v1, http://arxiv.org/pdf/1805.10822v1",econ.EM 29141,em,"We propose a method that reconciles two popular approaches to structural estimation and inference: using a complete, yet approximate, model versus imposing a set of credible behavioral conditions. This is done by distorting the approximate model to satisfy these conditions. We provide the asymptotic theory and Monte Carlo evidence, and illustrate that counterfactual experiments are possible. We apply the methodology to the model of long run risks in aggregate consumption (Bansal and Yaron, 2004), where the complete model is generated using the Campbell and Shiller (1988) approximation. Using US data, we investigate the empirical importance of the neglected non-linearity. We find that distorting the model to satisfy the non-linear equilibrium condition is strongly preferred by the data, while the quality of the approximation is yet another reason for the downward bias in estimates of the intertemporal elasticity of substitution and the upward bias in risk aversion.",Equilibrium Restrictions and Approximate Models -- With an application to Pricing Macroeconomic Risk,2018-05-28 14:27:20,Andreas Tryphonides,"http://arxiv.org/abs/1805.10869v3, http://arxiv.org/pdf/1805.10869v3",econ.EM 29142,em,"The United States' power market is characterized by the lack of judicial power at the federal level. The market thus provides a unique testing environment for the market organization structure. At the same time, the econometric modeling and forecasting of electricity market consumption become more challenging. Import and export, which generally follow simple rules in European countries, can be a result of direct market behaviors. This paper seeks to build a general model for power consumption and to use the model to test several hypotheses.",Modeling the residential electricity consumption within a restructured power market,2018-05-28 22:19:00,Chelsea Sun,"http://arxiv.org/abs/1805.11138v2, http://arxiv.org/pdf/1805.11138v2",econ.EM 29143,em,"The policy relevant treatment effect (PRTE) measures the average effect of switching from a status-quo policy to a counterfactual policy. Estimation of the PRTE involves estimation of multiple preliminary parameters, including propensity scores, conditional expectation functions of the outcome and covariates given the propensity score, and marginal treatment effects. These preliminary estimators can affect the asymptotic distribution of the PRTE estimator in complicated and intractable ways. In this light, we propose an orthogonal score for double debiased estimation of the PRTE, whereby the asymptotic distribution of the PRTE estimator is obtained without any influence of preliminary parameter estimators as long as they satisfy mild convergence-rate requirements.
To our knowledge, this paper is the first to develop limit distribution theories for inference about the PRTE.",Estimation and Inference for Policy Relevant Treatment Effects,2018-05-29 17:34:35,"Yuya Sasaki, Takuya Ura","http://arxiv.org/abs/1805.11503v4, http://arxiv.org/pdf/1805.11503v4",econ.EM 29144,em,"Partial mean with generated regressors arises in several econometric problems, such as the distribution of potential outcomes with continuous treatments and the quantile structural function in a nonseparable triangular model. This paper proposes a nonparametric estimator for the partial mean process, where the second step consists of a kernel regression on regressors that are estimated in the first step. The main contribution is a uniform expansion that characterizes in detail how the estimation error associated with the generated regressor affects the limiting distribution of the marginal integration estimator. The general results are illustrated with two examples: the generalized propensity score for a continuous treatment (Hirano and Imbens, 2004) and control variables in triangular models (Newey, Powell, and Vella, 1999; Imbens and Newey, 2009). An empirical application to the Job Corps program evaluation demonstrates the usefulness of the method.",Partial Mean Processes with Generated Regressors: Continuous Treatment Effects and Nonseparable Models,2018-11-01 02:37:25,Ying-Ying Lee,"http://arxiv.org/abs/1811.00157v1, http://arxiv.org/pdf/1811.00157v1",econ.EM 29145,em,"I develop a new identification strategy for treatment effects when noisy measurements of unobserved confounding factors are available. I use proxy variables to construct a random variable conditional on which treatment variables become exogenous. The key idea is that, under appropriate conditions, there exists a one-to-one mapping between the distribution of unobserved confounding factors and the distribution of proxies. To ensure sufficient variation in the constructed control variable, I use an additional variable, termed excluded variable, which satisfies certain exclusion restrictions and relevance conditions. I establish asymptotic distributional results for semiparametric and flexible parametric estimators of causal parameters. I illustrate empirical relevance and usefulness of my results by estimating causal effects of attending selective college on earnings.",Treatment Effect Estimation with Noisy Conditioning Variables,2018-11-02 01:53:48,Kenichi Nagasawa,"http://arxiv.org/abs/1811.00667v4, http://arxiv.org/pdf/1811.00667v4",econ.EM 29146,em,"We develop a new statistical procedure to test whether the dependence structure is identical between two groups. Rather than relying on a single index such as Pearson's correlation coefficient or Kendall's Tau, we consider the entire dependence structure by investigating the dependence functions (copulas). The critical values are obtained by a modified randomization procedure designed to exploit asymptotic group invariance conditions. Implementation of the test is intuitive and simple, and does not require any specification of a tuning parameter or weight function. At the same time, the test exhibits excellent finite sample performance, with the null rejection rates almost equal to the nominal level even when the sample size is extremely small. 
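A much simplified sketch of the general idea (an illustration only: it permutes group labels and compares a single dependence index, Kendall's tau, rather than the paper's copula-based statistic, and the data are made up):
# Simplified permutation test for equal dependence across two groups.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
group1 = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 200)   # dependent pair
group2 = rng.multivariate_normal([0, 0], [[1, 0.0], [0.0, 1]], 200)   # independent pair

def tau_gap(a, b):
    return abs(kendalltau(a[:, 0], a[:, 1])[0] - kendalltau(b[:, 0], b[:, 1])[0])

observed = tau_gap(group1, group2)
pooled = np.vstack([group1, group2])
exceed = 0
for _ in range(999):
    perm = rng.permutation(len(pooled))
    exceed += tau_gap(pooled[perm[:200]], pooled[perm[200:]]) >= observed
print("permutation p-value:", (exceed + 1) / 1000)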
Two empirical applications concerning the dependence between income and consumption, and the Brexit effect on European financial market integration are provided.",Randomization Tests for Equality in Dependence Structure,2018-11-06 03:59:00,Juwon Seo,"http://arxiv.org/abs/1811.02105v1, http://arxiv.org/pdf/1811.02105v1",econ.EM 29147,em,"Finite mixture models are useful in applied econometrics. They can be used to model unobserved heterogeneity, which plays major roles in labor economics, industrial organization and other fields. Mixtures are also convenient in dealing with contaminated sampling models and models with multiple equilibria. This paper shows that finite mixture models are nonparametrically identified under weak assumptions that are plausible in economic applications. The key is to utilize the identification power implied by information in covariates variation. First, three identification approaches are presented, under distinct and non-nested sets of sufficient conditions. Observable features of data inform us which of the three approaches is valid. These results apply to general nonparametric switching regressions, as well as to structural econometric models, such as auction models with unobserved heterogeneity. Second, some extensions of the identification results are developed. In particular, a mixture regression where the mixing weights depend on the value of the regressors in a fully unrestricted manner is shown to be nonparametrically identifiable. This means a finite mixture model with function-valued unobserved heterogeneity can be identified in a cross-section setting, without restricting the dependence pattern between the regressor and the unobserved heterogeneity. In this aspect it is akin to fixed effects panel data models which permit unrestricted correlation between unobserved heterogeneity and covariates. Third, the paper shows that fully nonparametric estimation of the entire mixture model is possible, by forming a sample analogue of one of the new identification strategies. The estimator is shown to possess a desirable polynomial rate of convergence as in a standard nonparametric estimation problem, despite nonregular features of the model.",Nonparametric Analysis of Finite Mixtures,2018-11-07 05:16:14,"Yuichi Kitamura, Louise Laage","http://arxiv.org/abs/1811.02727v1, http://arxiv.org/pdf/1811.02727v1",econ.EM 29148,em,"Single index linear models for binary response with random coefficients have been extensively employed in many econometric settings under various parametric specifications of the distribution of the random coefficients. Nonparametric maximum likelihood estimation (NPMLE) as proposed by Cosslett (1983) and Ichimura and Thompson (1998), in contrast, has received less attention in applied work due primarily to computational difficulties. We propose a new approach to computation of NPMLEs for binary response models that significantly increase their computational tractability thereby facilitating greater flexibility in applications. Our approach, which relies on recent developments involving the geometry of hyperplane arrangements, is contrasted with the recently proposed deconvolution method of Gautier and Kitamura (2013). 
An application to modal choice for the journey to work in the Washington DC area illustrates the methods.",Nonparametric maximum likelihood methods for binary response models with random coefficients,2018-11-08 12:33:02,"Jiaying Gu, Roger Koenker","http://arxiv.org/abs/1811.03329v3, http://arxiv.org/pdf/1811.03329v3",econ.EM 29149,em,"This study proposes a point estimator of the break location for a one-time structural break in linear regression models. If the break magnitude is small, the least-squares estimator of the break date has two modes at the ends of the finite sample period, regardless of the true break location. To solve this problem, I suggest an alternative estimator based on a modification of the least-squares objective function. The modified objective function incorporates estimation uncertainty that varies across potential break dates. The new break point estimator is consistent and has a unimodal finite sample distribution under small break magnitudes. A limit distribution is provided under an in-fill asymptotic framework. Monte Carlo simulation results suggest that the new estimator outperforms the least-squares estimator. I apply the method to estimate the break date in U.S. real GDP growth and U.S. and UK stock return prediction models.",Estimation of a Structural Break Point in Linear Regression Models,2018-11-09 03:10:11,Yaein Baek,"http://arxiv.org/abs/1811.03720v3, http://arxiv.org/pdf/1811.03720v3",econ.EM 29150,em,"This paper analyses the use of bootstrap methods to test for parameter change in linear models estimated via Two Stage Least Squares (2SLS). Two types of test are considered: one where the null hypothesis is of no change and the alternative hypothesis involves discrete change at k unknown break-points in the sample; and a second test where the null hypothesis is that there is discrete parameter change at l break-points in the sample against an alternative in which the parameters change at l + 1 break-points. In both cases, we consider inferences based on a sup-Wald-type statistic using either the wild recursive bootstrap or the wild fixed bootstrap. We establish the asymptotic validity of these bootstrap tests under a set of general conditions that allow the errors to exhibit conditional and/or unconditional heteroskedasticity, and report results from a simulation study that indicate the tests yield reliable inferences in the sample sizes often encountered in macroeconomics. The analysis covers the cases where the first-stage estimation of 2SLS involves a model whose parameters are either constant or themselves subject to discrete parameter change. If the errors exhibit unconditional heteroskedasticity and/or the reduced form is unstable then the bootstrap methods are particularly attractive because the limiting distributions of the test statistics are not pivotal.",Bootstrapping Structural Change Tests,2018-11-09 23:15:33,"Otilia Boldea, Adriana Cornea-Madeira, Alastair R. Hall","http://dx.doi.org/10.1016/j.jeconom.2019.05.019, http://arxiv.org/abs/1811.04125v1, http://arxiv.org/pdf/1811.04125v1",econ.EM 29151,em,"Identification of multinomial choice models is often established by using special covariates that have full support. This paper shows how these identification results can be extended to a large class of multinomial choice models when all covariates are bounded. 
I also provide a new $\sqrt{n}$-consistent asymptotically normal estimator of the finite-dimensional parameters of the model.",Identification and estimation of multinomial choice models with latent special covariates,2018-11-14 01:48:40,Nail Kashaev,"http://arxiv.org/abs/1811.05555v3, http://arxiv.org/pdf/1811.05555v3",econ.EM 29152,em,"In this paper, we investigate seemingly unrelated regression (SUR) models that allow the number of equations (N) to be large, and to be comparable to the number of the observations in each equation (T). It is well known in the literature that the conventional SUR estimator, for example, the generalized least squares (GLS) estimator of Zellner (1962) does not perform well. As the main contribution of the paper, we propose a new feasible GLS estimator called the feasible graphical lasso (FGLasso) estimator. For a feasible implementation of the GLS estimator, we use the graphical lasso estimation of the precision matrix (the inverse of the covariance matrix of the equation system errors) assuming that the underlying unknown precision matrix is sparse. We derive asymptotic theories of the new estimator and investigate its finite sample properties via Monte-Carlo simulations.",Estimation of High-Dimensional Seemingly Unrelated Regression Models,2018-11-14 02:19:46,"Lidan Tan, Khai X. Chiong, Hyungsik Roger Moon","http://arxiv.org/abs/1811.05567v1, http://arxiv.org/pdf/1811.05567v1",econ.EM 29161,em,"We study partial identification of the preference parameters in the one-to-one matching model with perfectly transferable utilities. We do so without imposing parametric distributional assumptions on the unobserved heterogeneity and with data on one large market. We provide a tractable characterisation of the identified set under various classes of nonparametric distributional assumptions on the unobserved heterogeneity. Using our methodology, we re-examine some of the relevant questions in the empirical literature on the marriage market, which have been previously studied under the Logit assumption. Our results reveal that many findings in the aforementioned literature are primarily driven by such parametric restrictions.",Partial Identification in Matching Models for the Marriage Market,2019-02-15 00:37:28,"Cristina Gualdani, Shruti Sinha","http://arxiv.org/abs/1902.05610v6, http://arxiv.org/pdf/1902.05610v6",econ.EM 29153,em,"In this study, Bayesian inference is developed for structural vector autoregressive models in which the structural parameters are identified via Markov-switching heteroskedasticity. In such a model, restrictions that are just-identifying in the homoskedastic case, become over-identifying and can be tested. A set of parametric restrictions is derived under which the structural matrix is globally or partially identified and a Savage-Dickey density ratio is used to assess the validity of the identification conditions. The latter is facilitated by analytical derivations that make the computations fast and numerical standard errors small. As an empirical example, monetary models are compared using heteroskedasticity as an additional device for identification. 
The empirical results support models with money in the interest rate reaction function.",Bayesian Inference for Structural Vector Autoregressions Identified by Markov-Switching Heteroskedasticity,2018-11-20 13:29:18,"Helmut Lütkepohl, Tomasz Woźniak","http://dx.doi.org/10.1016/j.jedc.2020.103862, http://arxiv.org/abs/1811.08167v1, http://arxiv.org/pdf/1811.08167v1",econ.EM 29154,em,"In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian non-linear time series framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with their evolution being driven by a Markov process. We assume a time-varying transition probability matrix with transition probabilities depending on a measure of the monetary policy stance of the central bank at the home and foreign country. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time and a model approach that takes this empirical evidence seriously yields improvements in accuracy of density forecasts for most currency pairs considered.",Model instability in predictive exchange rate regressions,2018-11-21 19:40:00,"Niko Hauzenberger, Florian Huber","http://arxiv.org/abs/1811.08818v2, http://arxiv.org/pdf/1811.08818v2",econ.EM 29155,em,"Volatilities, in high-dimensional panels of economic time series with a dynamic factor structure on the levels or returns, typically also admit a dynamic factor decomposition. We consider a two-stage dynamic factor model method recovering the common and idiosyncratic components of both levels and log-volatilities. Specifically, in a first estimation step, we extract the common and idiosyncratic shocks for the levels, from which a log-volatility proxy is computed. In a second step, we estimate a dynamic factor model, which is equivalent to a multiplicative factor structure for volatilities, for the log-volatility panel. By exploiting this two-stage factor approach, we build one-step-ahead conditional prediction intervals for large $n \times T$ panels of returns. Those intervals are based on empirical quantiles, not on conditional variances; they can be either equal- or unequal- tailed. We provide uniform consistency and consistency rates results for the proposed estimators as both $n$ and $T$ tend to infinity. We study the finite-sample properties of our estimators by means of Monte Carlo simulations. Finally, we apply our methodology to a panel of asset returns belonging to the S&P100 index in order to compute one-step-ahead conditional prediction intervals for the period 2006-2013. A comparison with the componentwise GARCH benchmark (which does not take advantage of cross-sectional information) demonstrates the superiority of our approach, which is genuinely multivariate (and high-dimensional), nonparametric, and model-free.","Generalized Dynamic Factor Models and Volatilities: Consistency, rates, and prediction intervals",2018-11-25 19:06:08,"Matteo Barigozzi, Marc Hallin","http://dx.doi.org/10.1016/j.jeconom.2020.01.003, http://arxiv.org/abs/1811.10045v2, http://arxiv.org/pdf/1811.10045v2",econ.EM 29156,em,"This paper studies model selection in semiparametric econometric models. 
It develops a consistent series-based model selection procedure based on a Bayesian Information Criterion (BIC) type criterion to select between several classes of models. The procedure selects a model by minimizing the semiparametric Lagrange Multiplier (LM) type test statistic from Korolev (2018) but additionally rewards simpler models. The paper also develops consistent upward testing (UT) and downward testing (DT) procedures based on the semiparametric LM type specification test. The proposed semiparametric LM-BIC and UT procedures demonstrate good performance in simulations. To illustrate the use of these semiparametric model selection procedures, I apply them to the parametric and semiparametric gasoline demand specifications from Yatchew and No (2001). The LM-BIC procedure selects the semiparametric specification that is nonparametric in age but parametric in all other variables, which is in line with the conclusions in Yatchew and No (2001). The results of the UT and DT procedures heavily depend on the choice of tuning parameters and assumptions about the model errors.",LM-BIC Model Selection in Semiparametric Models,2018-11-26 23:29:18,Ivan Korolev,"http://arxiv.org/abs/1811.10676v1, http://arxiv.org/pdf/1811.10676v1",econ.EM 29157,em,"This paper studies a fixed-design residual bootstrap method for the two-step estimator of Francq and Zako\""ian (2015) associated with the conditional Expected Shortfall. For a general class of volatility models the bootstrap is shown to be asymptotically valid under the conditions imposed by Beutner et al. (2018). A simulation study is conducted revealing that the average coverage rates are satisfactory for most settings considered. There is no clear evidence to have a preference for any of the three proposed bootstrap intervals. This contrasts results in Beutner et al. (2018) for the VaR, for which the reversed-tails interval has a superior performance.",A Residual Bootstrap for Conditional Expected Shortfall,2018-11-27 01:03:46,"Alexander Heinemann, Sean Telg","http://arxiv.org/abs/1811.11557v1, http://arxiv.org/pdf/1811.11557v1",econ.EM 29158,em,"We provide a complete asymptotic distribution theory for clustered data with a large number of independent groups, generalizing the classic laws of large numbers, uniform laws, central limit theory, and clustered covariance matrix estimation. Our theory allows for clustered observations with heterogeneous and unbounded cluster sizes. Our conditions cleanly nest the classical results for i.n.i.d. observations, in the sense that our conditions specialize to the classical conditions under independent sampling. We use this theory to develop a full asymptotic distribution theory for estimation based on linear least-squares, 2SLS, nonlinear MLE, and nonlinear GMM.",Asymptotic Theory for Clustered Samples,2019-02-05 02:46:04,"Bruce E. Hansen, Seojeong Lee","http://arxiv.org/abs/1902.01497v1, http://arxiv.org/pdf/1902.01497v1",econ.EM 29159,em,"In this paper we propose a general framework to analyze prediction in time series models and show how a wide class of popular time series models satisfies this framework. We postulate a set of high-level assumptions, and formally verify these assumptions for the aforementioned time series models. Our framework coincides with that of Beutner et al. (2019, arXiv:1710.00643) who establish the validity of conditional confidence intervals for predictions made in this framework. The current paper therefore complements the results in Beutner et al. 
(2019, arXiv:1710.00643) by providing practically relevant applications of their theory.",A General Framework for Prediction in Time Series Models,2019-02-05 13:06:04,"Eric Beutner, Alexander Heinemann, Stephan Smeekes","http://arxiv.org/abs/1902.01622v1, http://arxiv.org/pdf/1902.01622v1",econ.EM 29162,em,"The identification of the network effect is based on either group size variation, the structure of the network or the relative position in the network. I provide easy-to-verify necessary conditions for identification of undirected network models based on the number of distinct eigenvalues of the adjacency matrix. Identification of network effects is possible; although in many empirical situations existing identification strategies may require the use of many instruments or instruments that could be strongly correlated with each other. The use of highly correlated instruments or many instruments may lead to weak identification or many instruments bias. This paper proposes regularized versions of the two-stage least squares (2SLS) estimators as a solution to these problems. The proposed estimators are consistent and asymptotically normal. A Monte Carlo study illustrates the properties of the regularized estimators. An empirical application, assessing a local government tax competition model, shows the empirical relevance of using regularization methods.",Weak Identification and Estimation of Social Interaction Models,2019-02-16 22:36:11,Guy Tchuente,"http://arxiv.org/abs/1902.06143v1, http://arxiv.org/pdf/1902.06143v1",econ.EM 29163,em,"This paper is concerned with learning decision makers' preferences using data on observed choices from a finite set of risky alternatives. We propose a discrete choice model with unobserved heterogeneity in consideration sets and in standard risk aversion. We obtain sufficient conditions for the model's semi-nonparametric point identification, including in cases where consideration depends on preferences and on some of the exogenous variables. Our method yields an estimator that is easy to compute and is applicable in markets with large choice sets. We illustrate its properties using a dataset on property insurance purchases.",Discrete Choice under Risk with Limited Consideration,2019-02-18 19:05:32,"Levon Barseghyan, Francesca Molinari, Matthew Thirkettle","http://arxiv.org/abs/1902.06629v3, http://arxiv.org/pdf/1902.06629v3",econ.EM 29164,em,"The synthetic control method is often used in treatment effect estimation with panel data where only a few units are treated and a small number of post-treatment periods are available. Current estimation and inference procedures for synthetic control methods do not allow for the existence of spillover effects, which are plausible in many applications. In this paper, we consider estimation and inference for synthetic control methods, allowing for spillover effects. We propose estimators for both direct treatment effects and spillover effects and show they are asymptotically unbiased. In addition, we propose an inferential procedure and show it is asymptotically unbiased. Our estimation and inference procedure applies to cases with multiple treated units or periods, and where the underlying factor model is either stationary or cointegrated. In simulations, we confirm that the presence of spillovers renders current methods biased and have distorted sizes, whereas our methods yield properly sized tests and retain reasonable power. 
We apply our method to a classic empirical example that investigates the effect of California's tobacco control program as in Abadie et al. (2010) and find evidence of spillovers.",Estimation and Inference for Synthetic Control Methods with Spillover Effects,2019-02-20 02:19:26,"Jianfei Cao, Connor Dowd","http://arxiv.org/abs/1902.07343v2, http://arxiv.org/pdf/1902.07343v2",econ.EM 29165,em,"I show how to reveal ambiguity-sensitive preferences over a single natural event. In the proposed elicitation mechanism, agents mix binarized bets on the uncertain event and its complement under varying betting odds. The mechanism identifies the interval of relevant probabilities for maxmin and maxmax preferences. For variational preferences and smooth second-order preferences, the mechanism reveals inner bounds, that are sharp under high stakes. For small stakes, mixing under second-order preferences is dominated by the variance of the second-order distribution. Additionally, the mechanism can distinguish extreme ambiguity aversion as in maxmin preferences and moderate ambiguity aversion as in variational or smooth second-order preferences. An experimental implementation suggests that participants perceive almost as much ambiguity for the stock index and actions of other participants as for the Ellsberg urn, indicating the importance of ambiguity in real-world decision-making.",Eliciting ambiguity with mixing bets,2019-02-20 11:19:21,Patrick Schmidt,"http://arxiv.org/abs/1902.07447v4, http://arxiv.org/pdf/1902.07447v4",econ.EM 29166,em,"Ordered probit and logit models have been frequently used to estimate the mean ranking of happiness outcomes (and other ordinal data) across groups. However, it has been recently highlighted that such ranking may not be identified in most happiness applications. We suggest researchers focus on median comparison instead of the mean. This is because the median rank can be identified even if the mean rank is not. Furthermore, median ranks in probit and logit models can be readily estimated using standard statistical softwares. The median ranking, as well as ranking for other quantiles, can also be estimated semiparametrically and we provide a new constrained mixed integer optimization procedure for implementation. We apply it to estimate a happiness equation using General Social Survey data of the US.",Robust Ranking of Happiness Outcomes: A Median Regression Perspective,2019-02-20 21:50:07,"Le-Yu Chen, Ekaterina Oparina, Nattavudh Powdthavee, Sorawoot Srisuma","http://arxiv.org/abs/1902.07696v3, http://arxiv.org/pdf/1902.07696v3",econ.EM 29167,em,"We bound features of counterfactual choices in the nonparametric random utility model of demand, i.e. if observable choices are repeated cross-sections and one allows for unrestricted, unobserved heterogeneity. In this setting, tight bounds are developed on counterfactual discrete choice probabilities and on the expectation and c.d.f. of (functionals of) counterfactual stochastic demand.",Nonparametric Counterfactuals in Random Utility Models,2019-02-22 06:07:40,"Yuichi Kitamura, Jörg Stoye","http://arxiv.org/abs/1902.08350v2, http://arxiv.org/pdf/1902.08350v2",econ.EM 29168,em,"We propose a counterfactual Kaplan-Meier estimator that incorporates exogenous covariates and unobserved heterogeneity of unrestricted dimensionality in duration models with random censoring. 
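For reference, a minimal sketch of the baseline unconditional product-limit estimator on made-up right-censored data (an illustration only; the counterfactual, covariate-adjusted estimator proposed in the paper is not implemented here):
# Baseline Kaplan-Meier (product-limit) survival curve, assuming no tied times.
import numpy as np

rng = np.random.default_rng(4)
duration = rng.exponential(10, 300)           # latent durations
censor = rng.exponential(15, 300)             # independent censoring times
time = np.minimum(duration, censor)
event = (duration <= censor).astype(float)    # 1 if the duration is observed

order = np.argsort(time)
time, event = time[order], event[order]
at_risk = np.arange(len(time), 0, -1)         # number still at risk at each ordered time
survival = np.cumprod(1 - event / at_risk)    # product-limit survival curve
print("estimated S(t) at the median observed time:", round(survival[len(time) // 2], 3))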
Under some regularity conditions, we establish the joint weak convergence of the proposed counterfactual estimator and the unconditional Kaplan-Meier (1958) estimator. Applying the functional delta method, we make inference on the cumulative hazard policy effect, that is, the change of duration dependence in response to a counterfactual policy. We also evaluate the finite sample performance of the proposed counterfactual estimation method in a Monte Carlo study.",Counterfactual Inference in Duration Models with Random Censoring,2019-02-22 17:17:05,Jiun-Hua Su,"http://arxiv.org/abs/1902.08502v1, http://arxiv.org/pdf/1902.08502v1",econ.EM 29169,em,"We show that when a high-dimensional data matrix is the sum of a low-rank matrix and a random error matrix with independent entries, the low-rank component can be consistently estimated by solving a convex minimization problem. We develop a new theoretical argument to establish consistency without assuming sparsity or the existence of any moments of the error matrix, so that fat-tailed continuous random errors such as Cauchy are allowed. The results are illustrated by simulations.",Robust Principal Component Analysis with Non-Sparse Errors,2019-02-23 07:55:29,"Jushan Bai, Junlong Feng","http://arxiv.org/abs/1902.08735v2, http://arxiv.org/pdf/1902.08735v2",econ.EM 29171,em,"An important goal of empirical demand analysis is choice and welfare prediction on counterfactual budget sets arising from potential policy-interventions. Such predictions are more credible when made without arbitrary functional-form/distributional assumptions, and instead based solely on economic rationality, i.e. that choice is consistent with utility maximization by a heterogeneous population. This paper investigates nonparametric economic rationality in the empirically important context of binary choice. We show that under general unobserved heterogeneity, economic rationality is equivalent to a pair of Slutsky-like shape-restrictions on choice-probability functions. The forms of these restrictions differ from Slutsky-inequalities for continuous goods. Unlike McFadden-Richter's stochastic revealed preference, our shape-restrictions (a) are global, i.e. their forms do not depend on which and how many budget-sets are observed, (b) are closed-form, hence easy to impose on parametric/semi/non-parametric models in practical applications, and (c) provide computationally simple, theory-consistent bounds on demand and welfare predictions on counterfactual budget-sets.",The Empirical Content of Binary Choice Models,2019-02-28 13:57:42,Debopam Bhattacharya,"http://arxiv.org/abs/1902.11012v4, http://arxiv.org/pdf/1902.11012v4",econ.EM 29172,em,"McFadden's random-utility model of multinomial choice has long been the workhorse of applied research. We establish shape-restrictions under which multinomial choice-probability functions can be rationalized via random-utility models with nonparametric unobserved heterogeneity and general income-effects. When combined with an additional restriction, the above conditions are equivalent to the canonical Additive Random Utility Model. The sufficiency-proof is constructive, and facilitates nonparametric identification of preference-distributions without requiring identification-at-infinity type arguments. A corollary shows that Slutsky-symmetry, a key condition for previous rationalizability results, is equivalent to absence of income-effects. 
Our results imply theory-consistent nonparametric bounds for choice-probabilities on counterfactual budget-sets. They also apply to widely used random-coefficient models, upon conditioning on observable choice characteristics. The theory of partial differential equations plays a key role in our analysis.",Integrability and Identification in Multinomial Choice Models,2019-02-28 14:12:30,Debopam Bhattacharya,"http://arxiv.org/abs/1902.11017v4, http://arxiv.org/pdf/1902.11017v4",econ.EM 29173,em,"This paper studies estimation of linear panel regression models with heterogeneous coefficients, when both the regressors and the residual contain a possibly common, latent factor structure. Our theory is (nearly) efficient because it is based on the GLS principle, and it is also robust to the specification of the factor structure because it requires neither information on the number of factors nor estimation of the factor structure itself. We first show how the infeasible GLS estimator not only affords an efficiency improvement but, more importantly, provides a bias-adjusted estimator with the conventional limiting distribution in situations where OLS is affected by a first-order bias. The technical challenge resolved in the paper is to show how these properties are preserved for a class of feasible GLS estimators in a double-asymptotics setting. Our theory is illustrated by means of Monte Carlo exercises and then with an empirical application using individual asset returns and firm characteristics data.",Robust Nearly-Efficient Estimation of Large Panels with Factor Structures,2019-02-28 19:01:13,"Marco Avarucci, Paolo Zaffaroni","http://arxiv.org/abs/1902.11181v1, http://arxiv.org/pdf/1902.11181v1",econ.EM 29174,em,"This paper studies high-dimensional regression models with the lasso when data are sampled under multi-way clustering. First, we establish convergence rates for the lasso and post-lasso estimators. Second, we propose a novel inference method based on a post-double-selection procedure and show its asymptotic validity. Our procedure can be easily implemented with existing statistical packages. Simulation results demonstrate that the proposed procedure works well in finite samples. We illustrate the proposed method with two empirical applications in development and growth economics.",Lasso under Multi-way Clustering: Estimation and Post-selection Inference,2019-05-06 18:45:57,"Harold D. Chiang, Yuya Sasaki","http://arxiv.org/abs/1905.02107v3, http://arxiv.org/pdf/1905.02107v3",econ.EM 29175,em,"We propose a new estimation method for heterogeneous causal effects which utilizes a regression discontinuity (RD) design for multiple datasets with different thresholds. The standard RD design is frequently used in applied research, but its results are limited in that the average treatment effect is estimable only at the threshold on the running variable. In applied studies it is often the case that thresholds differ across datasets from different regions or firms; for example, scholarship thresholds differ across states. The proposed estimator, based on an augmented inverse probability weighted local linear estimator, can estimate the average effect at an arbitrary point on the running variable between the thresholds under mild conditions, while adjusting for differences in the distributions of covariates across datasets.
We perform simulations to investigate the performance of the proposed estimator in finite samples.",Regression Discontinuity Design with Multiple Groups for Heterogeneous Causal Effect Estimation,2019-05-11 07:11:49,"Takayuki Toda, Ayako Wakano, Takahiro Hoshino","http://arxiv.org/abs/1905.04443v1, http://arxiv.org/pdf/1905.04443v1",econ.EM 29176,em,"We use novel nonparametric techniques to test for the presence of non-classical measurement error in reported life satisfaction (LS) and study the potential effects of ignoring it. Our dataset comes from Wave 3 of the UK Understanding Society survey, which is drawn from 35,000 British households. Our test finds evidence of measurement error in reported LS for the entire dataset as well as for 26 out of 32 socioeconomic subgroups in the sample. We estimate the joint distribution of reported and latent LS nonparametrically in order to understand the misreporting behavior. We show that this distribution can then be used to estimate parametric models of latent LS. We find that measurement error bias is not severe enough to distort the main drivers of LS, but there is an important difference that is policy relevant: women tend to over-report their latent LS relative to men. This may help explain the gender puzzle of why women are reportedly happier than men despite being worse off on objective outcomes such as income and employment.",Analyzing Subjective Well-Being Data with Misclassification,2019-05-15 12:05:11,"Ekaterina Oparina, Sorawoot Srisuma","http://arxiv.org/abs/1905.06037v1, http://arxiv.org/pdf/1905.06037v1",econ.EM 29933,em,"This paper shows that the endogeneity test using the control function approach in linear instrumental variable models is a variant of the Hausman test. Moreover, we find that the test statistics used in these tests can be numerically ordered, indicating their relative power properties in finite samples.",Some Finite-Sample Results on the Hausman Test,2023-12-17 02:14:02,"Jinyong Hahn, Zhipeng Liao, Nan Liu, Shuyang Sheng","http://arxiv.org/abs/2312.10558v1, http://arxiv.org/pdf/2312.10558v1",econ.EM 29177,em,"In this paper, I perform time series analysis and forecast the monthly value of US housing starts for 2019 using several econometric methods (ARIMA(X), VARX, and (G)ARCH) and machine learning algorithms (artificial neural networks, ridge regression, K-nearest neighbors, and support vector regression), and I combine them in an ensemble model. The ensemble model stacks the predictions from the individual models and returns a weighted average of all predictions. The analysis suggests that the ensemble model performs best among all the models, with the lowest prediction errors, while the econometric models have higher error rates.",Time Series Analysis and Forecasting of the US Housing Starts using Econometric and Machine Learning Model,2019-05-20 05:17:28,Sudiksha Joshi,"http://arxiv.org/abs/1905.07848v1, http://arxiv.org/pdf/1905.07848v1",econ.EM 29178,em,"Time-varying parameter (TVP) models have the potential to be over-parameterized, particularly when the number of variables in the model is large. Global-local priors are increasingly used to induce shrinkage in such models, but the estimates produced by these priors can still have appreciable uncertainty. Sparsification has the potential to reduce this uncertainty and improve forecasts. In this paper, we develop computationally simple methods which both shrink and sparsify TVP models.
In a simulated data exercise, we show the benefits of our shrink-then-sparsify approach in a variety of sparse and dense TVP regressions. In a macroeconomic forecasting exercise, we find that our approach substantially improves forecast performance relative to shrinkage alone.",Inducing Sparsity and Shrinkage in Time-Varying Parameter Models,2019-05-26 14:13:09,"Florian Huber, Gary Koop, Luca Onorante","http://arxiv.org/abs/1905.10787v2, http://arxiv.org/pdf/1905.10787v2",econ.EM 29179,em,"This paper considers unit-root tests in large n and large T heterogeneous panels with cross-sectional dependence generated by unobserved factors. We reconsider the two prevalent approaches in the literature: that of Moon and Perron (2004) and the PANIC setup proposed in Bai and Ng (2004). While these have been considered completely different setups, we show that, in the case of Gaussian innovations, the frameworks are asymptotically equivalent in the sense that both experiments are locally asymptotically normal (LAN) with the same central sequence. Using Le Cam's theory of statistical experiments, we determine the local asymptotic power envelope and derive an optimal test jointly in both setups. We show that the popular Moon and Perron (2004) and Bai and Ng (2010) tests only attain the power envelope when there is no heterogeneity in the long-run variance of the idiosyncratic components. The new test is asymptotically uniformly most powerful irrespective of possible heterogeneity. Moreover, it turns out that for any test satisfying a mild regularity condition, the size and local asymptotic power are the same under both data generating processes. Thus, applied researchers do not need to decide on one of the two frameworks to conduct unit root tests. Monte Carlo simulations corroborate our asymptotic results and document significant gains in finite-sample power if the variances of the idiosyncratic shocks differ substantially across the cross-sectional units.",Local Asymptotic Equivalence of the Bai and Ng (2004) and Moon and Perron (2004) Frameworks for Panel Unit Root Testing,2019-05-27 16:09:49,"Oliver Wichert, I. Gaia Becheri, Feike C. Drost, Ramon van den Akker","http://arxiv.org/abs/1905.11184v1, http://arxiv.org/pdf/1905.11184v1",econ.EM 29180,em,"This paper develops a threshold regression model where an unknown relationship between two variables nonparametrically determines the threshold. We allow the observations to be cross-sectionally dependent so that the model can be applied to determine an unknown spatial border for sample splitting over a random field. We derive the uniform rate of convergence and the nonstandard limiting distribution of the nonparametric threshold estimator. We also obtain the root-n consistency and the asymptotic normality of the regression coefficient estimator. Our model has broad empirical relevance, as illustrated by estimating the tipping point in social segregation problems as a function of demographic characteristics and by determining metropolitan area boundaries using nighttime light intensity collected from satellite imagery. We find that the new empirical results are substantially different from those in existing studies.",Threshold Regression with Nonparametric Sample Splitting,2019-05-30 19:07:46,"Yoonseok Lee, Yulong Wang","http://arxiv.org/abs/1905.13140v3, http://arxiv.org/pdf/1905.13140v3",econ.EM 29181,em,"This paper studies large $N$ and large $T$ conditional quantile panel data models with interactive fixed effects.
We propose a nuclear norm penalized estimator of the coefficients on the covariates and the low-rank matrix formed by the fixed effects. The estimator solves a convex minimization problem and does not require pre-estimation of the fixed effects or of their number. It also allows the number of covariates to grow slowly with $N$ and $T$. We derive an error bound on the estimator that holds uniformly in the quantile level. The order of the bound implies uniform consistency of the estimator and is nearly optimal for the low-rank component. Given the error bound, we also propose a consistent estimator of the number of fixed effects at any quantile level. To derive the error bound, we develop new theoretical arguments under primitive assumptions and new results on random matrices that may be of independent interest. We demonstrate the performance of the estimator via Monte Carlo simulations.",Regularized Quantile Regression with Interactive Fixed Effects,2019-11-01 03:44:14,Junlong Feng,"http://arxiv.org/abs/1911.00166v4, http://arxiv.org/pdf/1911.00166v4",econ.EM 29182,em,"This paper considers panel data models where the conditional quantiles of the dependent variables are additively separable as unknown functions of the regressors and the individual effects. We propose two estimators of the quantile partial effects while controlling for individual heterogeneity. The first estimator is based on local linear quantile regressions, and the second is based on local linear smoothed quantile regressions, both of which are easy to compute in practice. Within the large $T$ framework, we provide sufficient conditions under which the two estimators are shown to be asymptotically normally distributed. In particular, for the first estimator, it is shown that $N<