id | category | text | title | published | author | link | primary_category
---|---|---|---|---|---|---|---
28,968 | em | Despite its critical importance, the famous X-model elaborated by Ziel and
Steinert (2016) has neither been widely studied nor further developed. And
yet, the possibilities to improve the model are as numerous as the fields it
can be applied to. The present paper takes advantage of a technique proposed by
Coulon et al. (2014) to enhance the X-model. Instead of using the wholesale
supply and demand curves as inputs for the model, we rely on the transformed
versions of these curves with a perfectly inelastic demand. As a result,
the computational requirements of our X-model are reduced and its forecasting power
increases substantially. Moreover, our X-model becomes more robust to
outliers present in the initial auction curve data. | X-model: further development and possible modifications | 2019-07-22 12:59:08 | Sergei Kulakov | http://arxiv.org/abs/1907.09206v1, http://arxiv.org/pdf/1907.09206v1 | econ.EM |
28,969 | em | In their IZA Discussion Paper 10247, Johansson and Lee claim that the main
result (Proposition 3) in Abbring and Van den Berg (2003b) does not hold. We
show that their claim is incorrect. At a certain point within their line of
reasoning, they make a rather basic error while transforming one random
variable into another, and this leads them to draw incorrect
conclusions. As a result, their paper can be discarded. | Rebuttal of "On Nonparametric Identification of Treatment Effects in Duration Models" | 2019-07-20 12:18:44 | Jaap H. Abbring, Gerard J. van den Berg | http://arxiv.org/abs/1907.09886v1, http://arxiv.org/pdf/1907.09886v1 | econ.EM |
28,970 | em | This study examines statistical performance of tests for time-varying
properties under misspecified conditional mean and variance. When we test for
time-varying properties of the conditional mean in the case in which data have
no time-varying mean but have time-varying variance, asymptotic tests have size
distortions. This is improved by the use of a bootstrap method. Similarly, when
we test for time-varying properties of the conditional variance in the case in
which data have time-varying mean but no time-varying variance, asymptotic
tests have large size distortions. This is not improved even by the use of
bootstrap methods. We show that tests for time-varying properties of the
conditional mean by the bootstrap are robust regardless of the time-varying
variance model, whereas tests for time-varying properties of the conditional
variance do not perform well in the presence of misspecified time-varying mean. | Testing for time-varying properties under misspecified conditional mean and variance | 2019-07-28 19:47:10 | Daiki Maki, Yasushi Ota | http://arxiv.org/abs/1907.12107v2, http://arxiv.org/pdf/1907.12107v2 | econ.EM |
28,971 | em | This study compares statistical properties of ARCH tests that are robust to
the presence of the misspecified conditional mean. The approaches employed in
this study are based on two nonparametric regressions for the conditional mean.
The first is the ARCH test using Nadaraya-Watson kernel regression. The second is the
ARCH test using the polynomial approximation regression. The two approaches do
not require specification of the conditional mean and can adapt to various
nonlinear models, which are unknown a priori. Accordingly, they are robust to
misspecified conditional mean models. Simulation results show that ARCH tests
based on the polynomial approximation regression approach have better
statistical properties than ARCH tests using the Nadaraya-Watson kernel regression
approach for various nonlinear models. | Robust tests for ARCH in the presence of the misspecified conditional mean: A comparison of nonparametric approaches | 2019-07-30 09:19:18 | Daiki Maki, Yasushi Ota | http://arxiv.org/abs/1907.12752v2, http://arxiv.org/pdf/1907.12752v2 | econ.EM |
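To make the two-step idea concrete, here is a minimal sketch (not the authors' code) of the first approach: estimate the conditional mean by Nadaraya-Watson regression, then apply Engle's LM ARCH test to the residuals. The Gaussian kernel, the fixed bandwidth `h`, the lag order `q`, and the toy data are illustrative assumptions.

```python
# Sketch: ARCH test robust to a misspecified conditional mean. The mean is
# estimated nonparametrically first; Engle's LM test is then applied to the
# residuals. Bandwidth h and lag order q are illustrative choices.
import numpy as np

def nw_residuals(y, x, h=0.5):
    """Residuals from a Nadaraya-Watson regression of y on x (Gaussian kernel)."""
    u = (x[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u**2)                       # kernel weights
    m_hat = (w @ y) / w.sum(axis=1)               # fitted conditional mean
    return y - m_hat

def arch_lm_stat(resid, q=4):
    """Engle's LM test: regress e_t^2 on q of its own lags; stat = T * R^2,
    asymptotically chi^2(q) under the null of no ARCH."""
    e2 = resid**2
    T = len(e2) - q
    X = np.column_stack([np.ones(T)] +
                        [e2[q - j - 1:len(e2) - j - 1] for j in range(q)])
    y = e2[q:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - np.sum((y - X @ beta)**2) / np.sum((y - y.mean())**2)
    return T * r2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.sin(x) + rng.normal(size=500)              # nonlinear mean, no ARCH
print(arch_lm_stat(nw_residuals(y, x)))           # moderate value: null holds
```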
28,972 | em | This paper provides a necessary and sufficient instruments condition assuring
two-step generalized method of moments (GMM) based on the forward orthogonal
deviations transformation is numerically equivalent to two-step GMM based on
the first-difference transformation. The condition also tells us when system
GMM, based on differencing, can be computed using forward orthogonal
deviations. Additionally, it tells us when forward orthogonal deviations and
differencing do not lead to the same GMM estimator. When estimators based on
these two transformations differ, Monte Carlo simulations indicate that
estimators based on forward orthogonal deviations have better finite sample
properties than estimators based on differencing. | A Comparison of First-Difference and Forward Orthogonal Deviations GMM | 2019-07-30 16:19:35 | Robert F. Phillips | http://arxiv.org/abs/1907.12880v1, http://arxiv.org/pdf/1907.12880v1 | econ.EM |
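For reference, a minimal sketch of the two transformations being compared, assuming a balanced single series without covariates; the scaling of the forward orthogonal deviations follows the standard Arellano-Bover formulation.

```python
import numpy as np

def first_difference(y):
    """First-difference transform: Delta y_t = y_t - y_{t-1}."""
    return y[1:] - y[:-1]

def forward_orthogonal_deviations(y):
    """FOD transform: subtract the mean of all future observations, with a
    scale factor chosen so homoskedastic errors stay homoskedastic."""
    T = len(y)
    out = np.empty(T - 1)
    for t in range(T - 1):
        fwd = y[t + 1:]                              # all future observations
        out[t] = np.sqrt(len(fwd) / (len(fwd) + 1)) * (y[t] - fwd.mean())
    return out

y = np.array([1.0, 2.0, 4.0, 7.0])
print(first_difference(y))                 # [1. 2. 3.]
print(forward_orthogonal_deviations(y))
```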
29,012 | em | This paper introduces a version of the interdependent value model of Milgrom
and Weber (1982), where the signals are given by an index gathering signal
shifters observed by the econometrician and private ones specific to each
bidder. The model primitives are shown to be nonparametrically identified from
first-price auction bids under a testable mild rank condition. Identification
holds for all possible signal values. This makes it possible to consider a wide range of
counterfactuals where this is important, such as the expected revenue in a second-price
auction. An estimation procedure is briefly discussed. | Nonparametric identification of an interdependent value model with buyer covariates from first-price auction bids | 2019-10-23 19:12:17 | Nathalie Gimenes, Emmanuel Guerre | http://arxiv.org/abs/1910.10646v1, http://arxiv.org/pdf/1910.10646v1 | econ.EM |
28,973 | em | Given the unconfoundedness assumption, we propose new nonparametric
estimators for the reduced dimensional conditional average treatment effect
(CATE) function. In the first stage, the nuisance functions necessary for
identifying CATE are estimated by machine learning methods, allowing the number
of covariates to be comparable to or larger than the sample size. The second
stage consists of a low-dimensional local linear regression, reducing CATE to a
function of the covariate(s) of interest. We consider two variants of the
estimator depending on whether the nuisance functions are estimated over the
full sample or over a hold-out sample. Building on Belloni et al. (2017) and
Chernozhukov et al. (2018), we derive functional limit theory for the
estimators and provide an easy-to-implement procedure for uniform inference
based on the multiplier bootstrap. The empirical application revisits the
effect of maternal smoking on a baby's birth weight as a function of the
mother's age. | Estimation of Conditional Average Treatment Effects with High-Dimensional Data | 2019-08-07 02:40:47 | Qingliang Fan, Yu-Chin Hsu, Robert P. Lieli, Yichong Zhang | http://arxiv.org/abs/1908.02399v5, http://arxiv.org/pdf/1908.02399v5 | econ.EM |
28,974 | em | We consider nonparametric identification of independent private value
first-price auction models, in which the analyst only observes winning bids.
Our benchmark model assumes an exogenous number of bidders N. We show that, if
the bidders observe N, the resulting discontinuities in the winning bid density
can be used to identify the distribution of N. The private value distribution
can be nonparametrically identified in a second step. This extends, under
testable identification conditions, to the case where N is a number of
potential buyers, who bid with some unknown probability. Identification also
holds in the presence of additive unobserved heterogeneity drawn from some
parametric distributions. A last class of extensions deals with cartels which
can change size across auctions due to varying bidder cartel membership.
Identification still holds if the econometrician observes winner identities and
winning bids, provided an (unknown) bidder is always a cartel member. The cartel
participation probabilities of other bidders can also be identified. An
application to USFS timber auction data illustrates the usefulness of
discontinuities to analyze bidder participation. | Nonparametric Identification of First-Price Auction with Unobserved Competition: A Density Discontinuity Framework | 2019-08-15 13:06:05 | Emmanuel Guerre, Yao Luo | http://arxiv.org/abs/1908.05476v2, http://arxiv.org/pdf/1908.05476v2 | econ.EM |
28,975 | em | Establishing that a demand mapping is injective is core first step for a
variety of methodologies. When a version of the law of demand holds, global
injectivity can be checked by seeing whether the demand mapping is constant
over any line segments. When we add the assumption of differentiability, we
obtain necessary and sufficient conditions for injectivity that generalize the
classical Gale and Nikaido (1965) conditions for quasi-definite Jacobians. | Injectivity and the Law of Demand | 2019-08-15 22:13:43 | Roy Allen | http://arxiv.org/abs/1908.05714v1, http://arxiv.org/pdf/1908.05714v1 | econ.EM |
28,976 | em | Policy evaluation is central to economic data analysis, but economists mostly
work with observational data in view of limited opportunities to carry out
controlled experiments. In the potential outcome framework, the panel data
approach (Hsiao, Ching and Wan, 2012) constructs the counterfactual by
exploiting the correlation between cross-sectional units in panel data. The
choice of cross-sectional control units, a key step in its implementation, is
nevertheless unresolved in a data-rich environment where many possible controls
are at the researcher's disposal. We propose the forward selection method to
choose control units, and establish the validity of post-selection inference.
Our asymptotic framework allows the number of possible controls to grow much
faster than the time dimension. The easy-to-implement algorithms and their
theoretical guarantee extend the panel data approach to big data settings. | Forward-Selected Panel Data Approach for Program Evaluation | 2019-08-16 12:00:57 | Zhentao Shi, Jingyi Huang | http://arxiv.org/abs/1908.05894v3, http://arxiv.org/pdf/1908.05894v3 | econ.EM |
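A minimal sketch of what a greedy forward-selection step for control units could look like, assuming selection by the greatest reduction in pre-treatment sum of squared residuals; the paper's stopping rule and post-selection inference are not reproduced, and `forward_select_controls` is a hypothetical name.

```python
# Sketch: repeatedly add the control unit whose inclusion most reduces the
# pre-treatment SSR of an OLS fit of the treated unit on the chosen controls.
import numpy as np

def forward_select_controls(y_treated, Y_controls, n_select):
    T, N = Y_controls.shape
    chosen = []
    for _ in range(n_select):
        best_j, best_ssr = None, np.inf
        for j in range(N):
            if j in chosen:
                continue
            X = np.column_stack([np.ones(T), Y_controls[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(X, y_treated, rcond=None)
            ssr = np.sum((y_treated - X @ beta)**2)
            if ssr < best_ssr:
                best_j, best_ssr = j, ssr
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
Y = rng.normal(size=(40, 30))                      # 40 pre-periods, 30 candidates
y1 = Y[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=40)
print(forward_select_controls(y1, Y, n_select=3))  # ideally picks columns 0, 1, 2
```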
28,977 | em | A family of models of individual discrete choice are constructed by means of
statistical averaging of choices made by a subject in a reinforcement learning
process, where the subject has a short, k-term memory span. The choice
probabilities in these models combine in a non-trivial, non-linear way the
initial learning bias and the experience gained through learning. The
properties of such models are discussed and, in particular, it is shown that
probabilities deviate from Luce's Choice Axiom, even if the initial bias
adheres to it. Moreover, we show that the latter property is recovered as the
memory span becomes large.
Two applications in utility theory are considered. In the first, we use the
discrete choice model to generate a binary preference relation on simple
lotteries. We show that the preferences violate the transitivity and independence
axioms of expected utility theory. Furthermore, we establish the dependence of
the preferences on frames, with risk aversion for gains, and risk seeking for
losses. Based on these findings, we next propose a parametric model of choice
based on the probability maximization principle, as a model for deviations from
the expected utility principle. To illustrate the approach we apply it to the
classical problem of demand for insurance. | A model of discrete choice based on reinforcement learning under short-term memory | 2019-08-16 22:15:33 | Misha Perepelitsa | http://arxiv.org/abs/1908.06133v1, http://arxiv.org/pdf/1908.06133v1 | econ.EM |
28,978 | em | We propose a new finite sample corrected variance estimator for the linear
generalized method of moments (GMM) including the one-step, two-step, and
iterated estimators. Our formula additionally corrects for the
over-identification bias in variance estimation on top of the commonly used
finite sample correction of Windmeijer (2005), which corrects for the bias from
estimating the efficient weight matrix, and is thus doubly corrected. An important
feature of the proposed double correction is that it automatically provides
robustness to misspecification of the moment condition. In contrast, the
conventional variance estimator and the Windmeijer correction are inconsistent
under misspecification. That is, the proposed double correction formula
provides a convenient way to obtain improved inference under correct
specification and robustness against misspecification at the same time. | A Doubly Corrected Robust Variance Estimator for Linear GMM | 2019-08-21 15:41:08 | Jungbin Hwang, Byunghoon Kang, Seojeong Lee | http://arxiv.org/abs/1908.07821v2, http://arxiv.org/pdf/1908.07821v2 | econ.EM |
29,013 | em | This paper deals with the time-varying high dimensional covariance matrix
estimation. We propose two covariance matrix estimators corresponding to a
time-varying approximate factor model and a time-varying approximate
characteristic-based factor model, respectively. The models allow the factor
loadings, factor covariance matrix, and error covariance matrix to change
smoothly over time. We study the rate of convergence of each estimator. Our
simulation and empirical study indicate that time-varying covariance matrix
estimators generally perform better than time-invariant covariance matrix
estimators. Also, if characteristics are available that genuinely explain true
loadings, the characteristics can be used to estimate loadings more precisely
in finite samples; their helpfulness increases when loadings rapidly change. | Estimating a Large Covariance Matrix in Time-varying Factor Models | 2019-10-26 03:08:24 | Jaeheon Jung | http://arxiv.org/abs/1910.11965v1, http://arxiv.org/pdf/1910.11965v1 | econ.EM |
28,979 | em | This paper considers the practically important case of nonparametrically
estimating heterogeneous average treatment effects that vary with a limited
number of discrete and continuous covariates in a selection-on-observables
framework where the number of possible confounders is very large. We propose a
two-step estimator for which the first step is estimated by machine learning.
We show that this estimator has desirable statistical properties like
consistency, asymptotic normality and rate double robustness. In particular, we
derive the coupled convergence conditions between the nonparametric and the
machine learning steps. We also show that estimating population average
treatment effects by averaging the estimated heterogeneous effects is
semi-parametrically efficient. The new estimator is applied to an empirical example: the
effects of mothers' smoking during pregnancy on the resulting birth weight. | Nonparametric estimation of causal heterogeneity under high-dimensional confounding | 2019-08-23 15:18:37 | Michael Zimmert, Michael Lechner | http://arxiv.org/abs/1908.08779v1, http://arxiv.org/pdf/1908.08779v1 | econ.EM |
28,980 | em | The literature on stochastic programming typically restricts attention to
problems that fulfill constraint qualifications. The literature on estimation
and inference under partial identification frequently restricts the geometry of
identified sets with diverse high-level assumptions. These superficially appear
to be different approaches to closely related problems. We extensively analyze
their relation. Among other things, we show that for partial identification
through pure moment inequalities, numerous assumptions from the literature
essentially coincide with the Mangasarian-Fromowitz constraint qualification.
This clarifies the relation between well-known contributions, including within
econometrics, and elucidates stringency, as well as ease of verification, of
some high-level assumptions in seminal papers. | Constraint Qualifications in Partial Identification | 2019-08-24 10:34:43 | Hiroaki Kaido, Francesca Molinari, Jörg Stoye | http://dx.doi.org/10.1017/S0266466621000207, http://arxiv.org/abs/1908.09103v4, http://arxiv.org/pdf/1908.09103v4 | econ.EM |
28,981 | em | We develop a new extreme value theory for repeated cross-sectional and panel
data to construct asymptotically valid confidence intervals (CIs) for
conditional extremal quantiles from a fixed number $k$ of nearest-neighbor tail
observations. As a by-product, we also construct CIs for extremal quantiles of
coefficients in linear random coefficient models. For any fixed $k$, the CIs
are uniformly valid without parametric assumptions over a set of nonparametric
data generating processes associated with various tail indices. Simulation
studies show that our CIs exhibit superior small-sample coverage and length
properties compared with alternative nonparametric methods based on asymptotic
normality. Applying the proposed method to Natality Vital Statistics, we study
factors of extremely low birth weights. We find that signs of major effects are
the same as those found in preceding studies based on parametric models, but
with different magnitudes. | Fixed-k Inference for Conditional Extremal Quantiles | 2019-09-01 01:39:33 | Yuya Sasaki, Yulong Wang | http://arxiv.org/abs/1909.00294v3, http://arxiv.org/pdf/1909.00294v3 | econ.EM |
28,982 | em | We study the incidental parameter problem for the ``three-way'' Poisson
{Pseudo-Maximum Likelihood} (``PPML'') estimator recently recommended for
identifying the effects of trade policies and in other panel data gravity
settings. Despite the number and variety of fixed effects involved, we confirm
PPML is consistent for fixed $T$ and we show it is in fact the only estimator
among a wide range of PML gravity estimators that is generally consistent in
this context when $T$ is fixed. At the same time, asymptotic confidence
intervals in fixed-$T$ panels are not correctly centered at the true point
estimates, and cluster-robust variance estimates used to construct standard
errors are generally biased as well. We characterize each of these biases
analytically and show both numerically and empirically that they are salient
even for real-data settings with a large number of countries. We also offer
practical remedies that can be used to obtain more reliable inferences of the
effects of trade policies and other time-varying gravity variables, which we
make available via an accompanying Stata package called ppml_fe_bias. | Bias and Consistency in Three-way Gravity Models | 2019-09-03 20:54:06 | Martin Weidner, Thomas Zylkin | http://arxiv.org/abs/1909.01327v6, http://arxiv.org/pdf/1909.01327v6 | econ.EM |
28,983 | em | We analyze the challenges for inference in difference-in-differences (DID)
when there is spatial correlation. We present novel theoretical insights and
empirical evidence on the settings in which ignoring spatial correlation should
lead to more or less distortions in DID applications. We show that details such
as the time frame used in the estimation, the choice of the treated and control
groups, and the choice of the estimator, are key determinants of distortions
due to spatial correlation. We also analyze the feasibility and trade-offs
involved in a series of alternatives to take spatial correlation into account.
Given that, we provide relevant recommendations for applied researchers on how
to mitigate and assess the possibility of inference distortions due to spatial
correlation. | Inference in Difference-in-Differences: How Much Should We Trust in Independent Clusters? | 2019-09-04 16:19:25 | Bruno Ferman | http://arxiv.org/abs/1909.01782v7, http://arxiv.org/pdf/1909.01782v7 | econ.EM |
28,984 | em | This paper explores the estimation of a panel data model with cross-sectional
interaction that is flexible both in its approach to specifying the network of
connections between cross-sectional units, and in controlling for unobserved
heterogeneity. It is assumed that there are different sources of information
available on a network, which can be represented in the form of multiple
weights matrices. These matrices may reflect observed links, different measures
of connectivity, groupings or other network structures, and the number of
matrices may be increasing with sample size. A penalised quasi-maximum
likelihood estimator is proposed which aims to alleviate the risk of network
misspecification by shrinking the coefficients of irrelevant weights matrices
to exactly zero. Moreover, controlling for unobserved factors in estimation
provides a safeguard against the misspecification that might arise from
unobserved heterogeneity. The asymptotic properties of the estimator are
derived in a framework where the true value of each parameter remains fixed as
the total number of parameters increases. A Monte Carlo simulation is used to
assess finite sample performance, and in an empirical application the method is
applied to study the prevalence of network spillovers in determining growth
rates across countries. | Shrinkage Estimation of Network Spillovers with Factor Structured Errors | 2019-09-06 14:28:41 | Ayden Higgins, Federico Martellosio | http://arxiv.org/abs/1909.02823v4, http://arxiv.org/pdf/1909.02823v4 | econ.EM |
29,254 | em | We consider the problem of inference in Difference-in-Differences (DID) when
there are few treated units and errors are spatially correlated. We first show
that, when there is a single treated unit, some existing inference methods
designed for settings with few treated and many control units remain
asymptotically valid when errors are weakly dependent. However, these methods
may be invalid with more than one treated unit. We propose alternatives that
are asymptotically valid in this setting, even when the relevant distance
metric across units is unavailable. | Inference in Difference-in-Differences with Few Treated Units and Spatial Correlation | 2020-06-30 20:58:43 | Luis Alvarez, Bruno Ferman | http://arxiv.org/abs/2006.16997v7, http://arxiv.org/pdf/2006.16997v7 | econ.EM |
28,985 | em | The Economy Watcher Survey, which is a market survey published by the
Japanese government, contains *assessments of current and future economic
conditions* by people from various fields. Although this survey provides
insights regarding economic policy for policymakers, a clear definition of the
word "future" in future economic conditions is not provided. Hence, the
assessments respondents provide in the survey are simply based on their
interpretations of the meaning of "future." This motivated us to reveal the
different interpretations of the future in their judgments of future economic
conditions by applying weakly supervised learning and text mining. In our
research, we separate the assessments of future economic conditions into
economic conditions of the near and distant future using learning from positive
and unlabeled data (PU learning). Because the dataset includes data from
several periods, we devised a new architecture to enable neural networks to
conduct PU learning based on the idea of multi-task learning to efficiently
learn a classifier. Our empirical analysis confirmed that the proposed method
could separate the future economic conditions, and we interpreted the
classification results to obtain intuitions for policymaking. | Identifying Different Definitions of Future in the Assessment of Future Economic Conditions: Application of PU Learning and Text Mining | 2019-09-08 02:13:46 | Masahiro Kato | http://arxiv.org/abs/1909.03348v3, http://arxiv.org/pdf/1909.03348v3 | econ.EM |
28,986 | em | This paper investigates double/debiased machine learning (DML) under multiway
clustered sampling environments. We propose a novel multiway cross fitting
algorithm and a multiway DML estimator based on this algorithm. We also develop
a multiway cluster robust standard error formula. Simulations indicate that the
proposed procedure has favorable finite sample performance. Applying the
proposed method to market share data for demand analysis, we obtain larger
two-way cluster robust standard errors than non-robust ones. | Multiway Cluster Robust Double/Debiased Machine Learning | 2019-09-08 19:03:37 | Harold D. Chiang, Kengo Kato, Yukun Ma, Yuya Sasaki | http://arxiv.org/abs/1909.03489v3, http://arxiv.org/pdf/1909.03489v3 | econ.EM |
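As background for the multiway setting, here is a minimal sketch of the familiar two-way cluster-robust OLS variance via inclusion-exclusion (add the two one-way "meat" matrices, subtract the intersection, in the spirit of Cameron, Gelbach, and Miller); the paper's multiway cross-fitting inside DML is not reproduced, and the simulated clusters are illustrative.

```python
# Sketch: two-way cluster-robust OLS standard errors via inclusion-exclusion,
# V = V_g1 + V_g2 - V_{g1 x g2}. Note this estimator need not be positive
# semi-definite in small samples.
import numpy as np

def cluster_meat(X, e, groups):
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = X[groups == g].T @ e[groups == g]     # cluster score sum
        meat += np.outer(s, s)
    return meat

def twoway_cluster_se(X, y, g1, g2):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    inter = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])   # intersection clusters
    meat = (cluster_meat(X, e, g1) + cluster_meat(X, e, g2)
            - cluster_meat(X, e, inter))
    V = bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
n = 200
g1, g2 = rng.integers(0, 10, n), rng.integers(0, 8, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
print(twoway_cluster_se(X, y, g1, g2))
```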
28,987 | em | A desire to understand the decision of the UK to leave the European Union,
Brexit, in the referendum of June 2016 has continued to occupy academics, the
media and politicians. Using the topological data analysis ball mapper, we extract
information from multi-dimensional datasets gathered on Brexit voting and
regional socio-economic characteristics. While we find broad patterns
consistent with extant empirical work, we also show that support for Leave
drew from a far more homogeneous demographic than Remain. Obtaining votes from
this concise set was more straightforward for Leave campaigners than was
Remain's task of mobilising a diverse group to oppose Brexit. | An Economic Topology of the Brexit vote | 2019-09-08 19:05:40 | Pawel Dlotko, Lucy Minford, Simon Rudkin, Wanling Qiu | http://arxiv.org/abs/1909.03490v2, http://arxiv.org/pdf/1909.03490v2 | econ.EM |
28,988 | em | We recast the synthetic controls for evaluating policies as a counterfactual
prediction problem and replace its linear regression with a nonparametric model
inspired by machine learning. The proposed method enables us to achieve
accurate counterfactual predictions and we provide theoretical guarantees. We
apply our method to a highly debated policy: the relocation of the US embassy
to Jerusalem. In Israel and Palestine, we find that the average number of
weekly conflicts has increased by roughly 103% over 48 weeks since the
relocation was announced on December 6, 2017. By using conformal inference and
placebo tests, we justify our model and find the increase to be statistically
significant. | Tree-based Synthetic Control Methods: Consequences of moving the US Embassy | 2019-09-09 19:15:03 | Nicolaj Søndergaard Mühlbach, Mikkel Slot Nielsen | http://arxiv.org/abs/1909.03968v3, http://arxiv.org/pdf/1909.03968v3 | econ.EM |
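To convey the basic recipe only (not the authors' exact method or their conformal inference), the sketch below fits a nonparametric learner on pre-treatment periods and predicts the counterfactual afterwards; the random forest and the simulated data are illustrative stand-ins.

```python
# Sketch: replace the linear synthetic-control regression with a
# nonparametric learner; fit pre-treatment, predict the counterfactual after.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
T0, T1, N = 80, 20, 10                       # pre/post periods, donor units
donors = rng.normal(size=(T0 + T1, N))
treated = 0.7 * donors[:, 0] + rng.normal(scale=0.3, size=T0 + T1)
treated[T0:] += 1.0                          # true treatment effect of 1 after T0

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(donors[:T0], treated[:T0])         # learn treated = f(donors) pre-treatment
counterfactual = model.predict(donors[T0:])
print((treated[T0:] - counterfactual).mean())   # estimated average effect, near 1
```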
28,989 | em | We analyze the properties of matching estimators when there are few treated,
but many control observations. We show that, under standard assumptions, the
nearest neighbor matching estimator for the average treatment effect on the
treated is asymptotically unbiased in this framework. However, when the number
of treated observations is fixed, the estimator is not consistent, and it is
generally not asymptotically normal. Since standard inference methods are
inadequate, we propose alternative inference methods, based on the theory of
randomization tests under approximate symmetry, that are asymptotically valid
in this framework. We show that these tests are valid under relatively strong
assumptions when the number of treated observations is fixed, and under weaker
assumptions when the number of treated observations increases, but at a lower
rate relative to the number of control observations. | Matching Estimators with Few Treated and Many Control Observations | 2019-09-11 17:49:03 | Bruno Ferman | http://arxiv.org/abs/1909.05093v4, http://arxiv.org/pdf/1909.05093v4 | econ.EM |
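A minimal sketch of the nearest-neighbor matching estimator for the ATT that the paper studies, assuming Euclidean distance and one match per treated unit; the inference problem with few treated units, which is the paper's point, is deliberately ignored here.

```python
# Sketch: each treated unit is matched to its single closest control in
# covariate space; the ATT is the mean outcome gap over matched pairs.
import numpy as np

def att_nn_matching(X_t, y_t, X_c, y_c):
    d = np.linalg.norm(X_t[:, None, :] - X_c[None, :, :], axis=2)
    matched = y_c[d.argmin(axis=1)]          # outcome of closest control per treated
    return (y_t - matched).mean()

rng = np.random.default_rng(0)
X_c = rng.normal(size=(500, 2))
y_c = X_c.sum(axis=1) + rng.normal(size=500)
X_t = rng.normal(size=(5, 2))                # only five treated units
y_t = X_t.sum(axis=1) + 1.0                  # true ATT = 1
print(att_nn_matching(X_t, y_t, X_c, y_c))
```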
28,990 | em | The paper proposes a quantile-regression inference framework for first-price
auctions with symmetric risk-neutral bidders under the independent
private-value paradigm. It is first shown that a private-value quantile
regression generates a quantile regression for the bids. The private-value
quantile regression can be easily estimated from the bid quantile regression
and its derivative with respect to the quantile level. This also allows testing
various specification or exogeneity null hypotheses using the observed bids
in a simple way. A new local polynomial technique is proposed to estimate the
latter over the whole quantile level interval. Plug-in estimation of
functionals is also considered, as needed for the expected revenue or the case
of CRRA risk-averse bidders, which is amenable to our framework. A
quantile-regression analysis of USFS timber auctions is found more appropriate than
the homogenized-bid methodology and illustrates the contribution of each
explanatory variable to the private-value distribution. Linear interactive
sieve extensions are proposed and studied in the Appendices. | Quantile regression methods for first-price auctions | 2019-09-12 13:05:37 | Nathalie Gimenes, Emmanuel Guerre | http://arxiv.org/abs/1909.05542v2, http://arxiv.org/pdf/1909.05542v2 | econ.EM |
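The mapping the abstract alludes to is that, in a symmetric IPV first-price auction with I bidders, the private-value conditional quantile satisfies V(a|x) = B(a|x) + a * dB(a|x)/da / (I - 1), where B(.|x) is the bid quantile regression. A rough sketch follows, using a crude finite-difference derivative rather than the paper's local polynomial estimator; the uniform design is illustrative.

```python
# Sketch of the bid-to-value quantile mapping:
#   V(a|x) = B(a|x) + a * dB(a|x)/da / (I - 1).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def value_quantile(df, a, I, x0, eps=0.01):
    fit = lambda q: smf.quantreg("bid ~ x", df).fit(q=q).params
    p_mid, p_hi, p_lo = fit(a), fit(a + eps), fit(a - eps)
    b = p_mid["Intercept"] + p_mid["x"] * x0            # bid quantile at x0
    db = ((p_hi["Intercept"] - p_lo["Intercept"])
          + (p_hi["x"] - p_lo["x"]) * x0) / (2 * eps)   # finite-difference dB/da
    return b + a * db / (I - 1)

rng = np.random.default_rng(0)
n, I = 2000, 4
x = rng.uniform(0, 1, n)
v = x + rng.uniform(0, 1, n)                 # values uniform on [x, x + 1]
bid = x + (I - 1) / I * (v - x)              # equilibrium bid
df = pd.DataFrame({"bid": bid, "x": x})
print(value_quantile(df, a=0.5, I=I, x0=0.5))   # true V(0.5|0.5) = 1.0
```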
28,991 | em | This paper develops a consistent series-based specification test for
semiparametric panel data models with fixed effects. The test statistic
resembles the Lagrange Multiplier (LM) test statistic in parametric models and
is based on a quadratic form in the restricted model residuals. The use of
series methods facilitates both estimation of the null model and computation of
the test statistic. The asymptotic distribution of the test statistic is
standard normal, so that appropriate critical values can easily be computed.
The projection property of series estimators allows me to develop a degrees of
freedom correction. This correction makes it possible to account for the
estimation variance and obtain refined asymptotic results. It also
substantially improves the finite sample performance of the test. | A Consistent LM Type Specification Test for Semiparametric Panel Data Models | 2019-09-12 16:42:16 | Ivan Korolev | http://arxiv.org/abs/1909.05649v1, http://arxiv.org/pdf/1909.05649v1 | econ.EM |
28,992 | em | One simple, and often very effective, way to attenuate the impact of nuisance
parameters on maximum likelihood estimation of a parameter of interest is to
recenter the profile score for that parameter. We apply this general principle
to the quasi-maximum likelihood estimator (QMLE) of the autoregressive
parameter $\lambda$ in a spatial autoregression. The resulting estimator for
$\lambda$ has better finite sample properties compared to the QMLE for
$\lambda$, especially in the presence of a large number of covariates. It can
also solve the incidental parameter problem that arises, for example, in social
interaction models with network fixed effects, or in spatial panel models with
individual or time fixed effects. However, spatial autoregressions present
specific challenges for this type of adjustment, because recentering the
profile score may cause the adjusted estimate to be outside the usual parameter
space for $\lambda$. Conditions for this to happen are given, and implications
are discussed. For inference, we propose confidence intervals based on a
Lugannani-Rice approximation to the distribution of the adjusted QMLE of
$\lambda$. Based on our simulations, the coverage properties of these intervals
are excellent even in models with a large number of covariates. | Adjusted QMLE for the spatial autoregressive parameter | 2019-09-18 02:23:50 | Federico Martellosio, Grant Hillier | http://arxiv.org/abs/1909.08141v1, http://arxiv.org/pdf/1909.08141v1 | econ.EM |
28,993 | em | This paper investigates and extends the computationally attractive
nonparametric random coefficients estimator of Fox, Kim, Ryan, and Bajari
(2011). We show that their estimator is a special case of the nonnegative
LASSO, explaining its sparse nature observed in many applications. Recognizing
this link, we extend the estimator, transforming it to a special case of the
nonnegative elastic net. The extension improves the estimator's recovery of the
true support and allows for more accurate estimates of the random coefficients'
distribution. Our estimator is a generalization of the original estimator and
is therefore guaranteed to have a model fit at least as good as the original
one. A theoretical analysis of both estimators' properties shows that, under
conditions, our generalized estimator approximates the true distribution more
accurately. Two Monte Carlo experiments and an application to a travel mode
data set illustrate the improved performance of the generalized estimator. | Nonparametric Estimation of the Random Coefficients Model: An Elastic Net Approach | 2019-09-18 16:22:28 | Florian Heiss, Stephan Hetzenecker, Maximilian Osterhaus | http://arxiv.org/abs/1909.08434v2, http://arxiv.org/pdf/1909.08434v2 | econ.EM |
28,994 | em | In this paper, a statistical model for panel data with unobservable grouped
factor structures which are correlated with the regressors and the group
membership can be unknown. The factor loadings are assumed to be in different
subspaces and the subspace clustering for factor loadings are considered. A
method called least squares subspace clustering estimate (LSSC) is proposed to
estimate the model parameters by minimizing the least-square criterion and to
perform the subspace clustering simultaneously. The consistency of the proposed
subspace clustering is proved and the asymptotic properties of the estimation
procedure are studied under certain conditions. A Monte Carlo simulation study
is used to illustrate the advantages of the proposed method. Further
considerations for the situations that the number of subspaces for factors, the
dimension of factors and the dimension of subspaces are unknown are also
discussed. For illustrative purposes, the proposed method is applied to study
the linkage between income and democracy across countries while subspace
patterns of unobserved factors and factor loadings are allowed. | Subspace Clustering for Panel Data with Interactive Effects | 2019-09-22 04:51:11 | Jiangtao Duan, Wei Gao, Hao Qu, Hon Keung Tony | http://arxiv.org/abs/1909.09928v2, http://arxiv.org/pdf/1909.09928v2 | econ.EM |
28,995 | em | We show that moment inequalities in a wide variety of economic applications
have a particular linear conditional structure. We use this structure to
construct uniformly valid confidence sets that remain computationally tractable
even in settings with nuisance parameters. We first introduce least favorable
critical values which deliver non-conservative tests if all moments are
binding. Next, we introduce a novel conditional inference approach which
ensures a strong form of insensitivity to slack moments. Our recommended
approach is a hybrid technique which combines desirable aspects of the least
favorable and conditional methods. The hybrid approach performs well in
simulations calibrated to Wollmann (2018), with favorable power and
computational time comparisons relative to existing alternatives. | Inference for Linear Conditional Moment Inequalities | 2019-09-22 21:24:09 | Isaiah Andrews, Jonathan Roth, Ariel Pakes | http://arxiv.org/abs/1909.10062v5, http://arxiv.org/pdf/1909.10062v5 | econ.EM |
28,996 | em | There are many environments in econometrics which require nonseparable
modeling of a structural disturbance. In a nonseparable model with endogenous
regressors, key conditions are validity of instrumental variables and
monotonicity of the model in a scalar unobservable variable. Under these
conditions the nonseparable model is equivalent to an instrumental quantile
regression model. A failure of the key conditions, however, makes instrumental
quantile regression potentially inconsistent. This paper develops a methodology
for testing whether the instrumental quantile regression model
is correctly specified. Our test statistic is asymptotically normally
distributed under correct specification and consistent against any alternative
model. In addition, test statistics to justify the model simplification are
established. Finite sample properties are examined in a Monte Carlo study and
an empirical illustration is provided. | Specification Testing in Nonparametric Instrumental Quantile Regression | 2019-09-23 05:41:14 | Christoph Breunig | http://dx.doi.org/10.1017/S0266466619000288, http://arxiv.org/abs/1909.10129v1, http://arxiv.org/pdf/1909.10129v1 | econ.EM |
28,997 | em | This paper proposes several tests of restricted specification in
nonparametric instrumental regression. Based on series estimators, test
statistics are established that allow for tests of the general model against a
parametric or nonparametric specification as well as a test of exogeneity of
the vector of regressors. The tests' asymptotic distributions under correct
specification are derived and their consistency against any alternative model
is shown. Under a sequence of local alternative hypotheses, the asymptotic
distributions of the tests are derived. Moreover, uniform consistency is
established over a class of alternatives whose distance to the null hypothesis
shrinks appropriately as the sample size increases. A Monte Carlo study
examines finite sample performance of the test statistics. | Goodness-of-Fit Tests based on Series Estimators in Nonparametric Instrumental Regression | 2019-09-23 05:55:22 | Christoph Breunig | http://dx.doi.org/10.1016/j.jeconom.2014.09.006, http://arxiv.org/abs/1909.10133v1, http://arxiv.org/pdf/1909.10133v1 | econ.EM |
28,998 | em | Nonparametric series regression often involves specification search over the
tuning parameter, i.e., evaluating estimates and confidence intervals with a
different number of series terms. This paper develops pointwise and uniform
inferences for conditional mean functions in nonparametric series estimations
that are uniform in the number of series terms. As a result, this paper
constructs confidence intervals and confidence bands with possibly
data-dependent series terms that have valid asymptotic coverage probabilities.
This paper also considers a partially linear model setup and develops inference
methods for the parametric part uniform in the number of series terms. The
finite sample performance of the proposed methods is investigated in various
simulation setups as well as in an illustrative example, i.e., the
nonparametric estimation of the wage elasticity of the expected labor supply
from Blomquist and Newey (2002). | Inference in Nonparametric Series Estimation with Specification Searches for the Number of Series Terms | 2019-09-26 17:45:13 | Byunghoon Kang | http://arxiv.org/abs/1909.12162v2, http://arxiv.org/pdf/1909.12162v2 | econ.EM |
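To fix ideas, the sketch below computes the object whose dependence on the number of series terms the paper worries about: a polynomial series fit re-estimated for several K. The paper's uniform-in-K inference is not implemented, and the simulated design is arbitrary.

```python
# Sketch: polynomial series regression refit for several numbers of terms K.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
y = np.sin(2 * x) + rng.normal(scale=0.3, size=400)

for K in (2, 4, 8, 16):
    X = np.vander(x, K + 1, increasing=True)      # basis 1, x, ..., x^K
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ beta)**2)
    print(K, round(mse, 4))                       # in-sample fit for each K
```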
28,999 | em | In this study, we investigate estimation and inference on a low-dimensional
causal parameter in the presence of high-dimensional controls in an
instrumental variable quantile regression. Our proposed econometric procedure
builds on the Neyman-type orthogonal moment conditions of Chernozhukov, Hansen,
and Wuthrich (2018) and is thus relatively insensitive to
the estimation of the nuisance parameters. The Monte Carlo experiments show
that the estimator copes well with high-dimensional controls. We also apply the
procedure to empirically reinvestigate the quantile treatment effect of 401(k)
participation on accumulated wealth. | Debiased/Double Machine Learning for Instrumental Variable Quantile Regressions | 2019-09-27 13:11:18 | Jau-er Chen, Chien-Hsun Huang, Jia-Jyun Tien | http://arxiv.org/abs/1909.12592v3, http://arxiv.org/pdf/1909.12592v3 | econ.EM |
29,000 | em | Price indexes in time and space is a most relevant topic in statistical
analysis from both the methodological and the application side. In this paper a
price index providing a novel and effective solution to price indexes over
several periods and among several countries, that is in both a multi-period and
a multilateral framework, is devised. The reference basket of the devised index
is the union of the intersections of the baskets of all periods/countries in
pairs. As such, it provides a broader coverage than usual indexes. Index
closed-form expressions and updating formulas are provided and properties
investigated. Last, applications with real and simulated data provide evidence
of the performance of the proposed index. | An econometric analysis of the Italian cultural supply | 2019-09-30 22:58:41 | Consuelo Nava, Maria Grazia Zoia | http://arxiv.org/abs/1910.00073v3, http://arxiv.org/pdf/1910.00073v3 | econ.EM |
29,001 | em | We study the informational content of factor structures in discrete
triangular systems. Factor structures have been employed in a variety of
settings in cross sectional and panel data models, and in this paper we
formally quantify their identifying power in a bivariate system often employed
in the treatment effects literature. Our main findings are that imposing a
factor structure yields point identification of parameters of interest, such as
the coefficient associated with the endogenous regressor in the outcome
equation, under weaker assumptions than usually required in these models. In
particular, we show that a "non-standard" exclusion restriction that requires
an explanatory variable in the outcome equation to be excluded from the
treatment equation is no longer necessary for identification, even in cases
where all of the regressors from the outcome equation are discrete. We also
establish identification of the coefficient of the endogenous regressor in
models with more general factor structures, in situations where one has access
to at least two continuous measurements of the common factor. | Informational Content of Factor Structures in Simultaneous Binary Response Models | 2019-10-03 09:29:40 | Shakeeb Khan, Arnaud Maurel, Yichong Zhang | http://arxiv.org/abs/1910.01318v3, http://arxiv.org/pdf/1910.01318v3 | econ.EM |
29,002 | em | This paper analyzes identifiability properties of structural vector
autoregressive moving average (SVARMA) models driven by independent and
non-Gaussian shocks. It is well known that SVARMA models driven by Gaussian
errors are not identified without imposing further identifying restrictions on
the parameters. Even in reduced form and assuming stability and invertibility,
vector autoregressive moving average models are in general not identified
without requiring certain parameter matrices to be non-singular. Independence
and non-Gaussianity of the shocks is used to show that they are identified up
to permutations and scalings. In this way, typically imposed identifying
restrictions are made testable. Furthermore, we introduce a maximum-likelihood
estimator of the non-Gaussian SVARMA model which is consistent and
asymptotically normally distributed. | Identification and Estimation of SVARMA models with Independent and Non-Gaussian Inputs | 2019-10-09 19:06:46 | Bernd Funovits | http://arxiv.org/abs/1910.04087v1, http://arxiv.org/pdf/1910.04087v1 | econ.EM |
29,003 | em | We generalize well-known results on structural identifiability of vector
autoregressive models (VAR) to the case where the innovation covariance matrix
has reduced rank. Structural singular VAR models appear, for example, as
solutions of rational expectation models where the number of shocks is usually
smaller than the number of endogenous variables, and as an essential building
block in dynamic factor models. We show that order conditions for
identifiability are misleading in the singular case and provide a rank
condition for identifiability of the noise parameters. Since the Yule-Walker
equations may have multiple solutions, we analyze the effect of restrictions on
the system parameters on over- and underidentification in detail and provide
easily verifiable conditions. | Identifiability of Structural Singular Vector Autoregressive Models | 2019-10-09 19:18:57 | Bernd Funovits, Alexander Braumann | http://dx.doi.org/10.1111/jtsa.12576, http://arxiv.org/abs/1910.04096v2, http://arxiv.org/pdf/1910.04096v2 | econ.EM |
29,014 | em | This paper studies inter-trade durations in the NASDAQ limit order market and
finds that inter-trade durations at ultra-high frequency have two modes. One
mode is of the order of 10^{-4} seconds, and the other is of the
order of 1 second. This phenomenon and other empirical evidence suggest that
there are two regimes associated with the dynamics of inter-trade durations,
and the regime switches are driven by high-frequency traders
(HFTs) changing between providing and taking liquidity. To find how the two modes depend
on information in the limit order book (LOB), we propose a two-state
multifactor regime-switching (MF-RSD) model for inter-trade durations, in which
the transition probability matrices are time-varying and depend on some
lagged LOB factors. The MF-RSD model has good in-sample fit and
superior out-of-sample performance compared with some benchmark duration
models. Our findings of the effects of LOB factors on the inter-trade durations
help to understand more about the high-frequency market microstructure. | A multifactor regime-switching model for inter-trade durations in the limit order market | 2019-12-02 16:30:42 | Zhicheng Li, Haipeng Xing, Xinyun Chen | http://arxiv.org/abs/1912.00764v1, http://arxiv.org/pdf/1912.00764v1 | econ.EM |
29,004 | em | This paper proposes averaging estimation methods to improve the finite-sample
efficiency of the instrumental variables quantile regression (IVQR) estimation.
First, I apply Cheng, Liao, and Shi's (2019) averaging GMM framework to the IVQR
model. I propose using the usual quantile regression moments for averaging to
take advantage of cases when endogeneity is not too strong. I also propose
using two-stage least squares slope moments to take advantage of cases when
heterogeneity is not too strong. The empirical optimal weight formula of Cheng
et al. (2019) helps optimize the bias-variance tradeoff, ensuring uniformly
better (asymptotic) risk of the averaging estimator over the standard IVQR
estimator under certain conditions. My implementation involves many
computational considerations and builds on recent developments in the quantile
literature. Second, I propose a bootstrap method that directly averages among
IVQR, quantile regression, and two-stage least squares estimators. More
specifically, I find the optimal weights in the bootstrap world and then apply
the bootstrap-optimal weights to the original sample. The bootstrap method is
simpler to compute and generally performs better in simulations, but it lacks
the formal uniform dominance results of Cheng et al. (2019). Simulation results
demonstrate that in the multiple-regressors/instruments case, both the GMM
averaging and bootstrap estimators have uniformly smaller risk than the IVQR
estimator across data-generating processes (DGPs) with all kinds of
combinations of different endogeneity levels and heterogeneity levels. In DGPs
with a single endogenous regressor and instrument, where averaging estimation
is known to have least opportunity for improvement, the proposed averaging
estimators outperform the IVQR estimator in some cases but not others. | Averaging estimation for instrumental variables quantile regression | 2019-10-09 23:48:58 | Xin Liu | http://arxiv.org/abs/1910.04245v1, http://arxiv.org/pdf/1910.04245v1 | econ.EM |
29,005 | em | This paper proposes an imputation procedure that uses the factors estimated
from a tall block along with the re-rotated loadings estimated from a wide
block to impute missing values in a panel of data. Assuming that a strong
factor structure holds for the full panel of data and its sub-blocks, it is
shown that the common component can be consistently estimated at four different
rates of convergence without requiring regularization or iteration. An
asymptotic analysis of the estimation error is obtained. An application of our
analysis is estimation of counterfactuals when potential outcomes have a factor
structure. We study the estimation of average and individual treatment effects
on the treated and establish a normal distribution theory that can be useful
for hypothesis testing. | Matrix Completion, Counterfactuals, and Factor Analysis of Missing Data | 2019-10-15 15:18:35 | Jushan Bai, Serena Ng | http://arxiv.org/abs/1910.06677v5, http://arxiv.org/pdf/1910.06677v5 | econ.EM |
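A simplified sketch of factor-based imputation in this spirit: estimate factors by PCA on the fully observed ("tall") block of series, regress each incomplete series on those factors over its observed periods, and impute the missing entries from the fitted common component. The paper's re-rotation step and distribution theory are omitted, and `impute_with_factors` is a hypothetical name.

```python
# Simplified sketch of factor-based imputation (no re-rotation, no theory).
import numpy as np

def impute_with_factors(X, r):
    """X: T x N panel with NaNs; assumes some columns are fully observed."""
    tall = X[:, ~np.isnan(X).any(axis=0)]
    u, s, _ = np.linalg.svd(tall - tall.mean(axis=0), full_matrices=False)
    G = np.column_stack([np.ones(len(X)), u[:, :r] * s[:r]])   # intercept + factors
    X_imp = X.copy()
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if miss.any():
            lam, *_ = np.linalg.lstsq(G[~miss], X[~miss, j], rcond=None)
            X_imp[miss, j] = G[miss] @ lam       # fitted common component
    return X_imp

rng = np.random.default_rng(0)
F0, L0 = rng.normal(size=(100, 2)), rng.normal(size=(20, 2))
X = F0 @ L0.T + 0.1 * rng.normal(size=(100, 20))
X[60:, :5] = np.nan                            # block of missing observations
err = impute_with_factors(X, 2)[60:, :5] - (F0 @ L0.T)[60:, :5]
print(np.abs(err).mean())                      # small if factors are recovered
```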
29,006 | em | This paper develops a new standard-error estimator for linear panel data
models. The proposed estimator is robust to heteroskedasticity, serial
correlation, and cross-sectional correlation of unknown forms. The serial
correlation is controlled by the Newey-West method. To control for
cross-sectional correlations, we propose to use the thresholding method,
without assuming the clusters to be known. We establish the consistency of the
proposed estimator. Monte Carlo simulations show the method works well. An
empirical application is considered. | Standard Errors for Panel Data Models with Unknown Clusters | 2019-10-16 18:21:36 | Jushan Bai, Sung Hoon Choi, Yuan Liao | http://arxiv.org/abs/1910.07406v2, http://arxiv.org/pdf/1910.07406v2 | econ.EM |
29,007 | em | This article provides a selective review on the recent literature on
econometric models of network formation. The survey starts with a brief
exposition on basic concepts and tools for the statistical description of
networks. I then offer a review of dyadic models, focussing on statistical
models on pairs of nodes and describe several developments of interest to the
econometrics literature. The article also presents a discussion of non-dyadic
models where link formation might be influenced by the presence or absence of
additional links, which themselves are subject to similar influences. This is
related to the statistical literature on conditionally specified models and the
econometrics of game theoretical models. I close with a (non-exhaustive)
discussion of potential areas for further development. | Econometric Models of Network Formation | 2019-10-17 12:18:59 | Aureo de Paula | http://arxiv.org/abs/1910.07781v2, http://arxiv.org/pdf/1910.07781v2 | econ.EM |
29,008 | em | Long memory in the sense of slowly decaying autocorrelations is a stylized
fact in many time series from economics and finance. The fractionally
integrated process is the workhorse model for the analysis of these time
series. Nevertheless, there is mixed evidence in the literature concerning its
usefulness for forecasting and how forecasting based on it should be
implemented.
Employing pseudo-out-of-sample forecasting on inflation and realized
volatility time series and simulations we show that methods based on fractional
integration clearly are superior to alternative methods not accounting for long
memory, including autoregressions and exponential smoothing. Our proposal of
choosing a fixed fractional integration parameter of $d=0.5$ a priori yields
the best results overall, capturing long memory behavior, but overcoming the
deficiencies of methods using an estimated parameter.
Regarding the implementation of forecasting methods based on fractional
integration, we use simulations to compare local and global semiparametric and
parametric estimators of the long memory parameter from the Whittle family and
provide asymptotic theory backed up by simulations to compare different mean
estimators. Both of these analyses lead to new results, which are also of
interest outside the realm of forecasting. | Forecasting under Long Memory and Nonstationarity | 2019-10-18 02:57:34 | Uwe Hassler, Marc-Oliver Pohle | http://dx.doi.org/10.1093/jjfinec/nbab017, http://arxiv.org/abs/1910.08202v1, http://arxiv.org/pdf/1910.08202v1 | econ.EM |
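The proposal of fixing d = 0.5 can be illustrated with a minimal forecasting sketch: fractionally difference the series with the binomial weights of (1 - L)^d, forecast the filtered series by its sample mean (a simplification of the mean estimators the paper compares), and invert the filter for the one-step forecast.

```python
# Sketch: one-step forecasting under a fixed fractional integration d = 0.5.
import numpy as np

def frac_diff_weights(d, n):
    """Weights of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def forecast_one_step(y, d=0.5):
    n = len(y)
    w = frac_diff_weights(d, n + 1)
    x = np.array([w[:t + 1][::-1] @ y[:t + 1] for t in range(n)])  # (1-L)^d y
    mu = x.mean()                        # mean forecast of the filtered series
    return mu - w[1:] @ y[::-1]          # y_{T+1} = mu - sum_{k>=1} w_k y_{T+1-k}

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) * 0.1     # persistent toy series
print(forecast_one_step(y, d=0.5))
```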
29,009 | em | This paper develops the inferential theory for latent factor models estimated
from large dimensional panel data with missing observations. We propose an
easy-to-use all-purpose estimator for a latent factor model by applying
principal component analysis to an adjusted covariance matrix estimated from
partially observed panel data. We derive the asymptotic distribution for the
estimated factors, loadings and the imputed values under an approximate factor
model and general missing patterns. The key application is to estimate
counterfactual outcomes in causal inference from panel data. The unobserved
control group is modeled as missing values, which are inferred from the latent
factor model. The inferential theory for the imputed values allows us to test
for individual treatment effects at any time under general adoption patterns
where the units can be affected by unobserved factors. | Large Dimensional Latent Factor Modeling with Missing Observations and Applications to Causal Inference | 2019-10-18 08:38:04 | Ruoxuan Xiong, Markus Pelger | http://arxiv.org/abs/1910.08273v6, http://arxiv.org/pdf/1910.08273v6 | econ.EM |
29,017 | em | We discuss the issue of estimating large-scale vector autoregressive (VAR)
models with stochastic volatility in real-time situations where data are
sampled at different frequencies. In the case of a large VAR with stochastic
volatility, the mixed-frequency data warrant an additional step in the already
computationally challenging Markov Chain Monte Carlo algorithm used to sample
from the posterior distribution of the parameters. We suggest the use of a
factor stochastic volatility model to capture a time-varying error covariance
structure. Because the factor stochastic volatility model renders the equations
of the VAR conditionally independent, settling for this particular stochastic
volatility model comes with major computational benefits. First, we are able to
improve upon the mixed-frequency simulation smoothing step by leveraging a
univariate and adaptive filtering algorithm. Second, the regression parameters
can be sampled equation-by-equation in parallel. These computational features
of the model alleviate the computational burden and make it possible to move
the mixed-frequency VAR to the high-dimensional regime. We illustrate the model
by an application to US data using our mixed-frequency VAR with 20, 34 and 119
variables. | Estimating Large Mixed-Frequency Bayesian VAR Models | 2019-12-04 22:59:03 | Sebastian Ankargren, Paulina Jonéus | http://arxiv.org/abs/1912.02231v1, http://arxiv.org/pdf/1912.02231v1 | econ.EM |
29,018 | em | We introduce a synthetic control methodology to study policies with staggered
adoption. Many policies, such as the board gender quota, are replicated by
other policy setters at different time frames. Our method estimates the dynamic
average treatment effects on the treated using variation introduced by the
staggered adoption of policies. Our method gives asymptotically unbiased
estimators of many interesting quantities and delivers asymptotically valid
inference. By using the proposed method and national labor data in Europe, we
find evidence that quota regulation on board diversity leads to a decrease in
part-time employment, and an increase in full-time employment for female
professionals. | Synthetic Control Inference for Staggered Adoption: Estimating the Dynamic Effects of Board Gender Diversity Policies | 2019-12-13 07:29:19 | Jianfei Cao, Shirley Lu | http://arxiv.org/abs/1912.06320v1, http://arxiv.org/pdf/1912.06320v1 | econ.EM |
29,019 | em | Haavelmo (1944) proposed a probabilistic structure for econometric modeling,
aiming to make econometrics useful for decision making. His fundamental
contribution has become thoroughly embedded in subsequent econometric research,
yet it could not answer all the deep issues that the author raised. Notably,
Haavelmo struggled to formalize the implications for decision making of the
fact that models can at most approximate actuality. In the same period, Wald
(1939, 1945) initiated his own seminal development of statistical decision
theory. Haavelmo favorably cited Wald, but econometrics did not embrace
statistical decision theory. Instead, it focused on study of identification,
estimation, and statistical inference. This paper proposes statistical decision
theory as a framework for evaluation of the performance of models in decision
making. I particularly consider the common practice of as-if optimization:
specification of a model, point estimation of its parameters, and use of the
point estimate to make a decision that would be optimal if the estimate were
accurate. A central theme is that one should evaluate as-if optimization or any
other model-based decision rule by its performance across the state space,
listing all states of nature that one believes feasible, not across the model
space. I apply the theme to prediction and treatment choice. Statistical
decision theory is conceptually simple, but application is often challenging.
Advancement of computation is the primary task to continue building the
foundations sketched by Haavelmo and Wald. | Econometrics For Decision Making: Building Foundations Sketched By Haavelmo And Wald | 2019-12-17 21:47:30 | Charles F. Manski | http://arxiv.org/abs/1912.08726v4, http://arxiv.org/pdf/1912.08726v4 | econ.EM |
29,020 | em | We analyze different types of simulations that applied researchers may use to
assess their inference methods. We show that different types of simulations
vary in many dimensions when considered as inference assessments. Moreover, we
show that natural ways of running simulations may lead to misleading
conclusions, and we propose alternatives. We then provide evidence that even
some simple assessments can detect problems in many different settings.
Alternative assessments that potentially better approximate the true data
generating process may detect problems that simpler assessments would not
detect. However, they are not uniformly dominant in this dimension, and may
imply some costs. | Assessing Inference Methods | 2019-12-18 21:09:57 | Bruno Ferman | http://arxiv.org/abs/1912.08772v13, http://arxiv.org/pdf/1912.08772v13 | econ.EM |
29,021 | em | Learning about cause and effect is arguably the main goal in applied
econometrics. In practice, the validity of these causal inferences is
contingent on a number of critical assumptions regarding the type of data that
has been collected and the substantive knowledge that is available. For
instance, unobserved confounding factors threaten the internal validity of
estimates, data availability is often limited to non-random, selection-biased
samples, causal effects need to be learned from surrogate experiments with
imperfect compliance, and causal knowledge has to be extrapolated across
structurally heterogeneous populations. A powerful causal inference framework
is required to tackle these challenges, which plague most data analysis to
varying degrees. Building on the structural approach to causality introduced by
Haavelmo (1943) and the graph-theoretic framework proposed by Pearl (1995), the
artificial intelligence (AI) literature has developed a wide array of
techniques for causal learning that make it possible to leverage information from various
imperfect, heterogeneous, and biased data sources (Bareinboim and Pearl, 2016).
In this paper, we discuss recent advances in this literature that have the
potential to contribute to econometric methodology along three dimensions.
First, they provide a unified and comprehensive framework for causal inference,
in which the aforementioned problems can be addressed in full generality.
Second, due to their origin in AI, they come together with sound, efficient,
and complete algorithmic criteria for automatization of the corresponding
identification task. And third, because of the nonparametric description of
structural models that graph-theoretic approaches build on, they combine the
strengths of both structural econometrics as well as the potential outcomes
framework, and thus offer an effective middle ground between these two
literature streams. | Causal Inference and Data Fusion in Econometrics | 2019-12-19 13:24:04 | Paul Hünermund, Elias Bareinboim | http://arxiv.org/abs/1912.09104v4, http://arxiv.org/pdf/1912.09104v4 | econ.EM |
29,022 | em | We study the use of Temporal-Difference learning for estimating the
structural parameters in dynamic discrete choice models. Our algorithms are
based on the conditional choice probability approach but use functional
approximations to estimate various terms in the pseudo-likelihood function. We
suggest two approaches: The first - linear semi-gradient - provides
approximations to the recursive terms using basis functions. The second -
Approximate Value Iteration - builds a sequence of approximations to the
recursive terms by solving non-parametric estimation problems. Our approaches
are fast and naturally allow for continuous and/or high-dimensional state
spaces. Furthermore, they do not require specification of transition densities.
In dynamic games, they avoid integrating over other players' actions, further
heightening the computational advantage. Our proposals can be paired with
popular existing methods such as pseudo-maximum-likelihood, and we propose
locally robust corrections for the latter to achieve parametric rates of
convergence. Monte Carlo simulations confirm the properties of our algorithms
in practice. | Temporal-Difference estimation of dynamic discrete choice models | 2019-12-19 22:21:49 | Karun Adusumilli, Dita Eckardt | http://arxiv.org/abs/1912.09509v2, http://arxiv.org/pdf/1912.09509v2 | econ.EM |
29,034 | em | Researchers increasingly wish to estimate time-varying parameter (TVP)
regressions which involve a large number of explanatory variables. Including
prior information to mitigate over-parameterization concerns has led to many
using Bayesian methods. However, Bayesian Markov Chain Monte Carlo (MCMC)
methods can be very computationally demanding. In this paper, we develop
computationally efficient Bayesian methods for estimating TVP models using an
integrated rotated Gaussian approximation (IRGA). This exploits the fact that
whereas constant coefficients on regressors are often important, most of the
TVPs are typically unimportant. Since Gaussian distributions are invariant to
rotations, we can split the posterior into two parts: one involving the
constant coefficients, the other involving the TVPs. Approximate methods are
used on the latter and, conditional on these, the former are estimated with
precision using MCMC methods. In empirical exercises involving artificial data
and a large macroeconomic data set, we show the accuracy and computational
benefits of IRGA methods. | Bayesian Inference in High-Dimensional Time-varying Parameter Models using Integrated Rotated Gaussian Approximations | 2020-02-24 17:07:50 | Florian Huber, Gary Koop, Michael Pfarrhofer | http://arxiv.org/abs/2002.10274v1, http://arxiv.org/pdf/2002.10274v1 | econ.EM |
29,023 | em | Dynamic treatment regimes are treatment allocations tailored to heterogeneous
individuals. The optimal dynamic treatment regime is a regime that maximizes
counterfactual welfare. We introduce a framework in which we can partially
learn the optimal dynamic regime from observational data, relaxing the
sequential randomization assumption commonly employed in the literature but
instead using (binary) instrumental variables. We propose the notion of sharp
partial ordering of counterfactual welfares with respect to dynamic regimes and
establish mapping from data to partial ordering via a set of linear programs.
We then characterize the identified set of the optimal regime as the set of
maximal elements associated with the partial ordering. We relate the notion of
partial ordering with a more conventional notion of partial identification
using topological sorts. Practically, topological sorts can serve as a
policy benchmark for a policymaker. We apply our method to understand returns
to schooling and post-school training as a sequence of treatments by combining
data from multiple sources. The framework of this paper can be used beyond the
current context, e.g., in establishing rankings of multiple treatments or
policies across different counterfactual scenarios. | Optimal Dynamic Treatment Regimes and Partial Welfare Ordering | 2019-12-20 21:43:01 | Sukjin Han | http://arxiv.org/abs/1912.10014v4, http://arxiv.org/pdf/1912.10014v4 | econ.EM |
29,024 | em | This paper presents a novel deep learning-based travel behaviour choice
model.Our proposed Residual Logit (ResLogit) model formulation seamlessly
integrates a Deep Neural Network (DNN) architecture into a multinomial logit
model. Recently, DNN models such as the Multi-layer Perceptron (MLP) and the
Recurrent Neural Network (RNN) have shown remarkable success in modelling
complex and noisy behavioural data. However, econometric studies have argued
that machine learning techniques are a `black-box' and difficult to interpret
for use in the choice analysis.We develop a data-driven choice model that
extends the systematic utility function to incorporate non-linear cross-effects
using a series of residual layers and using skipped connections to handle model
identifiability in estimating a large number of parameters.The model structure
accounts for cross-effects and choice heterogeneity arising from substitution,
interactions with non-chosen alternatives and other effects in a non-linear
manner.We describe the formulation, model estimation, interpretability and
examine the relative performance and econometric implications of our proposed
model.We present an illustrative example of the model on a classic red/blue bus
choice scenario example. For a real-world application, we use a travel mode
choice dataset to analyze the model characteristics compared to traditional
neural networks and Logit formulations.Our findings show that our ResLogit
approach significantly outperforms MLP models while providing similar
interpretability as a Multinomial Logit model. | ResLogit: A residual neural network logit model for data-driven choice modelling | 2019-12-20 22:02:58 | Melvin Wong, Bilal Farooq | http://arxiv.org/abs/1912.10058v2, http://arxiv.org/pdf/1912.10058v2 | econ.EM |
29,025 | em | We propose a new sequential Efficient Pseudo-Likelihood (k-EPL) estimator for
dynamic discrete choice games of incomplete information. k-EPL considers the
joint behavior of multiple players simultaneously, as opposed to individual
responses to other agents' equilibrium play. This, in addition to reframing the
problem from conditional choice probability (CCP) space to value function
space, yields a computationally tractable, stable, and efficient estimator. We
show that each iteration in the k-EPL sequence is consistent and asymptotically
efficient, so the first-order asymptotic properties do not vary across
iterations. Furthermore, we show the sequence achieves higher-order equivalence
to the finite-sample maximum likelihood estimator with iteration and that the
sequence of estimators converges almost surely to the maximum likelihood
estimator at a nearly-superlinear rate when the data are generated by any
regular Markov perfect equilibrium, including equilibria that lead to
inconsistency of other sequential estimators. When utility is linear in
parameters, k-EPL iterations are computationally simple, only requiring that
the researcher solve linear systems of equations to generate pseudo-regressors
which are used in a static logit/probit regression. Monte Carlo simulations
demonstrate the theoretical results and show k-EPL's good performance in finite
samples in both small- and large-scale games, even when the game admits
spurious equilibria in addition to one that generated the data. We apply the
estimator to study the role of competition in the U.S. wholesale club industry. | Efficient and Convergent Sequential Pseudo-Likelihood Estimation of Dynamic Discrete Games | 2019-12-22 20:34:23 | Adam Dearing, Jason R. Blevins | http://arxiv.org/abs/1912.10488v5, http://arxiv.org/pdf/1912.10488v5 | econ.EM |
29,026 | em | We propose an optimal-transport-based matching method to nonparametrically
estimate linear models with independent latent variables. The method consists
in generating pseudo-observations from the latent variables, so that the
Euclidean distance between the model's predictions and their matched
counterparts in the data is minimized. We show that our nonparametric estimator
is consistent, and we document that it performs well in simulated data. We
apply this method to study the cyclicality of permanent and transitory income
shocks in the Panel Study of Income Dynamics. We find that the dispersion of
income shocks is approximately acyclical, whereas the skewness of permanent
shocks is procyclical. By comparison, we find that the dispersion and skewness
of shocks to hourly wages vary little with the business cycle. | Recovering Latent Variables by Matching | 2019-12-30 23:49:27 | Manuel Arellano, Stephane Bonhomme | http://arxiv.org/abs/1912.13081v1, http://arxiv.org/pdf/1912.13081v1 | econ.EM |
29,027 | em | Markov switching models are a popular family of models that introduces
time-variation in the parameters in the form of their state- or regime-specific
values. Importantly, this time-variation is governed by a discrete-valued
latent stochastic process with limited memory. More specifically, the current
value of the state indicator is determined only by the value of the state
indicator from the previous period, thus the Markov property, and the
transition matrix. The latter characterizes the properties of the Markov
process by determining with what probability each of the states can be visited
next period, given the state in the current period. This setup gives rise to the
two main advantages of Markov switching models: namely, the estimation of
the probability of state occurrences in each of the sample periods by using
filtering and smoothing methods and the estimation of the state-specific
parameters. These two features open the possibility for improved
interpretations of the parameters associated with specific regimes combined
with the corresponding regime probabilities, as well as for improved
forecasting performance based on persistent regimes and parameters
characterizing them. | Markov Switching | 2020-02-10 11:29:23 | Yong Song, Tomasz Woźniak | http://dx.doi.org/10.1093/acrefore/9780190625979.013.174, http://arxiv.org/abs/2002.03598v1, http://arxiv.org/pdf/2002.03598v1 | econ.EM |
29,028 | em | Given the extreme dependence of agriculture on weather conditions, this paper
analyses the effect of climatic variations on this economic sector, by
considering both a huge dataset and a flexible spatio-temporal model
specification. In particular, we study the response of N-fertilizer application
to abnormal weather conditions, while accounting for other relevant control
variables. The dataset consists of gridded data spanning over 21 years
(1993-2013), while the methodological strategy makes use of a spatial dynamic
panel data (SDPD) model that accounts for both space and time fixed effects,
besides dealing with both space and time dependence. Time-invariant short- and
long-term effects, as well as time-varying marginal effects, are also properly
defined, revealing interesting results on the impact of both GDP and weather
conditions on fertilizer utilization. The analysis considers four
macro-regions -- Europe, South America, South-East Asia and Africa -- to allow
for comparisons among different socio-economic societies. In addition to
finding both spatial (in the form of knowledge spillover effects) and temporal
dependences as well as a good support for the existence of an environmental
Kuznets curve for fertilizer application, the paper shows peculiar responses of
N-fertilization to deviations from normal weather conditions of moisture for
each selected region, calling for ad hoc policy interventions. | The Effect of Weather Conditions on Fertilizer Applications: A Spatial Dynamic Panel Data Analysis | 2020-02-10 19:31:15 | Anna Gloria Billè, Marco Rogna | http://arxiv.org/abs/2002.03922v2, http://arxiv.org/pdf/2002.03922v2 | econ.EM |
29,029 | em | This article deals with parameterisation, identifiability, and maximum
likelihood (ML) estimation of possibly non-invertible structural vector
autoregressive moving average (SVARMA) models driven by independent and
non-Gaussian shocks. In contrast to previous literature, the novel
representation of the MA polynomial matrix using the Wiener-Hopf factorisation
(WHF) focuses on the multivariate nature of the model, generates insights into
its structure, and uses this structure for devising optimisation algorithms. In
particular, it allows one to parameterise the location of determinantal zeros
inside and outside the unit circle, and it allows for MA zeros at zero, which
can be interpreted as informational delays. This is highly relevant for
data-driven evaluation of Dynamic Stochastic General Equilibrium (DSGE) models.
Typically imposed identifying restrictions on the shock transmission matrix as
well as on the determinantal root location are made testable. Furthermore, we
provide low level conditions for asymptotic normality of the ML estimator and
analytic expressions for the score and the information matrix. As application,
we estimate the Blanchard and Quah model and show that our method provides
further insights regarding non-invertibility using a standard macroeconometric
model. These and further analyses are implemented in a well documented
R-package. | Identifiability and Estimation of Possibly Non-Invertible SVARMA Models: A New Parametrisation | 2020-02-11 15:35:14 | Bernd Funovits | http://arxiv.org/abs/2002.04346v2, http://arxiv.org/pdf/2002.04346v2 | econ.EM |
29,030 | em | This paper analyses the number of free parameters and solutions of the
structural difference equation obtained from a linear multivariate rational
expectations model. First, it is shown that the number of free parameters
depends on the structure of the zeros at zero of a certain matrix polynomial of
the structural difference equation and the number of inputs of the rational
expectations model. Second, the implications of requiring that some components
of the endogenous variables be predetermined are analysed. Third, a condition
for existence and uniqueness of a causal stationary solution is given. | The Dimension of the Set of Causal Solutions of Linear Multivariate Rational Expectations Models | 2020-02-11 16:33:04 | Bernd Funovits | http://arxiv.org/abs/2002.04369v1, http://arxiv.org/pdf/2002.04369v1 | econ.EM |
29,031 | em | We construct long-term prediction intervals for time-aggregated future values
of univariate economic time series. We propose computational adjustments of the
existing methods to improve coverage probability under a small sample
constraint. A pseudo-out-of-sample evaluation shows that our methods perform at
least as well as selected alternative methods based on model-implied Bayesian
approaches and bootstrapping. Our most successful method yields prediction
intervals for eight macroeconomic indicators over a horizon spanning several
decades. | Long-term prediction intervals of economic time series | 2020-02-13 11:11:18 | Marek Chudy, Sayar Karmakar, Wei Biao Wu | http://arxiv.org/abs/2002.05384v1, http://arxiv.org/pdf/2002.05384v1 | econ.EM |
29,032 | em | Conjugate priors allow for fast inference in large dimensional vector
autoregressive (VAR) models but, at the same time, introduce the restriction
that each equation features the same set of explanatory variables. This paper
proposes a straightforward means of post-processing posterior estimates of a
conjugate Bayesian VAR to effectively perform equation-specific covariate
selection. Compared to existing techniques using shrinkage alone, our approach
combines shrinkage and sparsity in both the VAR coefficients and the error
variance-covariance matrices, greatly reducing estimation uncertainty in large
dimensions while maintaining computational tractability. We illustrate our
approach by means of two applications. The first application uses synthetic
data to investigate the properties of the model across different
data-generating processes, the second application analyzes the predictive gains
from sparsification in a forecasting exercise for US data. | Combining Shrinkage and Sparsity in Conjugate Vector Autoregressive Models | 2020-02-20 17:45:38 | Niko Hauzenberger, Florian Huber, Luca Onorante | http://arxiv.org/abs/2002.08760v2, http://arxiv.org/pdf/2002.08760v2 | econ.EM |
29,033 | em | This paper considers estimation and inference about tail features when the
observations beyond some threshold are censored. We first show that ignoring
such tail censoring could lead to substantial bias and size distortion, even if
the censoring probability is tiny. Second, we propose a new maximum likelihood
estimator (MLE) based on the Pareto tail approximation and derive its
asymptotic properties. Third, we provide a small sample modification to the MLE
by resorting to Extreme Value theory. The MLE with this modification delivers
excellent small sample performance, as shown by Monte Carlo simulations. We
illustrate its empirical relevance by estimating (i) the tail index and the
extreme quantiles of the US individual earnings with the Current Population
Survey dataset and (ii) the tail index of the distribution of macroeconomic
disasters and the coefficient of risk aversion using the dataset collected by
Barro and Ursúa (2008). Our new empirical findings are substantially
different from the existing literature. | Estimation and Inference about Tail Features with Tail Censored Data | 2020-02-23 23:43:24 | Yulong Wang, Zhijie Xiao | http://arxiv.org/abs/2002.09982v1, http://arxiv.org/pdf/2002.09982v1 | econ.EM |
29,290 | em | Discrete Choice Experiments (DCE) have been widely used in health economics,
environmental valuation, and other disciplines. However, there is a lack of
resources disclosing the whole procedure of carrying out a DCE. This document
aims to assist anyone wishing to use the power of DCEs to understand people's
behavior by providing a comprehensive guide to the procedure. This guide
contains all the code needed to design, implement, and analyze a DCE using only
free software. | A step-by-step guide to design, implement, and analyze a discrete choice experiment | 2020-09-23 19:13:10 | Daniel Pérez-Troncoso | http://arxiv.org/abs/2009.11235v1, http://arxiv.org/pdf/2009.11235v1 | econ.EM |
29,035 | em | This paper studies the identification, estimation, and hypothesis testing
problem in complete and incomplete economic models with testable assumptions.
Testable assumptions ($A$) give strong and interpretable empirical content to
the models but they also carry the possibility that some distribution of
observed outcomes may reject these assumptions. A natural way to avoid this is
to find a set of relaxed assumptions ($\tilde{A}$) that cannot be rejected by
any distribution of observed outcomes, and the identified set of the parameter of
interest is not changed when the original assumption is not rejected. The main
contribution of this paper is to characterize the properties of such a relaxed
assumption $\tilde{A}$ using a generalized definition of refutability and
confirmability. I also propose a general method to construct such $\tilde{A}$.
A general estimation and inference procedure is proposed and can be applied to
most incomplete economic models. I apply my methodology to the instrument
monotonicity assumption in Local Average Treatment Effect (LATE) estimation and
to the sector selection assumption in a binary outcome Roy model of employment
sector choice. In the LATE application, I use my general method to construct a
set of relaxed assumptions $\tilde{A}$ that can never be rejected, and the
identified set of LATE is the same as imposing $A$ when $A$ is not rejected.
LATE is point identified under my extension $\tilde{A}$. In the binary outcome
Roy model, I use my method of incomplete
models to relax Roy's sector selection assumption and characterize the
identified set of the binary potential outcome as a polyhedron. | Estimating Economic Models with Testable Assumptions: Theory and Applications | 2020-02-24 20:58:41 | Moyu Liao | http://arxiv.org/abs/2002.10415v3, http://arxiv.org/pdf/2002.10415v3 | econ.EM |
29,036 | em | We examine the impact of annual hours worked on annual earnings by
decomposing changes in the real annual earnings distribution into composition,
structural and hours effects. We do so via a nonseparable simultaneous model of
hours, wages and earnings. Using the Current Population Survey for the survey
years 1976--2019, we find that changes in the female distribution of annual
hours of work are important in explaining movements in inequality in female
annual earnings. This captures the substantial changes in their employment
behavior over this period. Movements in the male hours distribution only affect
the lower part of their earnings distribution and reflect the sensitivity of
these workers' annual hours of work to cyclical factors. | Hours Worked and the U.S. Distribution of Real Annual Earnings 1976-2019 | 2020-02-26 01:55:07 | Iván Fernández-Val, Franco Peracchi, Aico van Vuuren, Francis Vella | http://arxiv.org/abs/2002.11211v3, http://arxiv.org/pdf/2002.11211v3 | econ.EM |
29,037 | em | This paper combines causal mediation analysis with double machine learning to
control for observed confounders in a data-driven way under a
selection-on-observables assumption in a high-dimensional setting. We consider
the average indirect effect of a binary treatment operating through an
intermediate variable (or mediator) on the causal path between the treatment
and the outcome, as well as the unmediated direct effect. Estimation is based
on efficient score functions, which possess a multiple robustness property
w.r.t. misspecifications of the outcome, mediator, and treatment models. This
property is key for selecting these models by double machine learning, which is
combined with data splitting to prevent overfitting in the estimation of the
effects of interest. We demonstrate that the direct and indirect effect
estimators are asymptotically normal and root-n consistent under specific
regularity conditions and investigate the finite sample properties of the
suggested methods in a simulation study with lasso as the machine
learner. We also provide an empirical application to the U.S. National
Longitudinal Survey of Youth, assessing the indirect effect of health insurance
coverage on general health operating via routine checkups as mediator, as well
as the direct effect. We find a moderate short term effect of health insurance
coverage on general health which is, however, not mediated by routine checkups. | Causal mediation analysis with double machine learning | 2020-02-28 16:39:49 | Helmut Farbmacher, Martin Huber, Lukáš Lafférs, Henrika Langen, Martin Spindler | http://arxiv.org/abs/2002.12710v6, http://arxiv.org/pdf/2002.12710v6 | econ.EM |
29,038 | em | Alternative data sets are widely used for macroeconomic nowcasting together
with machine learning-based tools. The latter are often applied without a
complete picture of their theoretical nowcasting properties. Against this
background, this paper proposes a theoretically grounded nowcasting methodology
that allows researchers to incorporate alternative Google Search Data (GSD)
among the predictors and that combines targeted preselection, Ridge
regularization, and Generalized Cross Validation. Breaking with most existing
literature, which focuses on asymptotic in-sample theoretical properties, we
establish the theoretical out-of-sample properties of our methodology and
support them by Monte-Carlo simulations. We apply our methodology to GSD to
nowcast the GDP growth rates of several countries during various economic periods.
Our empirical findings support the idea that GSD tend to increase nowcasting
accuracy, even after controlling for official variables, but that the gain
differs between periods of recessions and of macroeconomic stability. | When are Google data useful to nowcast GDP? An approach via pre-selection and shrinkage | 2020-07-01 09:58:00 | Laurent Ferrara, Anna Simoni | http://dx.doi.org/10.1080/07350015.2022.2116025, http://arxiv.org/abs/2007.00273v3, http://arxiv.org/pdf/2007.00273v3 | econ.EM |
29,039 | em | In this paper, we estimate and leverage latent constant group structure to
generate point, set, and density forecasts for short dynamic panel data. We
implement a nonparametric Bayesian approach to simultaneously identify
coefficients and group membership in the random effects which are heterogeneous
across groups but fixed within a group. This method allows us to flexibly
incorporate subjective prior knowledge on the group structure that potentially
improves the predictive accuracy. In Monte Carlo experiments, we demonstrate
that our Bayesian grouped random effects (BGRE) estimators produce accurate
estimates and score predictive gains over standard panel data estimators. With
a data-driven group structure, the BGRE estimators exhibit clustering accuracy
comparable to that of the Kmeans algorithm and outperform a two-step Bayesian
grouped estimator whose group structure relies on Kmeans. In the empirical
analysis, we apply our method to forecast the investment rate across a broad
range of firms and illustrate that the estimated latent group structure
improves forecasts relative to standard panel data estimators. | Forecasting with Bayesian Grouped Random Effects in Panel Data | 2020-07-05 22:48:27 | Boyuan Zhang | http://arxiv.org/abs/2007.02435v8, http://arxiv.org/pdf/2007.02435v8 | econ.EM |
29,170 | em | We develop a Stata command xthenreg to implement the first-differenced GMM
estimation of the dynamic panel threshold model, which Seo and Shin (2016,
Journal of Econometrics 195: 169-186) have proposed. Furthermore, we derive the
asymptotic variance formula for a kink constrained GMM estimator of the dynamic
threshold model and include an estimation algorithm. We also propose a fast
bootstrap algorithm to implement the bootstrap for the linearity test. The use
of the command is illustrated through a Monte Carlo simulation and an economic
application. | Estimation of Dynamic Panel Threshold Model using Stata | 2019-02-27 06:19:33 | Myung Hwan Seo, Sueyoul Kim, Young-Joo Kim | http://dx.doi.org/10.1177/1536867X19874243, http://arxiv.org/abs/1902.10318v1, http://arxiv.org/pdf/1902.10318v1 | econ.EM |
29,040 | em | This paper presents a novel estimator of orthogonal GARCH models, which
combines (eigenvalue and -vector) targeting estimation with stepwise
(univariate) estimation. We denote this the spectral targeting estimator. This
two-step estimator is consistent under finite second order moments, while
asymptotic normality holds under finite fourth order moments. The estimator is
especially well suited for modelling larger portfolios: we compare the
empirical performance of the spectral targeting estimator to that of the quasi
maximum likelihood estimator for five portfolios of 25 assets. The spectral
targeting estimator dominates in terms of computational complexity, being up to
57 times faster in estimation, while both estimators produce similar
out-of-sample forecasts, indicating that the spectral targeting estimator is
well suited for high-dimensional empirical applications. | Spectral Targeting Estimation of $λ$-GARCH models | 2020-07-06 11:53:59 | Simon Hetland | http://arxiv.org/abs/2007.02588v1, http://arxiv.org/pdf/2007.02588v1 | econ.EM |
29,041 | em | We study the effects of counterfactual teacher-to-classroom assignments on
average student achievement in elementary and middle schools in the US. We use
the Measures of Effective Teaching (MET) experiment to semiparametrically
identify the average reallocation effects (AREs) of such assignments. Our
findings suggest that changes in within-district teacher assignments could have
appreciable effects on student achievement. Unlike policies which require
hiring additional teachers (e.g., class-size reduction measures), or those
aimed at changing the stock of teachers (e.g., VAM-guided teacher tenure
policies), alternative teacher-to-classroom assignments are resource neutral;
they raise student achievement through a more efficient deployment of existing
teachers. | Teacher-to-classroom assignment and student achievement | 2020-07-06 14:20:59 | Bryan S. Graham, Geert Ridder, Petra Thiemann, Gema Zamarro | http://arxiv.org/abs/2007.02653v2, http://arxiv.org/pdf/2007.02653v2 | econ.EM |
29,042 | em | This paper studies optimal decision rules, including estimators and tests,
for weakly identified GMM models. We derive the limit experiment for weakly
identified GMM, and propose a theoretically-motivated class of priors which
give rise to quasi-Bayes decision rules as a limiting case. Together with
results in the previous literature, this establishes desirable properties for
the quasi-Bayes approach regardless of model identification status, and we
recommend quasi-Bayes for settings where identification is a concern. We
further propose weighted average power-optimal identification-robust
frequentist tests and confidence sets, and prove a Bernstein-von Mises-type
result for the quasi-Bayes posterior under weak identification. | Optimal Decision Rules for Weak GMM | 2020-07-08 14:48:10 | Isaiah Andrews, Anna Mikusheva | http://arxiv.org/abs/2007.04050v7, http://arxiv.org/pdf/2007.04050v7 | econ.EM |
29,043 | em | In this paper, we test the contribution of foreign management on firms'
competitiveness. We use a novel dataset on the careers of 165,084 managers
employed by 13,106 companies in the United Kingdom in the period 2009-2017. We
find that domestic manufacturing firms become, on average, between 7% and 12%
more productive after hiring the first foreign managers, whereas foreign-owned
firms register no significant improvement. In particular, we find that previous
industry-specific experience is the primary driver of productivity gains in
domestic firms (15.6%), in a way that allows the latter to catch up with
foreign-owned firms. Managers from the European Union are highly valuable, as
they represent about half of the recruits in our data. Our identification
strategy combines matching techniques, difference-in-difference, and
pre-recruitment trends to challenge reverse causality. Results are robust to
placebo tests and to different estimators of Total Factor Productivity.
Finally, we argue that upcoming limits to the mobility of foreign talent
after the Brexit event can hamper the allocation of productive managerial
resources. | Talents from Abroad. Foreign Managers and Productivity in the United Kingdom | 2020-07-08 15:07:13 | Dimitrios Exadaktylos, Massimo Riccaboni, Armando Rungi | http://arxiv.org/abs/2007.04055v1, http://arxiv.org/pdf/2007.04055v1 | econ.EM |
29,044 | em | We study treatment-effect estimation using panel data. The treatment may be
non-binary, non-absorbing, and the outcome may be affected by treatment lags.
We make a parallel-trends assumption, and propose event-study estimators of the
effect of being exposed to a weakly higher treatment dose for $\ell$ periods.
We also propose normalized estimators that estimate a weighted average of the
effects of the current treatment and its lags. We also analyze commonly-used
two-way-fixed-effects regressions. Unlike our estimators, they can be biased in
the presence of heterogeneous treatment effects. A local-projection version of
those regressions is biased even with homogeneous effects. | Difference-in-Differences Estimators of Intertemporal Treatment Effects | 2020-07-08 20:01:22 | Clément de Chaisemartin, Xavier D'Haultfoeuille | http://arxiv.org/abs/2007.04267v12, http://arxiv.org/pdf/2007.04267v12 | econ.EM |
29,045 | em | This paper develops an empirical balancing approach for the estimation of
treatment effects under two-sided noncompliance using a binary conditionally
independent instrumental variable. The method weighs both treatment and outcome
information with inverse probabilities to produce exact finite sample balance
across instrument level groups. It is free of functional form assumptions on
the outcome or the treatment selection step. By tailoring the loss function for
the instrument propensity scores, the resulting treatment effect estimates
exhibit both low bias and a reduced variance in finite samples compared to
conventional inverse probability weighting methods. The estimator is
automatically weight normalized and has similar bias properties compared to
conventional two-stage least squares estimation under constant causal effects
for the compliers. We provide conditions for asymptotic normality and
semiparametric efficiency and demonstrate how to utilize additional information
about the treatment selection step for bias reduction in finite samples. The
method can be easily combined with regularization or other statistical learning
approaches to deal with a high-dimensional number of observed confounding
variables. Monte Carlo simulations suggest that the theoretical advantages
translate well to finite samples. The method is illustrated in an empirical
example. | Efficient Covariate Balancing for the Local Average Treatment Effect | 2020-07-08 21:04:46 | Phillip Heiler | http://arxiv.org/abs/2007.04346v1, http://arxiv.org/pdf/2007.04346v1 | econ.EM |
29,053 | em | This paper considers estimation and inference for heterogeneous
counterfactual effects with high-dimensional data. We propose a novel robust
score for debiased estimation of the unconditional quantile regression (Firpo,
Fortin, and Lemieux, 2009) as a measure of heterogeneous counterfactual
marginal effects. We propose a multiplier bootstrap inference and develop
asymptotic theories to guarantee size control in large samples. Simulation
studies support our theories. Applying the proposed method to Job Corps survey
data, we find that a policy which counterfactually extends the duration of
exposures to the Job Corps training program will be effective especially for
the targeted subpopulations of lower potential wage earners. | Unconditional Quantile Regression with High Dimensional Data | 2020-07-27 19:13:41 | Yuya Sasaki, Takuya Ura, Yichong Zhang | http://arxiv.org/abs/2007.13659v4, http://arxiv.org/pdf/2007.13659v4 | econ.EM |
29,046 | em | This paper analyzes a semiparametric model of network formation in the
presence of unobserved agent-specific heterogeneity. The objective is to
identify and estimate the preference parameters associated with homophily on
observed attributes when the distributions of the unobserved factors are not
parametrically specified. This paper offers two main contributions to the
literature on network formation. First, it establishes a new point
identification result for the vector of parameters that relies on the existence
of a special regressor. The identification proof is constructive and
characterizes a closed-form for the parameter of interest. Second, it
introduces a simple two-step semiparametric estimator for the vector of
parameters with a first-step kernel estimator. The estimator is computationally
tractable and can be applied to both dense and sparse networks. Moreover, I
show that the estimator is consistent and has a limiting normal distribution as
the number of individuals in the network increases. Monte Carlo experiments
demonstrate that the estimator performs well in finite samples and in networks
with different levels of sparsity. | A Semiparametric Network Formation Model with Unobserved Linear Heterogeneity | 2020-07-10 17:09:41 | Luis E. Candelaria | http://arxiv.org/abs/2007.05403v2, http://arxiv.org/pdf/2007.05403v2 | econ.EM |
29,047 | em | This paper characterises dynamic linkages arising from shocks with
heterogeneous degrees of persistence. Using frequency domain techniques, we
introduce measures that identify smoothly varying links of a transitory and
persistent nature. Our approach allows us to test for statistical differences
in such dynamic links. We document substantial differences in transitory and
persistent linkages among US financial industry volatilities, argue that they
track heterogeneously persistent sources of systemic risk, and thus may serve
as a useful tool for market participants. | Persistence in Financial Connectedness and Systemic Risk | 2020-07-14 18:45:33 | Jozef Barunik, Michael Ellington | http://arxiv.org/abs/2007.07842v4, http://arxiv.org/pdf/2007.07842v4 | econ.EM |
29,048 | em | This paper studies the latent index representation of the conditional LATE
model, making explicit the role of covariates in treatment selection. We find
that if the directions of the monotonicity condition are the same across all
values of the conditioning covariate, which is often assumed in the literature,
then the treatment choice equation has to satisfy a separability condition
between the instrument and the covariate. This global representation result
establishes testable restrictions imposed on the way covariates enter the
treatment choice equation. We later extend the representation theorem to
incorporate multiple ordered levels of treatment. | Global Representation of the Conditional LATE Model: A Separability Result | 2020-07-16 07:30:59 | Yu-Chang Chen, Haitian Xie | http://dx.doi.org/10.1111/obes.12476, http://arxiv.org/abs/2007.08106v3, http://arxiv.org/pdf/2007.08106v3 | econ.EM |
29,049 | em | I devise a novel approach to evaluate the effectiveness of fiscal policy in
the short run with multi-category treatment effects and inverse probability
weighting based on the potential outcome framework. This study's main
contribution to the literature is the proposed modified conditional
independence assumption to improve the evaluation of fiscal policy. Using this
approach, I analyze the effects of government spending on the US economy from
1992 to 2019. The empirical study indicates that a large fiscal contraction
generates a negative effect on the economic growth rate, while small and large
fiscal expansions generate a positive effect. However, these effects are not
significant in the traditional multiple regression approach. I conclude that
this new approach significantly improves the evaluation of fiscal policy. | Government spending and multi-category treatment effects:The modified conditional independence assumption | 2020-07-16 18:16:35 | Koiti Yano | http://arxiv.org/abs/2007.08396v3, http://arxiv.org/pdf/2007.08396v3 | econ.EM |
29,050 | em | We propose using a permutation test to detect discontinuities in an
underlying economic model at a known cutoff point. Relative to the existing
literature, we show that this test is well suited for event studies based on
time-series data. The test statistic measures the distance between the
empirical distribution functions of observed data in two local subsamples on
the two sides of the cutoff. Critical values are computed via a standard
permutation algorithm. Under a high-level condition that the observed data can
be coupled by a collection of conditionally independent variables, we establish
the asymptotic validity of the permutation test, allowing the sizes of the
local subsamples either to be fixed or to grow to infinity. In the latter case,
we also establish that the permutation test is consistent. We demonstrate that
our high-level condition can be verified in a broad range of problems in the
infill asymptotic time-series setting, which justifies using the permutation
test to detect jumps in economic variables such as volatility, trading
activity, and liquidity. These potential applications are illustrated in an
empirical case study for selected FOMC announcements during the ongoing
COVID-19 pandemic. | Permutation-based tests for discontinuities in event studies | 2020-07-20 05:12:52 | Federico A. Bugni, Jia Li, Qiyuan Li | http://arxiv.org/abs/2007.09837v4, http://arxiv.org/pdf/2007.09837v4 | econ.EM |
29,051 | em | Mean, median, and mode are three essential measures of the centrality of
probability distributions. In program evaluation, the average treatment effect
(mean) and the quantile treatment effect (median) have been intensively studied
in the past decades. The mode treatment effect, however, has long been
neglected in program evaluation. This paper fills the gap by discussing both
the estimation and inference of the mode treatment effect. I propose both
traditional kernel and machine learning methods to estimate the mode treatment
effect. I also derive the asymptotic properties of the proposed estimators and
find that both estimators are asymptotically normal but with a rate of
convergence slower than the regular $\sqrt{N}$ rate, which is different from
the rates of the classical average and quantile treatment effect estimators. | The Mode Treatment Effect | 2020-07-22 21:05:56 | Neng-Chieh Chang | http://arxiv.org/abs/2007.11606v1, http://arxiv.org/pdf/2007.11606v1 | econ.EM |
29,052 | em | The multinomial probit model is a popular tool for analyzing choice behaviour
as it allows for correlation between choice alternatives. Because current model
specifications employ a full covariance matrix of the latent utilities for the
choice alternatives, they are not scalable to a large number of choice
alternatives. This paper proposes a factor structure on the covariance matrix,
which makes the model scalable to large choice sets. The main challenge in
estimating this structure is that the model parameters require identifying
restrictions. We identify the parameters by a trace-restriction on the
covariance matrix, which is imposed through a reparametrization of the factor
structure. We specify interpretable prior distributions on the model parameters
and develop an MCMC sampler for parameter estimation. The proposed approach
significantly improves performance in large choice sets relative to existing
multinomial probit specifications. Applications to purchase data show the
economic importance of including a large number of choice alternatives in
consumer choice analysis. | Scalable Bayesian estimation in the multinomial probit model | 2020-07-27 02:38:14 | Ruben Loaiza-Maya, Didier Nibbering | http://arxiv.org/abs/2007.13247v2, http://arxiv.org/pdf/2007.13247v2 | econ.EM |
29,054 | em | Applied macroeconomists often compute confidence intervals for impulse
responses using local projections, i.e., direct linear regressions of future
outcomes on current covariates. This paper proves that local projection
inference robustly handles two issues that commonly arise in applications:
highly persistent data and the estimation of impulse responses at long
horizons. We consider local projections that control for lags of the variables
in the regression. We show that lag-augmented local projections with normal
critical values are asymptotically valid uniformly over (i) both stationary and
non-stationary data, and also over (ii) a wide range of response horizons.
Moreover, lag augmentation obviates the need to correct standard errors for
serial correlation in the regression residuals. Hence, local projection
inference is arguably both simpler than previously thought and more robust than
standard autoregressive inference, whose validity is known to depend
sensitively on the persistence of the data and on the length of the horizon. | Local Projection Inference is Simpler and More Robust Than You Think | 2020-07-28 01:03:23 | José Luis Montiel Olea, Mikkel Plagborg-Møller | http://dx.doi.org/10.3982/ECTA18756, http://arxiv.org/abs/2007.13888v3, http://arxiv.org/pdf/2007.13888v3 | econ.EM |
29,055 | em | Commonly used methods of production function and markup estimation assume
that a firm's output quantity can be observed as data, but typical datasets
contain only revenue, not output quantity. We examine the nonparametric
identification of production function and markup from revenue data when a firm
faces a general nonparametric demand function under imperfect competition. Under
standard assumptions, we provide the constructive nonparametric identification
of various firm-level objects: gross production function, total factor
productivity, price markups over marginal costs, output prices, output
quantities, a demand system, and a representative consumer's utility function. | Nonparametric Identification of Production Function, Total Factor Productivity, and Markup from Revenue Data | 2020-10-31 02:34:40 | Hiroyuki Kasahara, Yoichi Sugita | http://arxiv.org/abs/2011.00143v1, http://arxiv.org/pdf/2011.00143v1 | econ.EM |
29,056 | em | Macroeconomists increasingly use external sources of exogenous variation for
causal inference. However, unless such external instruments (proxies) capture
the underlying shock without measurement error, existing methods are silent on
the importance of that shock for macroeconomic fluctuations. We show that, in a
general moving average model with external instruments, variance decompositions
for the instrumented shock are interval-identified, with informative bounds.
Various additional restrictions guarantee point identification of both variance
and historical decompositions. Unlike SVAR analysis, our methods do not require
invertibility. Applied to U.S. data, they give a tight upper bound on the
importance of monetary shocks for inflation dynamics. | Instrumental Variable Identification of Dynamic Variance Decompositions | 2020-11-03 02:32:44 | Mikkel Plagborg-Møller, Christian K. Wolf | http://arxiv.org/abs/2011.01380v2, http://arxiv.org/pdf/2011.01380v2 | econ.EM |
29,057 | em | Forecasters often use common information and hence make common mistakes. We
propose a new approach, Factor Graphical Model (FGM), to forecast combinations
that separates idiosyncratic forecast errors from the common errors. FGM
exploits the factor structure of forecast errors and the sparsity of the
precision matrix of the idiosyncratic errors. We prove the consistency of
forecast combination weights and mean squared forecast error estimated using
FGM, supporting the results with extensive simulations. Empirical applications
to forecasting macroeconomic series show that forecast combination using FGM
outperforms combined forecasts using equal weights and graphical models without
incorporating factor structure of forecast errors. | Learning from Forecast Errors: A New Approach to Forecast Combinations | 2020-11-04 03:16:16 | Tae-Hwy Lee, Ekaterina Seregina | http://arxiv.org/abs/2011.02077v2, http://arxiv.org/pdf/2011.02077v2 | econ.EM |
29,058 | em | We use a decision-theoretic framework to study the problem of forecasting
discrete outcomes when the forecaster is unable to discriminate among a set of
plausible forecast distributions because of partial identification or concerns
about model misspecification or structural breaks. We derive "robust" forecasts
which minimize maximum risk or regret over the set of forecast distributions.
We show that for a large class of models including semiparametric panel data
models for dynamic discrete choice, the robust forecasts depend in a natural
way on a small number of convex optimization problems which can be simplified
using duality methods. Finally, we derive "efficient robust" forecasts to deal
with the problem of first having to estimate the set of forecast distributions
and develop a suitable asymptotic efficiency theory. Forecasts obtained by
replacing nuisance parameters that characterize the set of forecast
distributions with efficient first-stage estimators can be strictly dominated
by our efficient robust forecasts. | Robust Forecasting | 2020-11-06 04:17:22 | Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide | http://arxiv.org/abs/2011.03153v4, http://arxiv.org/pdf/2011.03153v4 | econ.EM |
29,059 | em | Following in the footsteps of the literature on empirical welfare
maximization, this paper contributes by stressing the policymaker's
perspective via a practical illustration of an optimal policy assignment
problem. More specifically, by focusing on the class of threshold-based
policies, we first set up the theoretical underpinnings of the policymaker
selection problem, to then offer a practical solution to this problem via an
empirical illustration using the popular LaLonde (1986) training program
dataset. The paper proposes an implementation protocol for the optimal solution
that is straightforward to apply and easy to program with standard statistical
software. | Optimal Policy Learning: From Theory to Practice | 2020-11-10 12:25:33 | Giovanni Cerulli | http://arxiv.org/abs/2011.04993v1, http://arxiv.org/pdf/2011.04993v1 | econ.EM |
29,060 | em | This paper studies identification of the effect of a mis-classified, binary,
endogenous regressor when a discrete-valued instrumental variable is available.
We begin by showing that the only existing point identification result for this
model is incorrect. We go on to derive the sharp identified set under mean
independence assumptions for the instrument and measurement error. The
resulting bounds are novel and informative, but fail to point identify the
effect of interest. This motivates us to consider alternative and slightly
stronger assumptions: we show that adding second and third moment independence
assumptions suffices to identify the model. | Identifying the effect of a mis-classified, binary, endogenous regressor | 2020-11-14 14:35:13 | Francis J. DiTraglia, Camilo Garcia-Jimeno | http://dx.doi.org/10.1016/j.jeconom.2019.01.007, http://arxiv.org/abs/2011.07272v1, http://arxiv.org/pdf/2011.07272v1 | econ.EM |
29,848 | em | This study considers the treatment choice problem when outcome variables are
binary. We focus on statistical treatment rules that plug in fitted values
based on nonparametric kernel regression and show that optimizing two
parameters enables the calculation of the maximum regret. Using this result, we
propose a novel bandwidth selection method based on the minimax regret
criterion. Finally, we perform a numerical analysis to compare the optimal
bandwidth choices for the binary and normally distributed outcomes. | Bandwidth Selection for Treatment Choice with Binary Outcomes | 2023-08-28 10:46:05 | Takuya Ishihara | http://arxiv.org/abs/2308.14375v2, http://arxiv.org/pdf/2308.14375v2 | econ.EM |
29,061 | em | To estimate causal effects from observational data, an applied researcher
must impose beliefs. The instrumental variables exclusion restriction, for
example, represents the belief that the instrument has no direct effect on the
outcome of interest. Yet beliefs about instrument validity do not exist in
isolation. Applied researchers often discuss the likely direction of selection
and the potential for measurement error in their articles but lack formal tools
for incorporating this information into their analyses. Failing to use all
relevant information not only leaves money on the table; it runs the risk of
leading to a contradiction in which one holds mutually incompatible beliefs
about the problem at hand. To address these issues, we first characterize the
joint restrictions relating instrument invalidity, treatment endogeneity, and
non-differential measurement error in a workhorse linear model, showing how
beliefs over these three dimensions are mutually constrained by each other and
the data. Using this information, we propose a Bayesian framework to help
researchers elicit their beliefs, incorporate them into estimation, and ensure
their mutual coherence. We conclude by illustrating our framework in a number
of examples drawn from the empirical microeconomics literature. | A Framework for Eliciting, Incorporating, and Disciplining Identification Beliefs in Linear Models | 2020-11-14 14:43:44 | Francis J. DiTraglia, Camilo Garcia-Jimeno | http://dx.doi.org/10.1080/07350015.2020.1753528, http://arxiv.org/abs/2011.07276v1, http://arxiv.org/pdf/2011.07276v1 | econ.EM |
29,062 | em | In this paper we propose a semi-parametric Bayesian Generalized Least Squares
estimator. In a generic setting where each error is a vector, the parametric
Generalized Least Squares estimator maintains the assumption that each error
vector has the same distributional parameters. In reality, however, errors are
likely to be heterogeneous regarding their distributions. To cope with such
heterogeneity, a Dirichlet process prior is introduced for the distributional
parameters of the errors, leading to the error distribution being a mixture of
a variable number of normal distributions. Our method lets the number of normal
components be data-driven. Semi-parametric Bayesian estimators for two specific
cases are then presented: the Seemingly Unrelated Regression for equation
systems and the Random Effects Model for panel data. We design a series of
simulation experiments to explore the performance of our estimators. The
results demonstrate that our estimators obtain smaller posterior standard
deviations and mean squared errors than the Bayesian estimators using a
parametric mixture of normal distributions or a normal distribution. We then
apply our semi-parametric Bayesian estimators for equation systems and panel
data models to empirical data. | A Semi-Parametric Bayesian Generalized Least Squares Estimator | 2020-11-20 10:50:15 | Ruochen Wu, Melvyn Weeks | http://arxiv.org/abs/2011.10252v2, http://arxiv.org/pdf/2011.10252v2 | econ.EM |
29,063 | em | This paper proposes a new class of M-estimators that double weight for the
twin problems of nonrandom treatment assignment and missing outcomes, both of
which are common issues in the treatment effects literature. The proposed class
is characterized by a `robustness' property, which makes it resilient to
parametric misspecification in either a conditional model of interest (for
example, mean or quantile function) or the two weighting functions. As leading
applications, the paper discusses estimation of two specific causal parameters;
average and quantile treatment effects (ATE, QTEs), which can be expressed as
functions of the doubly weighted estimator, under misspecification of the
framework's parametric components. With respect to the ATE, this paper shows
that the proposed estimator is doubly robust even in the presence of missing
outcomes. Finally, to demonstrate the estimator's viability in empirical
settings, it is applied to the reconstructed sample of Calonico and Smith (2017)
from the National Supported Work training program. | Doubly weighted M-estimation for nonrandom assignment and missing outcomes | 2020-11-23 18:48:39 | Akanksha Negi | http://arxiv.org/abs/2011.11485v1, http://arxiv.org/pdf/2011.11485v1 | econ.EM |
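The double-weighting idea can be sketched with inverse-probability weights for treatment assignment and for outcome observation. This simplified version drops the outcome model that gives the paper's class its robustness property; the logit specifications and the simulated design are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=(n, 2))

# Nonrandom treatment and nonrandom missingness, both driven by covariates
d = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))           # treatment
y = 1.0 * d + x @ np.array([0.5, -0.5]) + rng.normal(size=n)
s = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x[:, 1]))))   # s=1: y observed

# Estimate the two weighting functions (logits here; in the paper either the
# conditional model or these weights may be misspecified)
ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
po = LogisticRegression().fit(x, s).predict_proba(x)[:, 1]

# Double weights: treated, observed units are re-weighted by both the
# propensity score and the observation probability (unobserved outcomes
# receive zero weight through s)
w1 = s * d / (ps * po)
w0 = s * (1 - d) / ((1 - ps) * po)
ate = np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
print(f"doubly weighted ATE estimate: {ate:.3f} (true effect = 1.0)")
```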
29,064 | em | This paper develops a first-stage linear regression representation for the
instrumental variables (IV) quantile regression (QR) model. The quantile
first-stage is analogous to the least squares case, i.e., a linear projection
of the endogenous variables on the instruments and other exogenous covariates,
with the difference that the QR case is a weighted projection. The weights are
given by the conditional density function of the innovation term in the QR
structural model, conditional on the endogenous and exogenous covariates, and
the instruments as well, at a given quantile. We also show that the required
Jacobian identification conditions for IVQR models are embedded in the quantile
first-stage. We then suggest inference procedures to evaluate the adequacy of
instruments by evaluating their statistical significance using the first-stage
result. The test is developed in an over-identification context, since
consistent estimation of the weights for implementation of the first-stage
requires at least one valid instrument to be available. Monte Carlo experiments
provide numerical evidence that the proposed tests work as expected in terms of
empirical size and power in finite samples. An empirical application
illustrates that checking for the statistical significance of the instruments
at different quantiles is important. The proposed procedures may be especially
useful in QR, since the instruments may be relevant at some quantiles but not at
others. | A first-stage representation for instrumental variables quantile regression | 2021-02-02 01:26:54 | Javier Alejo, Antonio F. Galvao, Gabriel Montes-Rojas | http://arxiv.org/abs/2102.01212v4, http://arxiv.org/pdf/2102.01212v4 | econ.EM |
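A stylized version of the weighted first-stage: kernel weights stand in for the conditional density of the quantile-regression innovation, and the instrument's significance is checked with a sandwich t-statistic. The structural residuals are taken at an assumed coefficient value for simplicity, and the unconditional kernel is a crude stand-in for the paper's conditional density weights.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 2000, 0.5
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)
x = 1.0 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 * x + u                               # structural equation

# Residuals at an assumed structural coefficient (in practice these come
# from an IVQR estimator), centered at their tau-quantile
eps = y - 1.0 * x
eps = eps - np.quantile(eps, tau)

# Kernel weights approximating the innovation density at zero -- the
# weights of the quantile first-stage (unconditional, for simplicity)
h = 1.06 * eps.std() * n ** (-0.2)            # Silverman-type bandwidth
w = np.exp(-0.5 * (eps / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

# Weighted projection of x on (1, z), plus a sandwich t-statistic for the
# instrument, treating the weights as known
Z = np.column_stack([np.ones(n), z])
A = Z.T @ (w[:, None] * Z)
coef = np.linalg.solve(A, Z.T @ (w * x))
resid = x - Z @ coef
B = Z.T @ (((w * resid) ** 2)[:, None] * Z)
V = np.linalg.inv(A) @ B @ np.linalg.inv(A)
print(f"first-stage slope: {coef[1]:.3f}, "
      f"t-stat: {coef[1] / np.sqrt(V[1, 1]):.1f}")
```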
29,065 | em | How much do individuals contribute to team output? I propose an econometric
framework to quantify individual contributions when only the output of their
teams is observed. The identification strategy relies on following individuals
who work in different teams over time. I consider two production technologies.
For a production function that is additive in worker inputs, I propose a
regression estimator and show how to obtain unbiased estimates of variance
components that measure the contributions of heterogeneity and sorting. To
estimate nonlinear models with complementarity, I propose a mixture approach
under the assumption that individual types are discrete, and rely on a
mean-field variational approximation for estimation. To illustrate the methods,
I estimate the impact of economists on their research output, and the
contributions of inventors to the quality of their patents. | Teams: Heterogeneity, Sorting, and Complementarity | 2021-02-03 02:52:12 | Stephane Bonhomme | http://arxiv.org/abs/2102.01802v1, http://arxiv.org/pdf/2102.01802v1 | econ.EM |
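For the additive technology, the regression estimator reduces to least squares on a team-membership incidence matrix, as in the toy sketch below; the design is illustrative, and the paper's bias-corrected variance components and nonlinear mixture estimator are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
J, n_obs, team_size = 50, 300, 3       # workers, team observations, team size

alpha = rng.normal(0.0, 1.0, size=J)   # latent individual contributions

# Each observation is a freshly drawn team; output is additive in member
# inputs. Identification requires workers to mix across different teams.
M = np.zeros((n_obs, J))               # team-membership incidence matrix
for t in range(n_obs):
    M[t, rng.choice(J, size=team_size, replace=False)] = 1.0
y = M @ alpha + rng.normal(0.0, 0.5, size=n_obs)

# Least-squares recovery of each worker's contribution from team output
alpha_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
print("corr(true, estimated):", np.corrcoef(alpha, alpha_hat)[0, 1].round(3))
```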
29,160 | em | This paper studies joint inference on the conditional volatility parameters
and the innovation moments by means of the bootstrap to test for the existence
of moments for GARCH(p,q) processes. We propose a residual bootstrap to mimic the
joint distribution of the quasi-maximum likelihood estimators and the empirical
moments of the residuals and also prove its validity. A bootstrap-based test
for the existence of moments is proposed, which provides asymptotically
correctly-sized tests without losing its consistency property. It is simple to
implement and extends to other GARCH-type settings. A simulation study
demonstrates the test's size and power properties in finite samples and an
empirical application illustrates the testing approach. | A Bootstrap Test for the Existence of Moments for GARCH Processes | 2019-02-05 20:32:20 | Alexander Heinemann | http://arxiv.org/abs/1902.01808v3, http://arxiv.org/pdf/1902.01808v3 | econ.EM |
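A compact GARCH(1,1) version of such a procedure might look as follows: Gaussian QMLE, a plug-in statistic for the fourth-moment condition E[(alpha z^2 + beta)^2] < 1, and a residual bootstrap that regenerates the series from resampled standardized residuals and re-estimates. The choice of statistic and all tuning values are illustrative, and the replication count is kept small for speed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def sigma2(y, omega, a, b):
    """GARCH(1,1) conditional-variance recursion."""
    s2 = np.empty_like(y)
    s2[0] = y.var()
    for t in range(1, len(y)):
        s2[t] = omega + a * y[t - 1] ** 2 + b * s2[t - 1]
    return s2

def qmle(y):
    """Gaussian quasi-maximum likelihood estimate of (omega, alpha, beta)."""
    def nll(p):
        if np.any(np.asarray(p) <= 0.0) or p[1] + p[2] >= 1.0:
            return 1e10                  # penalize invalid parameter values
        s2 = sigma2(y, *p)
        return 0.5 * np.sum(np.log(s2) + y ** 2 / s2)
    return minimize(nll, [0.05, 0.10, 0.80], method="Nelder-Mead").x

# Simulate a GARCH(1,1) whose fourth moment exists
n, burn = 1000, 200
omega0, a0, b0 = 0.05, 0.10, 0.80
z = rng.standard_normal(n + burn)
y_all, s2 = np.empty(n + burn), omega0 / (1.0 - a0 - b0)
for t in range(n + burn):
    y_all[t] = np.sqrt(s2) * z[t]
    s2 = omega0 + a0 * y_all[t] ** 2 + b0 * s2
y = y_all[burn:]

# Plug-in statistic: the fourth moment exists iff E[(a z^2 + b)^2] < 1
om, a, b = qmle(y)
zhat = y / np.sqrt(sigma2(y, om, a, b))
stat = np.mean((a * zhat ** 2 + b) ** 2)

# Residual bootstrap: rebuild the series from resampled standardized
# residuals, re-estimate, and recompute the statistic (B kept small here)
boot = []
for _ in range(49):
    zb = rng.choice(zhat, size=n, replace=True)
    yb, s2b = np.empty(n), om / (1.0 - a - b)
    for t in range(n):
        yb[t] = np.sqrt(s2b) * zb[t]
        s2b = om + a * yb[t] ** 2 + b * s2b
    omb, ab, bb = qmle(yb)
    zres = yb / np.sqrt(sigma2(yb, omb, ab, bb))
    boot.append(np.mean((ab * zres ** 2 + bb) ** 2))

print(f"statistic: {stat:.3f} (moment exists if the population value < 1)")
print(f"bootstrap 95% interval: [{np.quantile(boot, 0.025):.3f}, "
      f"{np.quantile(boot, 0.975):.3f}]")
```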
29,066 | em | We study discrete panel data methods where unobserved heterogeneity is
revealed in a first step, in environments where population heterogeneity is not
discrete. We focus on two-step grouped fixed-effects (GFE) estimators, where
individuals are first classified into groups using kmeans clustering, and the
model is then estimated allowing for group-specific heterogeneity. Our
framework relies on two key properties: heterogeneity is a function - possibly
nonlinear and time-varying - of a low-dimensional continuous latent type, and
informative moments are available for classification. We illustrate the method
in a model of wages and labor market participation, and in a probit model with
time-varying heterogeneity. We derive asymptotic expansions of two-step GFE
estimators as the number of groups grows with the two dimensions of the panel.
We propose a data-driven rule for the number of groups, and discuss bias
reduction and inference. | Discretizing Unobserved Heterogeneity | 2021-02-03 19:03:19 | Stéphane Bonhomme Thibaut Lamadon Elena Manresa | http://arxiv.org/abs/2102.02124v1, http://arxiv.org/pdf/2102.02124v1 | econ.EM |
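The two-step GFE recipe is easy to sketch: classify individuals by kmeans on an informative moment, then re-estimate with group-specific effects. The linear design, the choice of moment, and the fixed number of groups are illustrative; the paper's data-driven rule for the number of groups and its bias corrections are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
N, T, G = 500, 10, 5

# A continuous latent type drives the individual effect (not discrete)
latent = rng.uniform(-1.0, 1.0, size=N)
alpha = 2.0 * latent
x = rng.normal(size=(N, T))
y = alpha[:, None] + 1.0 * x + rng.normal(0.0, 0.5, size=(N, T))

# Step 1: classify individuals by kmeans on an informative moment
# (the individual mean of y, which is informative about the type)
groups = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(
    y.mean(axis=1, keepdims=True))

# Step 2: re-estimate the model with group-specific intercepts
D = np.zeros((N * T, G))
D[np.arange(N * T), np.repeat(groups, T)] = 1.0
X = np.column_stack([x.ravel(), D])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(f"slope estimate: {beta[0]:.3f} (true = 1.0)")
```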