
nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒11‒12
sixteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Consistent Estimation of Models Defined by Conditional Moment Restrictions under Minimal Identifying Conditions By Xuexin Wang
  2. Likelihood ratio inference for missing data models By Karun Adusumilli; Taisuke Otsu
  3. Choosing Prior Hyperparameters: With Applications To Time-Varying Parameter Models By Wang, Mu-Chun
  4. Variational Inference for high dimensional structured factor copulas By Galeano San Miguel, Pedro; Ausín Olivera, María Concepción; Nguyen, Hoang
  5. Nuclear Norm Regularized Estimation of Panel Regression Models By Hyungsik Roger Moon; Martin Weidner
  6. A Consistent LM Type Specification Test for Semiparametric Models By Ivan Korolev
  7. Identification and Estimation of Group-Level Partial Effects By Kenichi Nagasawa
  8. Skewness-Adjusted Bootstrap Confidence Intervals and Confidence Bands for Impulse Response Functions By Grabowski, Daniel; Staszewska-Bystrova, Anna
  9. Hybrid choice models vs. endogeneity of indicator variables: a Monte Carlo investigation By Wiktor Budziński; Mikołaj Czajkowski
  10. Bounds on Average and Quantile Treatment Effects on Duration Outcomes under Censoring, Selection, and Noncompliance By Blanco, German; Chen, Xuan; Flores, Carlos A.; Flores-Lagunes, Alfonso
  11. Doubly Robust GMM Inference and Differentiated Products Demand Models By Stéphane Auray; Nicolas Lepage-Saucier; Purevdorj Tuvaandorj
  12. Distributional Impact Analysis: Toolkit and Illustrations of Impacts beyond the Average Treatment Effect By Bedoya, Guadalupe; Bitarello, Luca; Davis, Jonathan; Mittag, Nikolas
  13. Semiparametrically efficient estimation of the average linear regression function By Bryan S. Graham; Cristine Campos de Xavier Pinto
  14. Quantifying Family, School, and Location Effects in the Presence of Complementarities and Sorting By Mohit Agrawal; Joseph G. Altonji; Richard K. Mansfield
  15. Wavelet analysis for temporal disaggregation By Chiara Perricone
  16. Multivariate Analysis Advancements and Applications by Subspace-based Techniques By Xu Huang

  1. By: Xuexin Wang
    Abstract: For econometric models defined by conditional moment restrictions, it is well known that popular estimation methods such as the generalized method of moments and generalized empirical likelihood, when based on an arbitrary finite number of unconditional moment restrictions implied by the conditional ones, can yield inconsistent estimates. To guarantee consistency, additional identifying assumptions on these unconditional moment restrictions must be imposed. This paper introduces a simple consistent estimation procedure that requires no identifying conditions on the implied unconditional moment restrictions. The procedure is based on a weighted L2 norm with a particular weighting function, employing a full continuum of unconditional moment restrictions. It is easy to implement for any dimension of the conditioning variables, and no user-chosen number is required. Furthermore, statistical inference is straightforward since the proposed estimator is asymptotically normal. Monte Carlo simulations demonstrate that the new estimator has excellent finite-sample properties and outperforms its competitors in the cases we consider.
    Keywords: Characteristic function; A continuum of moments; Identification; Nonlinear Models; Nonintegrable weighting function
    JEL: C12 C22
    Date: 2018–10–29
    URL: http://d.repec.org/n?u=RePEc:wyi:wpaper:002382&r=ecm
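    Illustration: a minimal Python sketch of estimation with a full continuum of moments, where the conditional restriction E[u(theta)|x] = 0 is turned into E[u(theta)exp(itx)] = 0 for all t via the characteristic function. The nonlinear model, the Gaussian weight (a stand-in for the paper's specific nonintegrable weighting function), and the finite grid are illustrative assumptions only.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(0)
      n = 500
      x = rng.normal(size=n)
      y = np.exp(0.7 * x) + rng.normal(size=n)       # E[y|x] = exp(theta*x), theta = 0.7

      # Continuum objective Q(theta) = integral |m_n(t; theta)|^2 w(t) dt with
      # m_n(t; theta) = (1/n) sum_i u_i(theta) exp(i t x_i); the Gaussian w(t)
      # and the grid approximation are stand-ins for the paper's construction.
      t = np.linspace(-5.0, 5.0, 201)
      w = np.exp(-t**2 / 2.0)

      def Q(theta):
          u = y - np.exp(theta * x)                  # conditional-moment residuals
          m = np.exp(1j * np.outer(t, x)) @ u / n    # moment function on the t-grid
          return np.sum(np.abs(m)**2 * w) * (t[1] - t[0])

      theta_hat = minimize_scalar(Q, bounds=(0.0, 2.0), method="bounded").x
      print("continuum-of-moments estimate:", theta_hat)   # close to 0.7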
  2. By: Karun Adusumilli; Taisuke Otsu
    Abstract: Missing or incomplete outcome data is a ubiquitous problem in the biomedical and social sciences. Under the missing-at-random setup, inverse probability weighting is widely applied to estimate and make inference on population objects of interest, but its performance is known to be poor at practical sample sizes. To overcome this problem, several alternative weighting methods have recently been proposed that directly balance the distributional characteristics of the covariates. These existing balancing methods are useful for obtaining point estimates of the population objects. The purpose of this paper is to develop a new weighting scheme, based on empirical likelihood, that is useful for conducting interval estimation or hypothesis testing. In particular, we propose re-weighting the covariate balancing weights so that the resulting objective function admits an asymptotic chi-square calibration. Our re-weighting method extends naturally to inference on treatment effects, data combination models, and high-dimensional covariates. Simulations and empirical examples illustrate the usefulness of the proposed method.
    Keywords: Missing data, Empirical balancing, Treatment effect, Nonparametric likelihood
    JEL: C12 C14
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:599&r=ecm
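    Illustration: a small Python sketch of empirical-likelihood calibration weights for a missing-outcome mean, balancing a covariate to its full-sample mean; the EL ratio in the last line is the kind of statistic that admits a chi-square calibration. The data-generating process is an illustrative assumption, and this generic EL step is not the paper's exact re-weighting scheme.

      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(1)
      n = 1000
      x = rng.normal(size=n)
      y = 1.0 + x + rng.normal(size=n)                 # E[y] = 1
      d = rng.uniform(size=n) < 1 / (1 + np.exp(-x))   # missing at random given x

      # Empirical likelihood on respondents: maximize sum log p_i subject to
      # sum p_i = 1 and sum p_i g_i = 0, where g_i balances x to its full-
      # sample mean. Solution: p_i = 1 / (m (1 + lam g_i)), with lam solving
      # sum g_i / (1 + lam g_i) = 0.
      g = x[d] - x.mean()
      m = len(g)
      lam = brentq(lambda l: np.sum(g / (1 + l * g)),
                   -1 / g.max() + 1e-6, -1 / g.min() - 1e-6)
      p = 1.0 / (m * (1.0 + lam * g))                  # EL weights, sum to 1
      print("EL-weighted mean:", np.sum(p * y[d]), " naive mean:", y[d].mean())
      print("EL ratio:", 2 * np.sum(np.log1p(lam * g)))   # asympt. chi-square(1)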
  3. By: Wang, Mu-Chun
    Abstract: Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this paper we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature.
    Keywords: Bayesian inference, Bayesian VAR, Time variation
    JEL: C11
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc18:181621&r=ecm
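    Illustration: the hierarchical idea in one Gibbs step, sketched in Python for a scalar random-walk parameter. With an inverse-gamma prior, the hyperparameter governing the amount of time variation has a conjugate full conditional given the parameter path; inside a full sampler this draw would alternate with a simulation-smoother draw of the path. The toy model and prior values are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      T, lam_true = 200, 0.05                  # variance of parameter innovations
      theta = np.cumsum(np.sqrt(lam_true) * rng.normal(size=T))  # random-walk path

      # With lam ~ InvGamma(a0, b0) and theta_t - theta_{t-1} ~ N(0, lam):
      # lam | theta ~ InvGamma(a0 + (T-1)/2, b0 + 0.5 * sum (d theta_t)^2)
      a0, b0 = 2.0, 0.01                       # weakly informative prior (assumed)
      dth = np.diff(theta)
      a_post = a0 + len(dth) / 2.0
      b_post = b0 + 0.5 * np.sum(dth**2)
      draws = b_post / rng.gamma(a_post, 1.0, size=5000)  # InvGamma via 1/Gamma
      print("posterior mean of lam:", draws.mean(), " truth:", lam_true)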
  4. By: Galeano San Miguel, Pedro; Ausín Olivera, María Concepción; Nguyen, Hoang
    Abstract: Factor copula models have recently been proposed for describing the joint distribution of a large number of variables in terms of a few common latent factors. In this paper, we employ a Bayesian procedure to make fast inferences for multi-factor and structured factor copulas. To deal with the high-dimensional structure, we apply a variational inference (VI) algorithm to estimate different specifications of factor copula models. Compared to the Markov chain Monte Carlo (MCMC) approach, the variational approximation is much faster and can handle sizeable problems in a few seconds. Another issue with factor copula models is that the bivariate copula functions connecting the variables are unknown in high dimensions. We derive an automatic procedure to recover the hidden dependence structure: taking advantage of the posterior modes of the latent variables, we select the bivariate copula functions by minimizing the Bayesian information criterion (BIC). Simulation studies in different contexts show that this copula selection procedure can recover the true data-generating copula model very accurately. We illustrate the proposed procedure with two high-dimensional real data sets.
    Keywords: Variational inference; Model selection; Factor copula
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:27652&r=ecm
  5. By: Hyungsik Roger Moon; Martin Weidner
    Abstract: In this paper we investigate panel regression models with interactive fixed effects. We propose two new estimation methods that are based on minimizing convex objective functions. The first method minimizes the sum of squared residuals with a nuclear (trace) norm regularization. The second method minimizes the nuclear norm of the residuals. We establish the consistency of the two resulting estimators. These estimators have an important computational advantage over the existing least squares (LS) estimator: they are defined as minimizers of convex objective functions. In addition, the nuclear norm penalization helps to resolve a potential identification problem for interactive fixed effect models, in particular when the regressors are low-rank and the number of factors is unknown. We also show how to construct estimators that are asymptotically equivalent to the LS estimator in Bai (2009) and Moon and Weidner (2017) by using our nuclear norm regularized or minimized estimators as initial values for a finite number of LS minimizing iteration steps. This iteration avoids any non-convex minimization, whereas the original LS estimation problem is generally non-convex and can have multiple local minima.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.10987&r=ecm
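    Illustration: a numpy sketch of the first estimator, alternating an OLS step for the regression coefficient with singular value thresholding (the proximal operator of the nuclear norm) for the interactive fixed effects; both subproblems are convex. The single-regressor design, rank-2 factor structure, and penalty level are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      N, T = 100, 100
      gamma = rng.normal(size=(N, 2)) @ rng.normal(size=(2, T))  # rank-2 effects
      X = rng.normal(size=(N, T)) + gamma      # regressor correlated with factors
      Y = 1.0 * X + gamma + rng.normal(size=(N, T))              # true beta = 1

      def svt(A, psi):
          # singular value thresholding: argmin_G 0.5||A-G||^2 + psi ||G||_*
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return (U * np.maximum(s - psi, 0.0)) @ Vt

      psi = np.sqrt(N) + np.sqrt(T)            # illustrative penalty level
      beta, G = 0.0, np.zeros((N, T))
      for _ in range(200):                     # block descent on a convex objective
          beta = np.sum(X * (Y - G)) / np.sum(X * X)
          G = svt(Y - beta * X, psi)
      print("naive OLS (ignores gamma):", np.sum(X * Y) / np.sum(X * X))
      print("nuclear-norm regularized:", beta)  # near 1 up to shrinkage bias; the
                                                # paper removes the bias with a few
                                                # LS iteration steps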
  6. By: Ivan Korolev
    Abstract: This paper develops a consistent Lagrange Multiplier (LM) type specification test for semiparametric conditional mean models against nonparametric alternatives. Consistency is achieved by turning a conditional moment restriction into a growing number of unconditional moment restrictions using series methods. The test is simple to implement because it requires estimating only the restricted semiparametric model and because the asymptotic distribution of the test statistic is pivotal. The use of series methods in the estimation of the null semiparametric model allows me to account for the estimation variance and obtain refined asymptotic results. The test demonstrates good size and power properties in simulations. I apply the test to one of the semiparametric gasoline demand specifications from Yatchew and No (2001) and find no evidence against it.
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.07620&r=ecm
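    Illustration: the LM logic in a Python sketch, with the null specialized to a simple linear model for brevity (the paper's null is a general semiparametric model, and its number of series terms grows with the sample size). Restricted residuals are regressed on extra series terms and n*R^2 is referred to a chi-square distribution.

      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(4)
      n = 500
      x = rng.uniform(-2, 2, size=n)
      y = x + 0.5 * x**2 + rng.normal(size=n)   # true regression is nonlinear

      Z = np.column_stack([np.ones(n), x])      # restricted (null) regressors
      u = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # restricted residuals

      K = 4                                     # fixed here; growing in the paper
      P = np.column_stack([Z] + [x**(j + 2) for j in range(K)])  # extra series terms
      e = u - P @ np.linalg.lstsq(P, u, rcond=None)[0]
      LM = n * (1.0 - (e @ e) / (u @ u))        # n * R^2 of the auxiliary regression
      print("LM =", LM, " p-value =", 1 - chi2.cdf(LM, K))   # rejects linearity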
  7. By: Kenichi Nagasawa
    Abstract: This paper presents a new identification result for causal effects of group-level variables when agents select into groups. The model allows for group selection to be based on individual unobserved heterogeneity. This feature leads to correlation between group-level covariates and unobserved individual heterogeneity. Whereas many of the existing identification strategies rely on instrumental variables for group selection, I introduce alternative identifying conditions which involve individual-level covariates that "shift" the distribution of unobserved heterogeneity. I use these conditions to construct a valid control function. The key identifying requirements on the observable "shifter" variables are likely to hold in settings where a rich array of individual characteristics are observed. The identification strategy is constructive and leads to a semiparametric, regression-based estimator of group-level causal effects, which I show to be consistent and asymptotically normal. A simulation study indicates good finite-sample properties of this estimator. I use my results to re-analyze the effects of school/neighborhood characteristics on student outcomes, following the work of Altonji and Mansfield (2018).
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.00667&r=ecm
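    Illustration: the textbook two-step control-function recipe in Python, included only to fix ideas; the paper's actual construction uses individual-level "shifter" covariates rather than this classical exclusion-restriction setup, and operates at the group level. The DGP and coefficient values are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 2000
      a = rng.normal(size=n)                       # unobserved heterogeneity
      z = rng.normal(size=n)                       # observed shifter
      x = 0.8 * z + 0.6 * a + rng.normal(size=n)   # selection depends on a
      y = 1.0 * x + a + rng.normal(size=n)         # true causal effect of x is 1

      Z1 = np.column_stack([np.ones(n), z])
      v = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # first-stage residual

      X_naive = np.column_stack([np.ones(n), x])
      X_cf = np.column_stack([np.ones(n), x, v])           # residual as control
      print("naive OLS:", np.linalg.lstsq(X_naive, y, rcond=None)[0][1])  # biased up
      print("control function:", np.linalg.lstsq(X_cf, y, rcond=None)[0][1])  # ~1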
  8. By: Grabowski, Daniel; Staszewska-Bystrova, Anna
    Abstract: This article investigates the construction of skewness-adjusted confidence intervals and joint confidence bands for impulse response functions from vector autoregressive models. Three different implementations of the skewness adjustment are investigated. The methods are based on a bootstrap algorithm that adjusts the mean and skewness of the bootstrap distribution of the autoregressive coefficients before the impulse response functions are computed. In extensive Monte Carlo simulations, the methods are shown to improve coverage accuracy in small and medium-sized samples, including for unit root processes, for both known and unknown lag orders.
    Keywords: Bootstrap, confidence intervals, joint confidence bands, vector autoregression, impulse response functions
    JEL: C15 C32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc18:181590&r=ecm
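    Illustration: a Python sketch of the bootstrap scaffolding for an AR(1) impulse response, adjusting the mean (bias) of the bootstrap coefficient distribution before mapping it into impulse responses; the skewness adjustment that is the paper's contribution is not reproduced here. Sample size, lag order, and coverage level are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      T, phi = 200, 0.9
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = phi * y[t - 1] + rng.normal()

      phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])   # AR(1) OLS
      resid = y[1:] - phi_hat * y[:-1]

      B, H = 999, 8
      phi_star = np.empty(B)
      for b in range(B):                        # residual bootstrap
          e = rng.choice(resid, size=T, replace=True)
          yb = np.zeros(T)
          for t in range(1, T):
              yb[t] = phi_hat * yb[t - 1] + e[t]
          phi_star[b] = (yb[:-1] @ yb[1:]) / (yb[:-1] @ yb[:-1])

      # mean-adjust the coefficient distribution, then compute IRFs phi^h
      phi_adj = phi_star - (phi_star.mean() - phi_hat)
      irf = phi_adj[:, None] ** np.arange(1, H + 1)
      lo, hi = np.percentile(irf, [5, 95], axis=0)
      print("90% interval for IRF at h=4:", lo[3], hi[3], " point:", phi_hat**4)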
  9. By: Wiktor Budziński (University of Warsaw, Faculty of Economic Sciences); Mikołaj Czajkowski (University of Warsaw, Faculty of Economic Sciences)
    Abstract: We investigate the problem of endogeneity in the context of hybrid choice (integrated choice and latent variable) models. We first provide a thorough analysis of potential causes of endogeneity and propose a working taxonomy. We demonstrate that, although it is widely believed that the hybrid choice framework is devoid of the endogeneity problem, there is no theoretical reason to expect this to be the case, and we then show empirically that the problem exists in the hybrid choice framework too. In a Monte Carlo experiment, we quantify the extent of the resulting measurement and endogeneity biases. Finally, we propose two novel solutions: explicitly accounting for the correlation between the structural and discrete choice component error terms (or with random parameters in the utility function), or introducing additional latent variables. Using simulated data, we demonstrate that these approaches work as expected, that is, they result in unbiased estimates of all model parameters.
    Keywords: hybrid choice models, endogeneity, measurement bias, attitudinal variables, indicators
    JEL: C35 C51 Q51 R41
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2018-21&r=ecm
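    Illustration: a tiny Monte Carlo in Python showing the measurement bias that arises when an error-ridden attitudinal indicator is plugged directly into the utility function in place of the latent variable; adding correlation between the two error terms would produce the endogeneity bias the paper analyzes. The DGP is an assumption; statsmodels provides the probit fit.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 20000
      lv = rng.normal(size=n)                   # latent attitude
      ind = lv + rng.normal(size=n)             # noisy attitudinal indicator
      x = rng.normal(size=n)
      y = (1.0 * x + 1.0 * lv + rng.normal(size=n) > 0).astype(float)

      # naive practice: use the indicator as if it were the latent variable
      X = sm.add_constant(np.column_stack([x, ind]))
      print(sm.Probit(y, X).fit(disp=0).params)
      # the coefficient on the indicator comes out near 0.41 versus the
      # (rescaled) 0.82 on x: attenuation from measurement error, before any
      # endogeneity is even introduced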
  10. By: Blanco, German (Illinois State University); Chen, Xuan (Renmin University of China); Flores, Carlos A. (California Polytechnic State University); Flores-Lagunes, Alfonso (Syracuse University)
    Abstract: We consider the problem of assessing the effects of a treatment on duration outcomes using data from a randomized evaluation with noncompliance. For such settings, we derive nonparametric sharp bounds for average and quantile treatment effects addressing three pervasive problems simultaneously: self-selection into the spell of interest, endogenous censoring of the duration outcome, and noncompliance with the assigned treatment. Ignoring any of these issues could yield biased estimates of the effects. Notably, the proposed bounds do not impose the independent censoring assumption - which is commonly used to address censoring but is likely to fail in important settings - or exclusion restrictions to address endogeneity of censoring and selection. Instead, they employ monotonicity and stochastic dominance assumptions. To illustrate the use of these bounds we assess the effects of the Job Corps (JC) training program on its participants' last complete employment spell duration. Our estimated bounds suggest that JC participation may increase the average duration of the last complete employment spell before week 208 after randomization by at least 5.6 log points (5.8 percent) for individuals who comply with their treatment assignment and experience a complete employment spell whether or not they enrolled in JC. The estimated quantile treatment effects suggest the impacts may be heterogeneous, and strengthen our conclusions based on the estimated average effects.
    Keywords: duration outcomes, partial identification, principal stratification, independent censoring, job corps
    JEL: C21 C24 C41 J64
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp11864&r=ecm
  11. By: Stéphane Auray (CREST; ENSAI; ULCO); Nicolas Lepage-Saucier (CREST; ENSAI); Purevdorj Tuvaandorj (CREST; ENSAI)
    Abstract: This paper develops robust inference methods for moment condition models implemented with an n^(1/2)-consistent auxiliary estimator of the nuisance parameters. When applied to models subject to weak identification and boundary parameter problems, they simultaneously overcome both irregularities and are asymptotically pivotal with minimal assumptions on the parameter space. If these problems are not present in the data, they are asymptotically equivalent to standard statistics for nonlinear models. They also have similar computational requirements. We apply our tests to the differentiated products demand model, which may suffer from both problems: the variance of the random coefficients is often close to zero, causing the boundary parameter problem, and the strength of the available instruments is often in doubt, which may cause weak identification. We evaluate the performance of the proposed tests by simulations.
    Keywords: Boundary parameter, heterogeneity, pivotal statistic, random utility, robust inference, weak identification.
    Date: 2018–08–25
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2018-13&r=ecm
  12. By: Bedoya, Guadalupe (World Bank); Bitarello, Luca (Northwestern University); Davis, Jonathan (University of Chicago); Mittag, Nikolas (CERGE-EI)
    Abstract: Program evaluations often focus on average treatment effects. However, average treatment effects miss important aspects of policy evaluation, such as the impact on inequality and whether treatment harms some individuals. A growing literature develops methods to evaluate such issues by examining the distributional impacts of programs and policies. This toolkit reviews methods to do so, focusing on their application to randomized control trials. The paper emphasizes two strands of the literature: estimation of impacts on outcome distributions and estimation of the distribution of treatment impacts. The article then discusses extensions to conditional treatment effect heterogeneity, that is, to analyses of how treatment impacts vary with observed characteristics. The paper offers advice on inference, testing, and power calculations, which are important when implementing distributional analyses in practice. Finally, the paper illustrates select methods using data from two randomized evaluations.
    Keywords: policy evaluation, distributional impact analysis, heterogeneous treatment effects, impacts on outcome distributions, distribution of treatment effects, randomized control trials
    JEL: C18 C21 C54 C93 D39
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp11863&r=ecm
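    Illustration: the distinction the toolkit stresses, in a short Python sketch for a randomized trial: quantile treatment effects identify impacts on the outcome distribution, not the distribution of individual impacts. The lognormal outcomes and normal impact heterogeneity are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 4000
      d = rng.uniform(size=n) < 0.5             # random assignment
      y0 = rng.lognormal(size=n)                # control potential outcomes
      tau = rng.normal(0.5, 0.5, size=n)        # heterogeneous treatment impacts
      y = y0 + d * tau                          # observed outcome

      qs = np.array([10, 25, 50, 75, 90])
      qte = np.percentile(y[d], qs) - np.percentile(y[~d], qs)
      print("ATE: ", y[d].mean() - y[~d].mean())
      print("QTEs:", np.round(qte, 3))          # impacts on the outcome
                                                # distribution, by quantile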
  13. By: Bryan S. Graham; Cristine Campos de Xavier Pinto
    Abstract: Let Y be an outcome of interest, X a vector of treatment measures, and W a vector of pre-treatment control variables. Here X may include (combinations of) continuous, discrete, and/or non-mutually exclusive "treatments". Consider the linear regression of Y onto X in a subpopulation homogeneous in W = w (formally a conditional linear predictor). Let b0(w) be the coefficient vector on X in this regression. We introduce a semiparametrically efficient estimate of the average beta0 = E[b0(W)]. When X is binary-valued (multi-valued) our procedure recovers the (a vector of) average treatment effect(s). When X is continuously-valued, or consists of multiple non-exclusive treatments, our estimand coincides with the average partial effect (APE) of X on Y when the underlying potential response function is linear in X, but otherwise heterogeneous across agents. When the potential response function takes a general nonlinear/heterogeneous form, and X is continuously-valued, our procedure recovers a weighted average of the gradient of this response across individuals and values of X. We provide a simple, semiparametrically efficient method of covariate adjustment for settings with complicated treatment regimes. Our method generalizes familiar methods of covariate adjustment used for program evaluation as well as methods of semiparametric regression (e.g., the partially linear regression model).
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1810.12511&r=ecm
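    Illustration: the estimand in Python for the special case of a discrete W, where b0(w) can be estimated by OLS within each W-cell and then averaged over the marginal distribution of W; the paper's estimator handles general covariates and attains the semiparametric efficiency bound, which this cell-by-cell sketch does not address. The DGP is assumed.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 50000
      w = rng.integers(0, 4, size=n)            # discrete control covariate
      x = rng.normal(size=n) + w                # treatment correlated with W
      slope = np.array([0.0, 0.5, 1.0, 1.5])    # b0(w) varies with w
      y = slope[w] * x + rng.normal(size=n)

      beta0_hat = 0.0
      for v in range(4):                        # OLS of Y on X within each cell,
          m = w == v                            # averaged with weights P(W = w)
          Zv = np.column_stack([np.ones(m.sum()), x[m]])
          beta0_hat += m.mean() * np.linalg.lstsq(Zv, y[m], rcond=None)[0][1]
      print("estimate of E[b0(W)]:", beta0_hat, " truth:", slope.mean())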
  14. By: Mohit Agrawal; Joseph G. Altonji; Richard K. Mansfield
    Abstract: We extend the control function approach of Altonji and Mansfield (2018) to allow for multiple group levels and complementarities. Our analysis provides a foundation for causal interpretation of multilevel mixed effects models in the presence of sorting. In our empirical application, we obtain lower bound estimates of the importance of school and commuting zone inputs for education and wages. A school/location combination at the 90th versus 10th percentile of the school/location quality distribution increases the high school graduation probability and college enrollment probability by at least .06 and .17, respectively. Treatment effects are heterogeneous across subgroups, primarily due to nonlinearity in the educational attainment model.
    JEL: C1 C31 I20 I24 R23
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25167&r=ecm
  15. By: Chiara Perricone (DEF, University of Rome "Tor Vergata")
    Abstract: A problem often faced by economic researchers is the interpolation or distribution of economic time series observed at low frequency into compatible higher-frequency data. A method based on wavelet analysis is presented for temporally disaggregating time series. A standard "plausible" method is applied, not to the original time series, but to the smooth components resulting from a discrete wavelet transformation. This first step generates a smoothed component at the desired frequency. Subsequently, a noisy component is added to the smooth series to enforce the natural constraint of the series. The method is applied to national accounts for the Euro Area, covering both flow and stock variables, and it outperforms other standard methods, such as Stram and Wei or low-pass interpolation, when the series of interest is volatile.
    Keywords: wavelet, temporal disaggregation, sector financial accounts
    JEL: C10 C65 C32 E32
    Date: 2018–10–29
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:444&r=ecm
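    Illustration: the two-step logic in a rough Python sketch using PyWavelets: build a preliminary high-frequency path, keep only its smooth DWT component, then add the adjustment ("noisy component") that restores the annual aggregation constraint. The equal-split preliminary path, the db4 wavelet, and the uniform within-year adjustment are stand-ins for the paper's choices.

      import numpy as np
      import pywt

      rng = np.random.default_rng(10)
      Yrs = 16
      xq = np.cumsum(rng.normal(0.2, 1.0, size=4 * Yrs))   # latent quarterly series
      A = xq.reshape(Yrs, 4).sum(axis=1)                   # only annual sums observed

      # step 1: preliminary quarterly path; keep its smooth DWT component
      prelim = np.repeat(A / 4.0, 4)
      coeffs = pywt.wavedec(prelim, "db4", mode="periodization")
      coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]  # drop detail scales
      smooth = pywt.waverec(coeffs, "db4", mode="periodization")[:4 * Yrs]

      # step 2: add a noisy component so quarters again sum to the annual totals
      gap = (A - smooth.reshape(Yrs, 4).sum(axis=1)) / 4.0
      xq_hat = smooth + np.repeat(gap, 4)
      print("constraint holds:",
            np.allclose(xq_hat.reshape(Yrs, 4).sum(axis=1), A))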
  16. By: Xu Huang (De Montfort University)
    Abstract: As technology and society develop at increasing speed, the complexity of multivariate analysis has risen rapidly, while the wide range of existing empirical methods (parametric and a limited set of nonparametric approaches) retains well-known drawbacks. This research expands the multivariate extension of subspace-based techniques, contributing both theoretical advancements and a broader range of applications in complex systems such as economics and the social sciences. The subspace-based techniques adopted here include Singular Value Decomposition (SVD), Singular Spectrum Analysis (SSA), and Convergent Cross Mapping (CCM), all of which are nonparametric and assumption-free, impose no restrictions on nonlinearity or complex dynamics, and take signal and noise together, as a whole, as the object of study. The research proposes two novel multivariate analysis methods: a mutual association measure based on an eigenvalue criterion, and a hybrid causality detection approach combining SSA and CCM. Both simulations and several successful implementations are used to critically evaluate the proposed advancements, with promising, robust performance. The proposed approaches offer interested parties a different, reduced-form, data-oriented angle on multivariate analysis questions, and they are expected to open up research opportunities for nonparametric multivariate analysis through inclusive subspace-based techniques that show strong adaptability and capability in the analysis of complex systems in economics and social science.
    Keywords: Subspace-based Techniques; Multivariate analysis advancements; Causality detection; Mutual Association Measure; Singular Spectrum Analysis; Convergent Cross Mapping
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:sek:iacpro:6509598&r=ecm
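    Illustration: basic SSA, the core subspace technique here, in a short Python sketch: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct the signal from the leading eigentriples by diagonal averaging; the eigenvalue shares of this SVD are the kind of quantity the proposed association measure builds on. The series and window length are assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      N, L = 200, 40                            # series length, window length
      t = np.arange(N)
      y = np.sin(2 * np.pi * t / 20) + 0.05 * t + 0.5 * rng.normal(size=N)

      K = N - L + 1
      X = np.column_stack([y[i:i + L] for i in range(K)])  # trajectory matrix
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      print("leading eigenvalue shares:", np.round(s[:4]**2 / np.sum(s**2), 3))

      r = 3                                     # keep trend + sine pair
      Xr = (U[:, :r] * s[:r]) @ Vt[:r]
      recon, counts = np.zeros(N), np.zeros(N)
      for j in range(K):                        # diagonal (Hankel) averaging
          recon[j:j + L] += Xr[:, j]
          counts[j:j + L] += 1
      recon /= counts
      print("residual std after SSA smoothing:", np.std(y - recon))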

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.