
nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒03‒02
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Dynamic Panel Data Models Using ε-contamination By Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi; Guy Lacroix
  2. Inferences for Partially Conditional Quantile Treatment Effect Model By Zongwu Cai; Ying Fang; Ming Lin; Shengfang Tang
  3. Partial Identification and Inference for Dynamic Models and Counterfactuals By Myrto Kalouptsidi; Yuichi Kitamura; Lucas Lima; Eduardo A. Souza-Rodrigues
  4. Simple Tests for Stock Return Predictability with Improved Size and Power Properties By Leybourne, Stephen J; Harvey, David I; Taylor, AM Robert
  5. A New Nonlinear Wavelet-Based Unit Root Test with Structural Breaks By Aydin, Mucahit
  6. Nonparametric Significance Testing in Measurement Error Models By Hao Dong; Luke Taylor
  7. Density forecast combinations: the real-time dimension By McAdam, Peter; Warne, Anders
  8. Dependence-Robust Inference Using Resampled Statistics By Michael P. Leung
  9. Nonparametric forecasting of multivariate probability density functions By Dominique Guegan; Matteo Iacopini
  10. Beyond Connectedness: A Covariance Decomposition based Network Risk Model By Umut Akovali
  11. Estimation of the Financial Cycle with a Rank-Reduced Multivariate State-Space Model By Rob Luginbuhl
  12. Bayesian estimation of agent-based models via adaptive particle Markov chain Monte Carlo By Lux, Thomas
  13. Efficient Policy Learning from Surrogate-Loss Classification Reductions By Andrew Bennett; Nathan Kallus
  14. Blinder-Oaxaca decomposition with recursive tree-based methods: a technical note By Olga Takacs; Janos Vincze
  15. Econometrics at scale: Spark up big data in economics By Bluhm, Benjamin; Cutura, Jannic
  16. A Bayesian Covariance Graph And Latent Position Model For Multivariate Financial Time Series By Daniel Felix Ahelegbey; Luis Carvalho; Eric D. Kolaczyk
  17. Time-inhomogeneous Gaussian stochastic volatility models: Large deviations and super roughness By Archil Gulisashvili
  18. Invariant measures for fractional stochastic volatility models By Bal\'azs Gerencs\'er; Mikl\'os R\'asonyi

  1. By: Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi; Guy Lacroix
    Abstract: This paper extends the work of Baltagi et al. (2018) to the popular dynamic panel data model. We investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, we consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner (1986)'s g-priors for the variance-covariance matrices. We propose a general "toolbox" for a wide range of specifications, including the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, the Hausman-Taylor world, and dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, we compare the finite sample properties of our proposed estimator to those of standard classical estimators. The paper contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
    Keywords: Dynamic Model, ε-contamination, g-priors, Type-II Maximum Likelihood Posterior Density, Panel Data, Robust Bayesian Estimator, Two-Stage Hierarchy
    JEL: C11 C23 C26
    Date: 2020–02–03
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2020s-07&r=all
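    A minimal Python sketch of the ε-contamination idea for a toy scalar normal mean, not the paper's dynamic panel estimator (the g-priors, two-stage hierarchy and hyperparameter values are replaced by illustrative choices; the type-II ML step here only tunes the contaminating prior):
      # Toy epsilon-contamination prior for a normal mean: the prior is a
      # (1-eps)/eps mixture of a base elicited prior and a contaminating prior
      # whose hyperparameters are chosen by type-II maximum likelihood.
      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(0)
      y = rng.normal(loc=1.0, scale=1.0, size=50)    # data
      sigma2 = 1.0                                   # known sampling variance
      eps = 0.3                                      # contamination weight
      mu0, tau02 = 0.0, 1.0                          # base elicited prior N(mu0, tau0^2)

      def marginal_loglik(tau2_q, mu_q):
          # marginal likelihood of the sample mean under a N(mu_q, tau2_q) prior
          ybar, n = y.mean(), len(y)
          return stats.norm.logpdf(ybar, loc=mu_q, scale=np.sqrt(tau2_q + sigma2 / n))

      # type-II ML: pick the contaminating prior that maximises the marginal likelihood
      res = optimize.minimize(lambda p: -marginal_loglik(np.exp(p[0]), p[1]), x0=[0.0, 0.0])
      tau2_q, mu_q = np.exp(res.x[0]), res.x[1]

      def posterior_mean(mu_prior, tau2_prior):
          ybar, n = y.mean(), len(y)
          w = (n / sigma2) / (n / sigma2 + 1.0 / tau2_prior)   # shrinkage weight
          return w * ybar + (1 - w) * mu_prior

      # posterior under the mixture prior = mixture of component posteriors,
      # weighted by prior weight times marginal likelihood
      ybar, n = y.mean(), len(y)
      w0 = (1 - eps) * np.exp(stats.norm.logpdf(ybar, mu0, np.sqrt(tau02 + sigma2 / n)))
      wq = eps * np.exp(marginal_loglik(tau2_q, mu_q))
      lam = w0 / (w0 + wq)
      robust_mean = lam * posterior_mean(mu0, tau02) + (1 - lam) * posterior_mean(mu_q, tau2_q)
      print(robust_mean)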
  2. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Ying Fang (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China); Ming Lin (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China); Shengfang Tang (Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China)
    Abstract: In this paper, a new model, termed the partially conditional quantile treatment effect (PCQTE) model, is proposed to characterize the heterogeneity of treatment effects conditional on some predetermined variable(s). We show that the partially conditional quantile treatment effect is identified under the assumption of selection on observables, which leads to a semiparametric estimation procedure in two steps: first, parametric estimation of the propensity score function and then, nonparametric estimation of the conditional quantile treatment effect. Under some regularity conditions, the consistency and asymptotic normality of the proposed semiparametric estimator are derived. In addition, a consistent specification test based on a Cramér-von Mises type criterion, new to the quantile regression literature, is proposed to test whether there exists heterogeneity in the PCQTE across sub-populations. The asymptotic properties of the proposed test statistic are investigated, including consistency and asymptotic normality. Finally, the performance of the proposed methods is illustrated through Monte Carlo experiments and an empirical application estimating the effect of a first-time mother's smoking during pregnancy on the baby's birth weight conditional on the mother's age, and testing whether the partially conditional quantile treatment effect varies across mothers' ages.
    Keywords: Conditional quantile treatment effect; Heterogeneity; Specification test; Propensity score; Semiparametric estimation.
    JEL: C12 C13 C14 C23
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202005&r=all
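    A compact Python sketch of the two-step procedure described above, with simplifications: a logit propensity score in step one and inverse-propensity-weighted quantiles within coarse age bins in step two (the paper's second step is a smooth nonparametric estimator; the simulated data, bins and variable names here are invented):
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 2000
      age = rng.uniform(18, 40, n)                   # conditioning variable
      x = rng.normal(size=(n, 2))                    # other covariates
      p_true = 1 / (1 + np.exp(-(0.3 * x[:, 0] - 0.2 * x[:, 1])))
      d = rng.binomial(1, p_true)                    # treatment indicator
      y = 1.0 + 0.5 * d + 0.02 * age + rng.normal(size=n)

      # Step 1: parametric propensity score
      covars = np.column_stack([x, age])
      phat = LogisticRegression().fit(covars, d).predict_proba(covars)[:, 1]

      def weighted_quantile(v, w, tau):
          order = np.argsort(v)
          cw = np.cumsum(w[order]) / w.sum()
          return v[order][np.searchsorted(cw, tau)]

      # Step 2: IPW quantiles of the two potential outcomes, conditional on age bins
      tau = 0.5
      for lo, hi in [(18, 25), (25, 32), (32, 40)]:
          m = (age >= lo) & (age < hi)
          q1 = weighted_quantile(y[m], d[m] / phat[m], tau)
          q0 = weighted_quantile(y[m], (1 - d[m]) / (1 - phat[m]), tau)
          print(f"age in [{lo},{hi}): PCQTE({tau}) ~ {q1 - q0:.3f}")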
  3. By: Myrto Kalouptsidi; Yuichi Kitamura; Lucas Lima; Eduardo A. Souza-Rodrigues
    Abstract: We provide a general framework for investigating partial identification of structural dynamic discrete choice models and their counterfactuals, along with uniformly valid inference procedures. In doing so, we derive sharp bounds for the model parameters, counterfactual behavior, and low-dimensional outcomes of interest, such as the average welfare effects of hypothetical policy interventions. We characterize the properties of the sets analytically and show that when the target outcome of interest is a scalar, its identified set is an interval whose endpoints can be calculated by solving well-behaved constrained optimization problems via standard algorithms. We obtain a uniformly valid inference procedure by an appropriate application of subsampling. To illustrate the performance and computational feasibility of the method, we consider both a Monte Carlo study of firm entry/exit, and an empirical model of export decisions applied to plant-level data from Colombian manufacturing industries. In these applications, we demonstrate how the identified sets shrink as we incorporate alternative model restrictions, providing intuition regarding the source and strength of identification.
    JEL: C0 C1 F0 L0
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26761&r=all
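    To illustrate the computational point about identified-set endpoints, a small Python sketch: the lower and upper ends of the set for a scalar outcome are obtained by minimising and maximising that outcome subject to the model's restrictions (the outcome function and constraints below are made up; the paper's constraints come from the dynamic discrete choice model):
      import numpy as np
      from scipy.optimize import minimize

      def outcome(theta):
          # scalar counterfactual outcome of interest, e.g. an average welfare effect
          return theta[0] + 0.5 * theta[1]

      constraints = [
          {"type": "ineq", "fun": lambda t: 1.0 - t[0] - t[1]},  # moment inequality g1(theta) >= 0
          {"type": "ineq", "fun": lambda t: t[0]},               # theta_0 >= 0
          {"type": "eq",   "fun": lambda t: t[1] - 0.2},         # moment equality
      ]

      lower = minimize(lambda t: outcome(t), x0=[0.5, 0.2], constraints=constraints)
      upper = minimize(lambda t: -outcome(t), x0=[0.5, 0.2], constraints=constraints)
      print("identified interval for the outcome:", (lower.fun, -upper.fun))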
  4. By: Leybourne, Stephen J; Harvey, David I; Taylor, AM Robert
    Abstract: Predictive regression methods are widely used to examine the predictability of (excess) stock returns by lagged financial variables characterised by unknown degrees of persistence and endogeneity. We develop new and easy to implement tests for predictability in these circumstances using regression t-ratios. The simplest possible test, optimal (under Gaussianity) for a weakly persistent and exogenous predictor, is based on the standard t-ratio from the OLS regression of returns on a constant and the lagged predictor. Where the predictor is endogenous, we show that the optimal, but infeasible, test for predictability is based on the t-ratio on the lagged predictor when augmenting the basic predictive regression above with the current period innovation driving the predictor. We propose a feasible version of this test, designed for the case where the predictor is an endogenous near-unit root process, using a GLS-based estimate of this innovation. We also discuss a variant of the standard t-ratio obtained from the predictive regression of OLS demeaned returns on the GLS demeaned lagged predictor. In the near-unit root case, the limiting null distributions of these three statistics depend on both the endogeneity correlation parameter and the local-to-unity parameter characterising the predictor. A feasible method for obtaining asymptotic critical values is discussed and response surfaces are provided. To develop procedures which display good size and power properties regardless of the degree of persistence of the predictor, we propose tests based on weighted combinations of the three t-ratios discussed above, where the weights are obtained using the p-values from a unit root test on the predictor. Using Monte Carlo methods we compare our preferred weighted test with the leading tests in the literature. These results suggest that, despite their simplicity, our weighted tests display very good finite sample size control and power across a range of persistence and endogeneity levels for the predictor, comparing very favourably with these extant tests. An empirical illustration using US stock returns is provided.
    Keywords: predictive regression, persistence, endogeneity, weighted statistics
    Date: 2020–02–24
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:26886&r=all
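    A hedged Python sketch of the weighting mechanics only: two predictability t-ratios are combined with weights driven by a unit-root test p-value for the predictor (the paper's statistics, GLS demeaning and weight function are more involved; everything below is a simplified stand-in on simulated data):
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(2)
      T = 300
      x = np.cumsum(rng.normal(size=T)) * 0.1 + rng.normal(size=T)  # persistent predictor
      r = 0.05 * x[:-1] + rng.normal(size=T - 1)                    # returns

      # t-ratio from OLS of returns on a constant and the lagged predictor
      t_ols = sm.OLS(r, sm.add_constant(x[:-1])).fit().tvalues[1]

      # second t-ratio from demeaned variables (placeholder for the GLS-demeaned version)
      t_dem = sm.OLS(r - r.mean(), x[:-1] - x[:-1].mean()).fit().tvalues[0]

      # weight from the ADF p-value of the predictor: evidence of a near-unit root
      # shifts weight towards the statistic designed for persistent predictors
      p_ur = adfuller(x)[1]
      w = 1.0 - p_ur
      t_weighted = w * t_dem + (1.0 - w) * t_ols
      print(t_ols, t_dem, p_ur, t_weighted)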
  5. By: Aydin, Mucahit
    Abstract: In the literature, there are no nonlinear wavelet-based unit root tests with structural breaks. To fill this gap, this study proposes new wavelet-based unit root tests that take into account nonlinearity and structural breaks. According to Monte Carlo simulation results, the proposed tests show better size and power properties as the sample size increases. Moreover, the results indicate that the Fourier Wavelet-based KSS (FWKSS) unit root test is more powerful than the WKSS test in the presence of structural breaks.
    Keywords: Unit Root Test, Nonlinearity, Wavelet, Fourier Function.
    JEL: C12 C22
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:98693&r=all
  6. By: Hao Dong (Southern Methodist University); Luke Taylor (Aarhus University)
    Abstract: We develop the first nonparametric significance test for regression models with classical measurement error in the regressors. In particular, the Cramér-von Mises test and the Kolmogorov-Smirnov test for the null hypothesis $E[Y|X^{*},Z^{*}]=E[Y|Z^{*}]$ are proposed when only noisy measurements of $X^{*}$ and $Z^{*}$ are available. The asymptotic null distributions of the test statistics are derived and a bootstrap method is implemented to obtain the critical values. Despite the test statistics being constructed using deconvolution estimators, we show that the test can detect a sequence of local alternatives converging to the null at the root-n rate. We also highlight the finite sample performance of the test through a Monte Carlo study.
    Keywords: Significance test, deconvolution, classical measurement error, unknown error distribution.
    JEL: C14
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:smu:ecowpa:2003&r=all
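    A stripped-down Python sketch of a Cramér-von Mises-type significance statistic for H0: E[Y|X,Z]=E[Y|Z], omitting the deconvolution step the paper uses to handle measurement error (X and Z are treated as correctly observed, and the bandwidth and bootstrap below are crude illustrations):
      import numpy as np

      rng = np.random.default_rng(3)
      n = 400
      z = rng.normal(size=n)
      x = rng.normal(size=n)
      y = 1.0 + np.sin(z) + rng.normal(scale=0.5, size=n)   # X is irrelevant under H0

      def nw(y, z, h=0.3):
          # Nadaraya-Watson estimate of E[Y|Z] at the sample points
          k = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
          return k @ y / k.sum(axis=1)

      def cvm_stat(y, x, z):
          e = y - nw(y, z)                                    # residuals under the null
          ind = (x[None, :] <= x[:, None]) & (z[None, :] <= z[:, None])
          marked = (ind * e[None, :]).mean(axis=1)            # marked empirical process
          return n * (marked ** 2).mean()

      stat = cvm_stat(y, x, z)

      # wild-bootstrap critical value: regenerate Y under the null and recompute
      fitted, resid = nw(y, z), y - nw(y, z)
      boot = [cvm_stat(fitted + resid * rng.choice([-1.0, 1.0], size=n), x, z)
              for _ in range(200)]
      print(stat, np.quantile(boot, 0.95))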
  7. By: McAdam, Peter; Warne, Anders
    Abstract: Density forecast combinations are examined in real time using the log score to compare five methods: fixed weights, static and dynamic prediction pools, as well as Bayesian and dynamic model averaging. Since real-time data involve one vintage per time period and are subject to revisions, the chosen actuals for such comparisons typically differ from the information that can be used to compute model weights. The terms observation lag and information lag are introduced to clarify the different time shifts involved for these computations and we discuss how they influence the combination methods. We also introduce upper and lower bounds for the density forecasts, allowing us to benchmark the combination methods. The empirical study employs three DSGE models and two BVARs, where the former are variants of the Smets and Wouters model and the latter serve as benchmarks. The models are estimated on real-time euro area data and the forecasts cover 2001–2014, focusing on inflation and output growth. We find that some combinations are superior to the individual models for the joint and the output forecasts, mainly due to over-confident forecasts of the BVARs during the Great Recession. Combinations with limited weight variation over time and with positive weights on all models provide better forecasts than those with greater weight variation. For the inflation forecasts, the DSGE models are better overall than the BVARs and the combination methods.
    Keywords: Bayesian inference, euro area, forecast comparisons, model averaging, prediction pools, predictive likelihood
    JEL: C11 C32 C52 C53 E37
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202378&r=all
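    A small Python sketch of a static (log-score optimal) prediction pool: given each model's predictive density evaluated at the realised outcome in every period, choose simplex weights maximising the sum of log scores (the density values below are simulated placeholders rather than DSGE/BVAR output):
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      T, M = 60, 5                                  # periods, models
      p = rng.uniform(0.05, 1.0, size=(T, M))       # p[t, i]: model i's density at the actual

      def neg_log_score(v):
          w = np.exp(v) / np.exp(v).sum()           # softmax keeps weights on the simplex
          return -np.log(p @ w).sum()

      res = minimize(neg_log_score, x0=np.zeros(M))
      w_hat = np.exp(res.x) / np.exp(res.x).sum()
      print("pool weights:", w_hat.round(3))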
  8. By: Michael P. Leung
    Abstract: We develop inference procedures robust to general forms of weak dependence. These involve test statistics constructed by resampling data in a manner that does not depend on the unknown correlation structure of the data. The statistics are simple to compute and asymptotically normal under the weak requirement that the target parameter can be consistently estimated at the parametric rate. This requirement holds for regular estimators under many well-known forms of weak dependence and justifies the claim of dependence-robustness. We consider applications to settings with unknown or complicated forms of dependence, with various forms of network dependence as leading examples. We develop tests for both moment equalities and inequalities.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.02097&r=all
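    A minimal Python illustration of the resampling idea: the statistic is recomputed on a random draw of k << n observations taken without replacement, so its construction never uses the unknown dependence structure (the paper's aggregation over many draws and its studentisation are omitted; the data and null value are invented):
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      x = rng.normal(loc=0.2, size=500)         # possibly weakly dependent sample
      mu0 = 0.0                                 # null value for the mean
      k = int(np.sqrt(len(x)))                  # resample size, small relative to n

      sub = rng.choice(x, size=k, replace=False)
      t_res = np.sqrt(k) * (sub.mean() - mu0) / sub.std(ddof=1)
      print("resampled t:", t_res, "5% two-sided normal critical value:", stats.norm.ppf(0.975))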
  9. By: Dominique Guegan (UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Labex ReFi - UP1 - Université Panthéon-Sorbonne, University of Ca’ Foscari [Venice, Italy]); Matteo Iacopini (UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Labex ReFi - UP1 - Université Panthéon-Sorbonne, University of Ca’ Foscari [Venice, Italy])
    Abstract: The study of dependence between random variables is at the core of theoretical and applied statistics. Static and dynamic copula models are useful for describing the dependence structure, which is fully encoded in the copula probability density function. However, these models are not always able to describe the temporal change of the dependence patterns, which is a key characteristic of financial data. We propose a novel nonparametric framework for modelling a time series of copula probability density functions, which allows us to forecast the entire function without the need for post-processing procedures to guarantee positiveness and unit integral. We exploit a suitable isometry that allows us to transfer the analysis to a subset of the space of square integrable functions, where we build on nonparametric functional data analysis techniques to perform the analysis. The framework does not assume the densities to belong to any parametric family and it can also be successfully applied to general multivariate probability density functions with bounded or unbounded support. Finally, a noteworthy field of application pertains to the study of time-varying networks represented through vine copula models. We apply the proposed methodology to estimate and forecast the time-varying dependence structure between the S&P500 and NASDAQ indices.
    Keywords: nonparametric statistics,functional PCA,multivariate densities,copula,functional time series,forecast,unbounded support
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:hal:journl:halshs-01821815&r=all
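    A compact Python sketch of the workflow in the abstract, with simplifications: each period's density is mapped to an unconstrained curve via a centred log-ratio transform on a grid, a functional PCA is run on the transformed curves, the scores are forecast with a naive AR(1), and the forecast is mapped back to a proper density (the paper's isometry, copula setting and forecasting method are richer than this):
      import numpy as np

      rng = np.random.default_rng(6)
      grid = np.linspace(0.01, 0.99, 99)
      dx = grid[1] - grid[0]
      T = 100
      # simulated time series of densities on (0,1): Beta-like shapes with a drifting parameter
      a = 2 + np.cumsum(rng.normal(scale=0.05, size=T))
      dens = np.array([grid ** (ai - 1) * (1 - grid) for ai in a])
      dens /= dens.sum(axis=1, keepdims=True) * dx

      clr = np.log(dens) - np.log(dens).mean(axis=1, keepdims=True)   # unconstrained curves

      # functional PCA on the transformed curves
      mean_curve = clr.mean(axis=0)
      u, s, vt = np.linalg.svd(clr - mean_curve, full_matrices=False)
      scores = u[:, :2] * s[:2]                                        # first two components

      # naive AR(1) forecast of each score, then map back to a density
      phi = [np.polyfit(scores[:-1, j], scores[1:, j], 1)[0] for j in range(2)]
      clr_fc = mean_curve + phi[0] * scores[-1, 0] * vt[0] + phi[1] * scores[-1, 1] * vt[1]
      dens_fc = np.exp(clr_fc)
      dens_fc /= dens_fc.sum() * dx                                    # renormalise to a density
      print(dens_fc[:5])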
  10. By: Umut Akovali (Koc University)
    Abstract: This study extends the Diebold-Yilmaz Connectedness Index (DYCI) methodology and, based on forecast error covariance decompositions, derives a network risk model for a portfolio of assets. As a normalized measure of the sum of variance contributions, system-wide connectedness averages out the information embedded in the covariance matrix in aggregating pairwise directional measures. This matters, especially when there are large differences in asset variances. As a first step towards deriving the network risk model, the portfolio covariance matrix is decomposed to obtain the network-driven component of the portfolio variance using covariance decompositions. A second step shows that a common factor model can be estimated to obtain both the variance and covariance decompositions. In a third step, using quantile regressions, the proposed network risk model is estimated for different shock sizes. It is shown that, in contrast to the DYCI model, the dynamic quantile estimation of the network risk model can differentiate even small shocks at both tails. This result is obtained because the network risk model makes full use of information embedded in the covariance matrix. Estimation results show that in two recent episodes of financial market turmoil, the proposed network risk model captures the responses to systemic events better than the system-wide index.
    Keywords: Connectedness; Covariance decomposition; Factor models; Idiosyncratic risk; Portfolio risk; Quantile regressions; Systemic risk; Vector Autoregressions; Variance decomposition.
    JEL: C32 G21
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:koc:wpaper:2003&r=all
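    A tiny Python sketch of the first step mentioned above: the portfolio variance w'Σw is split into an own-variance part and a cross-covariance ("network-driven") part (the covariance matrix and weights are illustrative; the paper goes on to decompose the covariances with a factor model and quantile regressions):
      import numpy as np

      sigma = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])     # asset covariance matrix
      w = np.array([0.5, 0.3, 0.2])              # portfolio weights

      total_var = w @ sigma @ w
      own_var = np.sum(w ** 2 * np.diag(sigma))  # contribution of own variances
      network_var = total_var - own_var          # contribution of cross-covariances
      print(total_var, own_var, network_var)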
  11. By: Rob Luginbuhl (CPB Netherlands Bureau for Economic Policy Analysis)
    Abstract: We propose a model-based method to estimate a unique financial cycle based on a rank-restricted multivariate state-space model. This permits us to use mixed-frequency data, allowing for longer sample periods. In our model the financial cycle dynamics are captured by an unobserved trigonometric cycle component. We identify a single financial cycle from the multiple time series by imposing rank reduction on this cycle component. The rank reduction can be justified based on a principal components argument. The model also includes unobserved components to capture the business cycle, time-varying seasonality, trends, and growth rates in the data. In this way we can control for these effects when estimating the financial cycle. We apply our model to US and Dutch data and conclude that a bivariate model of credit and house prices is sufficient to estimate the financial cycle.
    JEL: E5 F3 G15 G01
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:cpb:discus:409.rdf&r=all
  12. By: Lux, Thomas
    Abstract: Over the last decade, agent-based models in economics have reached a state of maturity that brought the tasks of statistical inference and goodness-of-fit of such models on the agenda of the research community. While most available papers have pursued a frequentist approach adopting either likelihood-based algorithms or simulated moment estimators, here we explore Bayesian estimation using a Markov chain Monte Carlo approach (MCMC). One major problem in the design of MCMC estimators is finding a parametrization that leads to a reasonable acceptance probability for new draws from the proposal density. With agent-based models the appropriate choice of the proposal density and its parameters becomes even more complex since such models often require a numerical approximation of the likelihood. This brings in additional factors affecting the acceptance rate as it will also depend on the approximation error of the likelihood. In this paper, we take advantage of a number of recent innovations in MCMC: We combine Particle Filter Markov Chain Monte Carlo (PMCMC) as proposed by Andrieu et al. (2010) with adaptive choice of the proposal distribution and delayed rejection in order to identify an appropriate design of the MCMC estimator. We illustrate the methodology using two well-known behavioral asset pricing models.
    Keywords: Agent-based models, Markov chain Monte Carlo, particle filter
    JEL: G12 C15 C58
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:202001&r=all
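    A bare-bones Python sketch of particle marginal Metropolis-Hastings for a toy stochastic volatility model, illustrating the PMCMC building block; the adaptive proposal and delayed-rejection refinements discussed in the paper, and its agent-based models, are not reproduced here:
      import numpy as np

      rng = np.random.default_rng(7)

      # simulate toy data: h_t = phi*h_{t-1} + sig*eta_t, y_t = exp(h_t/2)*eps_t
      T, phi_true, sig_true = 200, 0.9, 0.3
      h = np.zeros(T)
      for t in range(1, T):
          h[t] = phi_true * h[t - 1] + sig_true * rng.normal()
      y = np.exp(h / 2) * rng.normal(size=T)

      def pf_loglik(phi, sig, n_part=300):
          # bootstrap particle filter estimate of the log-likelihood
          particles = rng.normal(scale=sig / np.sqrt(1 - phi ** 2), size=n_part)
          ll = 0.0
          for t in range(T):
              particles = phi * particles + sig * rng.normal(size=n_part)
              w = np.exp(-0.5 * y[t] ** 2 * np.exp(-particles) - particles / 2)
              ll += np.log(w.mean() / np.sqrt(2 * np.pi))
              particles = rng.choice(particles, size=n_part, p=w / w.sum())  # resampling
          return ll

      # particle marginal Metropolis-Hastings over (phi, sigma), flat priors for brevity
      theta = np.array([0.5, 0.5])
      ll_cur = pf_loglik(*theta)
      draws = []
      for _ in range(300):
          prop = theta + rng.normal(scale=0.05, size=2)
          if abs(prop[0]) < 1 and prop[1] > 0:
              ll_prop = pf_loglik(*prop)
              if np.log(rng.uniform()) < ll_prop - ll_cur:
                  theta, ll_cur = prop, ll_prop
          draws.append(theta.copy())
      print(np.mean(draws, axis=0))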
  13. By: Andrew Bennett; Nathan Kallus
    Abstract: Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But, efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, either direct, inverse-propensity weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments on solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.05153&r=all
  14. By: Olga Takacs (Corvinus University of Budapest and Center for Economic and Regional Studies, Institute of Economics); Janos Vincze (Corvinus University of Budapest and Center for Economic and Regional Studies, Institute of Economics)
    Abstract: The Blinder-Oaxaca decomposition was developed to detect and characterize discriminatory treatment, and one of its most frequent uses has been the study of wage discrimination. It recognizes that the mere difference between the average wages of two groups need not mean discrimination (in a very wide sense of the word), but may be due to different characteristics the groups possess. It decomposes average differences in the variable of interest into two parts: one explained by observable features of the two groups, and an unexplained part, which may signal discrimination. The methodology was originally developed for OLS estimates, but it has been generalized in several nonlinear directions. In this paper we describe a further extension of the basic idea: we apply Random Forest (RF) regression to estimate the explained and unexplained parts, and then we employ the CART (Classification and Regression Tree) methodology to identify the groups for which discrimination is most or least severe.
    Keywords: Oaxaca-Blinder decomposition, Random Forest regression, CART
    JEL: C10 C14 C18
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:has:discpr:1923&r=all
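    A short Python sketch of the tree-based decomposition idea: a Random Forest wage model fitted on one group serves as the reference wage structure, and the raw gap is split into an explained part (differences in characteristics evaluated with that model) and an unexplained remainder (the simulated data, variable names and reference-group choice are illustrative; the paper's CART step for locating heterogeneous discrimination is not shown):
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(8)
      n = 3000
      group = rng.binomial(1, 0.5, n)                          # 1 = group A, 0 = group B
      x = np.column_stack([rng.normal(size=n) + 0.3 * group,   # characteristics differ by group
                           rng.uniform(0, 1, n)])
      wage = 2.0 + 0.8 * x[:, 0] + 0.5 * x[:, 1] - 0.2 * group + rng.normal(scale=0.3, size=n)

      a, b = group == 1, group == 0
      raw_gap = wage[a].mean() - wage[b].mean()

      # reference wage structure: Random Forest fitted on group B
      rf_b = RandomForestRegressor(n_estimators=200, random_state=0).fit(x[b], wage[b])

      explained = rf_b.predict(x[a]).mean() - rf_b.predict(x[b]).mean()  # characteristics part
      unexplained = raw_gap - explained                                   # residual part
      print(raw_gap, explained, unexplained)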
  15. By: Bluhm, Benjamin; Cutura, Jannic
    Abstract: This paper provides an overview of how to use "big data" for economic research. We investigate the performance and ease of use of different Spark applications running on a distributed file system to enable the handling and analysis of data sets which were previously not usable due to their size. More specifically, we explain how to use Spark to (i) explore big data sets which exceed the memory of retail-grade computers and (ii) run typical econometric tasks including microeconometric, panel data and time series regression models which are prohibitively expensive to evaluate on stand-alone machines. By bridging the gap between the abstract concept of Spark and ready-to-use examples which can easily be altered to suit the researcher's needs, we provide economists, and social scientists more generally, with the theory and practice to handle the ever-growing datasets available. The ease of reproducing the examples in this paper makes this guide a useful reference for researchers with a limited background in data handling and distributed computing.
    Keywords: Econometrics,Distributed Computing,Apache Spark
    JEL: C53 C55
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:safewp:266&r=all
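    A minimal PySpark example in the spirit of the paper: read a large CSV from a distributed file system and fit a linear regression with Spark ML (the cluster, file path and column names are placeholders; the paper itself provides ready-to-use examples covering many more tasks):
      from pyspark.sql import SparkSession
      from pyspark.ml.feature import VectorAssembler
      from pyspark.ml.regression import LinearRegression

      spark = SparkSession.builder.appName("econometrics-at-scale").getOrCreate()

      # the data set may exceed a single machine's memory; Spark reads it in partitions
      df = spark.read.csv("hdfs:///data/panel.csv", header=True, inferSchema=True)

      # assemble regressors into the feature vector column Spark ML expects
      features = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
      model = LinearRegression(featuresCol="features", labelCol="y").fit(features.transform(df))

      print(model.coefficients, model.intercept)
      spark.stop()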
  16. By: Daniel Felix Ahelegbey (Università di Pavia); Luis Carvalho (Boston University); Eric D. Kolaczyk (Boston University)
    Abstract: Current understanding holds that financial contagion is driven mainly by system-wide interconnectedness of institutions. A distinction has been made between systematic and idiosyncratic channels of contagion, with shocks transmitted through the latter expected to be substantially more likely to lead to a crisis than through the former. Idiosyncratic connectivity is thought to be driven not simply by obviously shared characteristics among institutions, but more by the latent strategic position of firms in financial markets. We propose a Bayesian hierarchical model for multivariate financial time series that characterizes the interdependence in the idiosyncratic factors of a VAR model via a covariance graphical model whose structure is modeled through a latent position model. We develop an efficient algorithm that samples the network of the idiosyncratic factors and the latent positions underlying the network. We examine the dynamic volatility network and latent positions among 150 publicly listed institutions across the United States and Europe and how they contribute to systemic vulnerabilities and risk transmission.
    Keywords: Bayesian inference, Covariance graph model, Idiosyncratic Contagion Channels, Latent Space Models, Systemic Risk, VAR
    JEL: C11 C15 C51 C55 G01
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0181&r=all
  17. By: Archil Gulisashvili
    Abstract: We introduce time-inhomogeneous stochastic volatility models, in which the volatility is described by a positive function of a Volterra type continuous Gaussian process that may have extremely rough sample paths. The drift function and the volatility function are assumed to be time-dependent and locally $\omega$-continuous for some modulus of continuity $\omega$. The main result obtained in the paper is a sample path large deviation principle for the log-price process in a Gaussian model under very mild restrictions. We apply this result to study the first exit time of the log-price process from an open interval.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.05143&r=all
  18. By: Bal\'azs Gerencs\'er; Mikl\'os R\'asonyi
    Abstract: We establish that a large class of non-Markovian stochastic volatility models converge to an invariant measure as time tends to infinity. Our arguments are based on a novel coupling idea which is of interest in its own right.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.04832&r=all

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.