nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒02‒12
fifteen papers chosen by
Sune Karlsson, Örebro universitet


  1. Counterfactuals in factor models By Jad Beyhum
  2. Covariance Function Estimation for High-Dimensional Functional Time Series with Dual Factor Structures By Chenlei Leng; Degui Li; Hanlin Shang; Yingcun Xia
  3. On the Validity of Classical and Bayesian DSGE-Based Inference By Katerina Petrova
  4. Identification with possibly invalid IVs By Christophe Bruneel-Zupanc; Jad Beyhum
  5. Identification of Dynamic Nonlinear Panel Models under Partial Stationarity By Wayne Yuan Gao; Rui Wang
  6. Bootstrap Diagnostics for Irregular Estimators By Isaiah Andrews; Jesse M. Shapiro
  7. Information based inference in models with set-valued predictions and misspecification By Hiroaki Kaido; Francesca Molinari
  8. Efficient Estimation of Stochastic Parameters: A GLS Approach By Da Huo
  9. Robust Bayesian Method for Refutable Models By Moyu Liao
  10. Model Averaging and Double Machine Learning By Achim Ahrens; Christian B. Hansen; Mark E. Schaffer; Thomas Wiemann
  11. The Stick-Breaking and Ordering Representation of Compositional Data: Copulas and Regression models By Faugeras, Olivier
  12. Efficient Computation of Confidence Sets Using Classification on Equidistributed Grids By Lujie Zhou
  13. SpotV2Net: Multivariate Intraday Spot Volatility Forecasting via Vol-of-Vol-Informed Graph Attention Networks By Alessio Brini; Giacomo Toscano
  14. An econometric analysis of volatility discovery By Fruet Dias, Gustavo; Papailias, Fotis; Scherrer, Cristina
  15. Variable selection in latent regression IRT models via knockoffs: an application to international large-scale assessment in education By Xie, Zilong; Chen, Yunxiao; von Davier, Matthias; Weng, Haolei

  1. By: Jad Beyhum
    Abstract: We study a new model where the potential outcomes, corresponding to the values of a (possibly continuous) treatment, are linked through common factors. The factors can be estimated using a panel of regressors. We propose a procedure to estimate time-specific and unit-specific average marginal effects in this context. Our approach can be used either with high-dimensional time series or with large panels. It allows for treatment effects heterogeneous across time and units and is straightforward to implement since it only relies on principal component analysis and elementary computations. We derive the asymptotic distribution of our estimator of the average marginal effect and highlight its solid finite-sample performance through a simulation exercise. The approach can also be used to estimate average counterfactuals or adapted to an instrumental variables setting, and we discuss these extensions. Finally, we illustrate our novel methodology through an empirical application on income inequality.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.03293&r=ecm
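    A minimal sketch of the PCA step that underlies the procedure (illustrative only; the function names and toy data are ours, not the authors'). It assumes a balanced T x N panel X with an exact r-factor structure plus noise:

      import numpy as np

      def estimate_factors(X, r):
          # Estimate r common factors from a T x N panel by PCA,
          # under the usual normalization F'F/T = I_r.
          T, N = X.shape
          eigvals, eigvecs = np.linalg.eigh(X @ X.T / (T * N))
          F = np.sqrt(T) * eigvecs[:, ::-1][:, :r]  # r leading eigenvectors
          L = X.T @ F / T                           # loadings by least squares
          return F, L

      # Toy panel: two factors plus idiosyncratic noise
      rng = np.random.default_rng(0)
      T, N, r = 200, 100, 2
      F0 = rng.standard_normal((T, r))
      L0 = rng.standard_normal((N, r))
      X = F0 @ L0.T + rng.standard_normal((T, N))
      F_hat, L_hat = estimate_factors(X, r)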
  2. By: Chenlei Leng; Degui Li; Hanlin Shang; Yingcun Xia
    Abstract: We propose a flexible dual functional factor model for modelling high-dimensional functional time series. In this model, a high-dimensional fully functional factor parametrisation is imposed on the observed functional processes, whereas a low-dimensional version (via series approximation) is assumed for the latent functional factors. We extend the classic principal component analysis technique for the estimation of a low-rank structure to the estimation of a large covariance matrix of random functions that satisfies a notion of (approximate) functional "low-rank plus sparse" structure; and generalise the matrix shrinkage method to functional shrinkage in order to estimate the sparse structure of functional idiosyncratic components. Under appropriate regularity conditions, we derive the large sample theory of the developed estimators, including the consistency of the estimated factors and functional factor loadings and the convergence rates of the estimated matrices of covariance functions measured by various (functional) matrix norms. Consistent selection of the number of factors and a data-driven rule to choose the shrinkage parameter are discussed. Simulation and empirical studies are provided to demonstrate the finite-sample performance of the developed model and estimation methodology.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.05784&r=ecm
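    The "low-rank plus sparse" idea can be illustrated in the simpler multivariate (non-functional) case: take the leading eigenpairs as the factor part and soft-threshold the remainder, a stand-in for the paper's functional shrinkage. A hedged sketch with hypothetical names:

      import numpy as np

      def low_rank_plus_sparse_cov(X, r, tau):
          # Rank-r factor part plus soft-thresholded idiosyncratic part.
          S = np.cov(X, rowvar=False)
          vals, vecs = np.linalg.eigh(S)
          low_rank = (vecs[:, -r:] * vals[-r:]) @ vecs[:, -r:].T
          resid = S - low_rank
          sparse = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)
          np.fill_diagonal(sparse, np.diag(resid))  # never shrink variances
          return low_rank + sparse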
  3. By: Katerina Petrova
    Abstract: This paper studies large sample classical and Bayesian inference in a prototypical linear DSGE model and demonstrates that inference on the structural parameters based on a Gaussian likelihood is unaffected by departures from Gaussianity of the structural shocks. This surprising result is due to a cancellation in the asymptotic variance resulting in a generalized information equality for the block corresponding to the structural parameters. The underlying reason for the cancellation is the certainty equivalence property of the linear rational expectations model. The main implication of this result is that classical and Bayesian Gaussian inference achieve a semi-parametric efficiency bound and there is no need for a “sandwich-form” correction of the asymptotic variance of the structural parameters. Consequently, MLE-based confidence intervals and Bayesian credible sets of the deep parameters based on a Gaussian likelihood have correct asymptotic coverage even when the structural shocks are non-Gaussian. On the other hand, inference on the reduced-form parameters characterizing the volatility of the shocks is invalid whenever the structural shocks have a non-Gaussian density, and the paper proposes a simple Metropolis-within-Gibbs algorithm that achieves correct large sample inference for the volatility parameters.
    Keywords: DSGE models; generalized information equality; sandwich form covariance
    JEL: C11 C12 C22
    Date: 2024–01–01
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:97624&r=ecm
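    The information-equality result can be seen in miniature in a linear model estimated by Gaussian quasi-ML with non-Gaussian shocks: for the slope (the analogue of the structural block), the sandwich variance H^{-1} J H^{-1} collapses to H^{-1}, while for the variance parameter it does not. A toy check (our construction, not the paper's DSGE setting):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      x = rng.standard_normal(n)
      u = rng.laplace(size=n)              # non-Gaussian structural shock
      y = 1.5 * x + u
      beta = (x @ y) / (x @ x)             # Gaussian QMLE of the slope
      s2 = np.mean((y - beta * x) ** 2)
      H_inv = s2 / (x @ x)                 # inverse-Hessian variance
      J = np.sum(x**2 * (y - beta * x)**2) / s2**2
      sandwich = H_inv * J * H_inv
      print(H_inv, sandwich)               # nearly identical for the slope;
      # for s2 itself the two differ, since E[u^4] enters the sandwich.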
  4. By: Christophe Bruneel-Zupanc; Jad Beyhum
    Abstract: This paper proposes a novel identification strategy relying on quasi-instrumental variables (quasi-IVs). A quasi-IV is a relevant but possibly invalid IV because it is not completely exogenous and/or excluded. We show that a variety of models with discrete or continuous endogenous treatment, which are usually identified with an IV - quantile models with rank invariance, additive models with homogeneous treatment effects, and local average treatment effect models - can be identified under the joint relevance of two complementary quasi-IVs instead. To achieve identification, we complement one excluded but possibly endogenous quasi-IV (e.g., “relevant proxies” such as previous treatment choice) with one exogenous (conditional on the excluded quasi-IV) but possibly included quasi-IV (e.g., random assignment or exogenous market shocks). In practice, our identification strategy should be attractive since complementary quasi-IVs should be easier to find than standard IVs. Our approach also holds if either of the two quasi-IVs turns out to be a valid IV.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.03990&r=ecm
  5. By: Wayne Yuan Gao; Rui Wang
    Abstract: This paper studies identification for a wide range of nonlinear panel data models, including binary choice, ordered response, and other types of limited dependent variable models. Our approach accommodates dynamic models with any number of lagged dependent variables as well as other types of (potentially contemporaneous) endogeneity. Our identification strategy relies on a partial stationarity condition, which allows not only for an unknown distribution of errors but also for temporal dependencies in errors. We derive partial identification results under flexible model specifications and provide additional support conditions for point identification. We demonstrate the robust finite-sample performance of our approach using Monte Carlo simulations, with static and dynamic ordered choice models as illustrative examples.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.00264&r=ecm
  6. By: Isaiah Andrews; Jesse M. Shapiro
    Abstract: Empirical researchers frequently rely on normal approximations in order to summarize and communicate uncertainty about their findings to their scientific audience. When such approximations are unreliable, they can lead the audience to make misguided decisions. We propose to measure the failure of the conventional normal approximation for a given estimator by the total variation distance between a bootstrap distribution and the normal distribution parameterized by the point estimate and standard error. For a wide class of decision problems and a class of uninformative priors, we show that a multiple of the total variation distance bounds the mistakes which result from relying on the conventional normal approximation. In a sample of recent empirical articles that use a bootstrap for inference, we find that the conventional normal approximation is often poor. We suggest and illustrate convenient alternative reports for such settings.
    JEL: C18 C44 D81
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32038&r=ecm
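    The diagnostic compares the bootstrap distribution of an estimator with N(theta_hat, se^2) in total variation. One crude way to approximate that distance is by binning; the construction below is our simplification, not necessarily the paper's exact implementation:

      import numpy as np
      from scipy import stats

      def binned_tv_distance(boot, theta_hat, se, n_bins=50):
          # TV distance between binned bootstrap draws and the matching
          # binned normal distribution, plus the normal tail mass that
          # falls outside the range of the draws.
          edges = np.linspace(boot.min(), boot.max(), n_bins + 1)
          boot_mass = np.histogram(boot, bins=edges)[0] / len(boot)
          norm_mass = np.diff(stats.norm.cdf(edges, loc=theta_hat, scale=se))
          tail = 1.0 - norm_mass.sum()
          return 0.5 * (np.abs(boot_mass - norm_mass).sum() + tail)

      # Skewed stand-in for bootstrap draws: the normal approximation is poor
      draws = np.random.default_rng(2).chisquare(3, size=5_000)
      print(binned_tv_distance(draws, draws.mean(), draws.std()))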
  7. By: Hiroaki Kaido; Francesca Molinari
    Abstract: This paper proposes an information-based inference method for partially identified parameters in incomplete models that is valid both when the model is correctly specified and when it is misspecified. Key features of the method are: (i) it is based on minimizing a suitably defined Kullback-Leibler information criterion that accounts for incompleteness of the model and delivers a non-empty pseudotrue set; (ii) it is computationally tractable; (iii) its implementation is the same for both correctly and incorrectly specified models; (iv) it exploits all information provided by variation in discrete and continuous covariates; (v) it relies on Rao’s score statistic, which is shown to be asymptotically pivotal.
    Date: 2024–01–29
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:02/24&r=ecm
  8. By: Da Huo
    Abstract: This thesis presents a novel rolling GLS-based model to improve the precision of time-varying parameter estimates in dynamic linear models. Through rigorous simulations, the rolling GLS model exhibits enhanced accuracy in scenarios with smaller sample sizes and maintains its efficacy when the normality assumption is relaxed, distinguishing it from traditional approaches such as the Kalman filter. Furthermore, the thesis extends the model to tackle more complex stochastic structures and validates its effectiveness through practical applications to real-world financial data, such as inflation risk premium estimation. The research culminates in a robust tool for financial econometrics, enhancing the reliability of financial analyses and predictions.
    Keywords: Time Series Analysis, Dynamic Linear Model, Stochastic Parameters, Least Squares
    JEL: C13 C22 C32 C58 G11 G12
    Date: 2024–01–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:119731&r=ecm
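    A sketch of the rolling-GLS idea for one common error structure, AR(1) disturbances handled by quasi-differencing within each window (our simplification; the thesis's model and estimator may differ):

      import numpy as np

      def rolling_gls_ar1(y, X, window):
          # Within each window: OLS, estimate rho from residuals,
          # quasi-difference, and re-estimate (feasible GLS).
          T, k = X.shape
          betas = np.full((T, k), np.nan)
          for t in range(window, T + 1):
              Xw, yw = X[t - window:t], y[t - window:t]
              b = np.linalg.lstsq(Xw, yw, rcond=None)[0]
              e = yw - Xw @ b
              rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
              ys, Xs = yw[1:] - rho * yw[:-1], Xw[1:] - rho * Xw[:-1]
              betas[t - 1] = np.linalg.lstsq(Xs, ys, rcond=None)[0]
          return betas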
  9. By: Moyu Liao
    Abstract: We propose a robust Bayesian method for economic models that can be rejected under some data distributions. The econometrician starts with a structural assumption that can be written as the intersection of several assumptions, where the joint assumption is refutable. To avoid model rejection, the econometrician first takes a stance on which assumption $j$ is likely to be violated and considers a measurement of the degree of violation of this assumption $j$. She then considers a (marginal) prior belief $\pi_{m_j}$ on the degree of violation: she considers a class of prior distributions $\pi_s$ on all economic structures such that all $\pi_s$ have the same marginal distribution $\pi_m$. Compared to the standard nonparametric Bayesian method, which puts a single prior on all economic structures, the robust Bayesian method imposes a single marginal prior distribution on the degree of violation. As a result, the robust Bayesian method allows the econometrician to take a stance only on the likelihood of violation of assumption $j$. Compared to the frequentist approach of relaxing the refutable assumption, the robust Bayesian method is transparent about the econometrician's stance in choosing models. We also show that many frequentist approaches to relaxing the refutable assumption are equivalent to particular choices of robust Bayesian prior classes. We use the local average treatment effect (LATE) in the potential outcome framework as the leading illustrative example.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.04512&r=ecm
  10. By: Achim Ahrens; Christian B. Hansen; Mark E. Schaffer; Thomas Wiemann
    Abstract: This paper discusses pairing double/debiased machine learning (DDML) with stacking, a model averaging method for combining multiple candidate learners, to estimate structural parameters. We introduce two new stacking approaches for DDML: short-stacking exploits the cross-fitting step of DDML to substantially reduce the computational burden and pooled stacking enforces common stacking weights over cross-fitting folds. Using calibrated simulation studies and two applications estimating gender gaps in citations and wages, we show that DDML with stacking is more robust to partially unknown functional forms than common alternative approaches based on single pre-selected learners. We provide Stata and R software implementing our proposals.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.01645&r=ecm
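    The short-stacking idea can be sketched in a few lines: collect out-of-fold predictions from each candidate learner across the cross-fitting folds, then choose one set of stacking weights on the pooled predictions. A hedged Python sketch (the paper's software is in Stata and R; the learner choices and NNLS weighting here are ours):

      import numpy as np
      from sklearn.model_selection import KFold
      from sklearn.linear_model import LinearRegression, Lasso
      from sklearn.ensemble import RandomForestRegressor
      from scipy.optimize import nnls

      def short_stack(y, X, learners, n_splits=5):
          # Out-of-fold predictions for every learner, then a single
          # non-negative least squares weighting over all folds.
          P = np.zeros((len(y), len(learners)))
          for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
              for j, make in enumerate(learners):
                  P[te, j] = make().fit(X[tr], y[tr]).predict(X[te])
          w, _ = nnls(P, y)
          w = w / w.sum()                  # normalize to convex weights
          return P @ w, w

      learners = [LinearRegression,
                  lambda: Lasso(alpha=0.1),
                  lambda: RandomForestRegressor(n_estimators=100, random_state=0)]
      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 5))
      y = X[:, 0] ** 2 + rng.standard_normal(500)
      y_hat, w = short_stack(y, X, learners)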
  11. By: Faugeras, Olivier
    Abstract: Compositional Data (CoDa) is usually viewed as data on the simplex and is studied via a log-ratio analysis, following the classical work of J. Aitchison (1986). We propose an alternative view of CoDa as stick-breaking processes. The first stick-breaking approach gives rise to a view of CoDa as ordered statistics, from which we can derive “stick-ordered” distributions. The second approach is based on a rescaled stick-breaking transformation and gives rise to a geometric view of CoDa as a free unit cube. The latter allows us to introduce copula and regression models, which are useful for studying the internal or external dependence of CoDa. We establish connections with other topics in statistics, such as i) spacings and order statistics, ii) Bayesian nonparametrics and Dirichlet distributions, iii) neutrality, and iv) mixability.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:129018&r=ecm
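    The stick-breaking transformation is easy to make concrete: each coordinate records the fraction of the remaining stick taken by the corresponding part, mapping a D-part composition to a point in the (D-1)-dimensional unit cube. A plain (unrescaled) sketch of the transform and its inverse:

      import numpy as np

      def stick_breaking(p):
          # Composition p (p_i > 0, sum = 1) -> point in the unit cube.
          p = np.asarray(p, dtype=float)
          remaining = 1.0 - np.concatenate(([0.0], np.cumsum(p[:-1])))
          return p[:-1] / remaining[:-1]

      def stick_breaking_inverse(u):
          # Rebuild the composition by breaking a unit stick.
          parts, stick = [], 1.0
          for uk in u:
              parts.append(uk * stick)
              stick *= 1.0 - uk
          parts.append(stick)
          return np.array(parts)

      p = np.array([0.5, 0.3, 0.2])
      u = stick_breaking(p)                       # [0.5, 0.6]
      assert np.allclose(stick_breaking_inverse(u), p)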
  12. By: Lujie Zhou
    Abstract: Economic models produce moment inequalities, which can be used to form tests of the true parameters. Confidence sets (CS) of the true parameters are derived by inverting these tests. However, they often lack analytical expressions, necessitating a grid search to obtain the CS numerically by retaining the grid points that pass the test. When the statistic is not asymptotically pivotal, constructing the critical value for each grid point in the parameter space adds to the computational burden. In this paper, we convert the computational issue into a classification problem by using a support vector machine (SVM) classifier. Its decision function provides a faster and more systematic way of dividing the parameter space into two regions: inside vs. outside of the confidence set. We label points in the CS as 1 and those outside as -1. Researchers can train the SVM classifier on a grid of manageable size and use it to determine whether points on denser grids are in the CS. We establish conditions on the grid under which a tuning exists that asymptotically reproduces the test: in the limit, a point is classified as belonging to the confidence set if and only if it is labeled as 1 by the SVM.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.01804&r=ecm
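    The classification step translates directly into code: label a coarse grid by running the (expensive) test at each point, fit an SVM, and let its decision function classify arbitrarily dense grids. In this sketch, run_test is a hypothetical placeholder with a toy circular confidence set:

      import numpy as np
      from sklearn.svm import SVC

      def run_test(theta):                 # placeholder for test inversion
          return 1 if np.sum(theta**2) <= 1.0 else -1

      rng = np.random.default_rng(3)
      coarse = rng.uniform(-2, 2, size=(500, 2))
      labels = np.array([run_test(t) for t in coarse])

      clf = SVC(kernel="rbf", C=10.0).fit(coarse, labels)

      dense = rng.uniform(-2, 2, size=(100_000, 2))
      in_cs = clf.predict(dense) == 1      # no further test evaluations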
  13. By: Alessio Brini; Giacomo Toscano
    Abstract: This paper introduces SpotV2Net, a multivariate intraday spot volatility forecasting model based on a Graph Attention Network architecture. SpotV2Net represents financial assets as nodes within a graph and includes non-parametric high-frequency Fourier estimates of the spot volatility and co-volatility as node features. Further, it incorporates Fourier estimates of the spot volatility of volatility and co-volatility of volatility as features for node edges. We test the forecasting accuracy of SpotV2Net in an extensive empirical exercise, conducted with high-frequency prices of the components of the Dow Jones Industrial Average index. The results we obtain suggest that SpotV2Net shows improved accuracy compared to alternative econometric and machine-learning-based models. Further, our results show that SpotV2Net maintains accuracy when performing intraday multi-step forecasts. To interpret the forecasts produced by SpotV2Net, we employ GNNExplainer, a model-agnostic interpretability tool, and thereby uncover subgraphs that are critical to a node's predictions.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.06249&r=ecm
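    The architecture's core ingredient, attention layers that consume edge features, can be sketched with PyTorch Geometric's GATConv (dimensions and wiring here are illustrative assumptions, not the published SpotV2Net configuration):

      import torch
      from torch_geometric.nn import GATConv

      class VolGATSketch(torch.nn.Module):
          # Nodes: assets with spot (co)volatility features.
          # Edges: vol-of-vol features, passed through edge_dim.
          def __init__(self, node_dim=4, edge_dim=2, hidden=16, heads=4):
              super().__init__()
              self.g1 = GATConv(node_dim, hidden, heads=heads, edge_dim=edge_dim)
              self.g2 = GATConv(hidden * heads, 1, heads=1, edge_dim=edge_dim)

          def forward(self, x, edge_index, edge_attr):
              h = torch.relu(self.g1(x, edge_index, edge_attr))
              return self.g2(h, edge_index, edge_attr)  # one forecast per node

      x = torch.randn(3, 4)                          # 3 assets, 4 node features
      edge_index = torch.tensor([[0, 0, 1, 1, 2, 2],
                                 [1, 2, 0, 2, 0, 1]])
      edge_attr = torch.randn(6, 2)                  # vol-of-vol edge features
      pred = VolGATSketch()(x, edge_index, edge_attr)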
  14. By: Fruet Dias, Gustavo; Papailias, Fotis; Scherrer, Cristina
    Abstract: We investigate information processing in the stochastic process driving a stock’s volatility (volatility discovery). We apply fractional cointegration techniques to decompose the estimates of the market-specific integrated variances into an estimate of the common integrated variance of the efficient price and a transitory component. The market weights on the common integrated variance of the efficient price are the volatility discovery measures. We relate the volatility discovery measure to the price discovery framework and formally show their roles in the identification of the integrated variance of the efficient price. We establish the limiting distribution of the volatility discovery measures by resorting to both long-span and in-fill asymptotics. The empirical application is in line with our theoretical results, as it reveals that trading venues incorporate new information into the stochastic volatility process in an individual manner and that the volatility discovery analysis identifies an information process distinct from the one identified by the price discovery analysis.
    Keywords: double asymptotics; fractionally cointegrated vector autoregressive model; high-frequency data; long memory; market microstructure; price discovery; realized measures
    JEL: C1 J1
    Date: 2023–12–15
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:121363&r=ecm
  15. By: Xie, Zilong; Chen, Yunxiao; von Davier, Matthias; Weng, Haolei
    Abstract: International large-scale assessments (ILSAs) play an important role in educational research and policy making. They collect valuable data on education quality and performance development across many education systems, giving countries the opportunity to share techniques, organizational structures, and policies that have proven efficient and successful. To gain insights from ILSA data, we identify non-cognitive variables associated with students’ academic performance. This problem poses three analytical challenges: 1) academic performance is measured by cognitive items under a matrix sampling design; 2) there are many missing values in the non-cognitive variables; and 3) multiple comparisons arise due to the large number of non-cognitive variables. We consider an application to the Programme for International Student Assessment (PISA), aiming to identify non-cognitive variables associated with students’ performance in science. We formulate it as a variable selection problem under a general latent variable model framework and further propose a knockoff method that conducts variable selection with a controlled error rate for false selections.
    Keywords: Model-X knockoffs; missing data; latent variables; variable selection; international large-scale assessment; OUP deal
    JEL: C1
    Date: 2023–12–12
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:120812&r=ecm
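    The knockoff machinery behind the proposal can be illustrated in the simplest model-X case, Gaussian covariates with known covariance: build equicorrelated knockoff copies, compute lasso coefficient-difference statistics, and apply the knockoff+ threshold. This sketch follows the generic construction of Candès et al. (2018), not the paper's latent-variable extension:

      import numpy as np
      from sklearn.linear_model import Lasso

      def gaussian_knockoffs(X, Sigma, rng):
          # Equicorrelated model-X knockoffs for X ~ N(0, Sigma).
          p = Sigma.shape[0]
          s = min(2 * np.linalg.eigvalsh(Sigma)[0], 1.0) * np.ones(p)
          Sinv_s = np.linalg.solve(Sigma, np.diag(s))   # Sigma^{-1} diag(s)
          mu = X - X @ Sinv_s
          V = 2 * np.diag(s) - np.diag(s) @ Sinv_s
          L = np.linalg.cholesky(V + 1e-10 * np.eye(p))
          return mu + rng.standard_normal(X.shape) @ L.T

      def knockoff_select(X, Xk, y, q=0.2, alpha=0.05):
          # Lasso coefficient-difference statistics, knockoff+ threshold.
          p = X.shape[1]
          beta = Lasso(alpha=alpha).fit(np.hstack([X, Xk]), y).coef_
          W = np.abs(beta[:p]) - np.abs(beta[p:])
          for t in np.sort(np.abs(W[W != 0])):
              if (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1) <= q:
                  return np.where(W >= t)[0]   # FDR controlled at level q
          return np.array([], dtype=int)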

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.