nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒07‒08
eighteen papers chosen by
Sune Karlsson, Örebro universitet


  1. Bootstrap Inference on a Factor Model Based Average Treatment Effects Estimator By Luya Wang; Jeffrey S. Racine; Qiaoyu Wang
  2. Bayesian nonparametric methods for macroeconomic forecasting By Massimiliano MARCELLINO; Michael PFARRHOFER
  3. On the Identifying Power of Monotonicity for Average Treatment Effects By Yuehao Bai; Shunzhuang Huang; Sarah Moon; Azeem M. Shaikh; Edward J. Vytlacil
  4. On the modelling and prediction of high-dimensional functional time series By Jinyuan Chang; Qin Fang; Xinghao Qiao; Qiwei Yao
  5. Estimating treatment-effect heterogeneity across sites in multi-site randomized experiments with imperfect compliance By Clément de Chaisemartin; Antoine Deeb
  6. Identifying the Cumulative Causal Effect of a Non-Binary Treatment from a Binary Instrument By Vedant Vohra; Jacob Goldin
  7. LaLonde (1986) after Nearly Four Decades: Lessons Learned By Guido Imbens; Yiqing Xu
  8. Locally Adaptive Online Functional Data Analysis By Valentin Patilea; Jeffrey S. Racine
  9. Distributional Refinement Network: Distributional Forecasting via Deep Learning By Benjamin Avanzi; Eric Dong; Patrick J. Laub; Bernard Wong
  10. Risky Oil: It's All in the Tails By Christiane Baumeister; Florian Huber; Massimiliano Marcellino
  11. Dynamic Latent-Factor Model with High-Dimensional Asset Characteristics By Adam Baybutt
  12. Semi-nonparametric models of multidimensional matching: an optimal transport approach By Dongwoo Kim; Young Jun Lee
  13. Some variation of COBRA in sequential learning setup By Aryan Bhambu; Arabin Kumar Dey
  14. Scenario-based Quantile Connectedness of the U.S. Interbank Liquidity Risk Network By Tomohiro Ando; Jushan Bai; Lina Lu; Cindy M. Vojtech
  15. Factor Selection and Structural Breaks By Siddhartha Chib; Simon C. Smith
  16. On the Reliability of Estimated Taylor Rules for Monetary Policy Analysis By Joshua Brault; Qazi Haque; Louis Phaneuf
  17. The Need for Equivalence Testing in Economics By Fitzgerald, Jack
  18. The Method of Moments for Multivariate Random Sums By Javed, Farrukh; Loperfido, Nicola; Mazur, Stepan

  1. By: Luya Wang; Jeffrey S. Racine; Qiaoyu Wang
    Abstract: We propose a novel bootstrap procedure for conducting inference for factor model based average treatment effects estimators. Our method overcomes bias inherent to existing bootstrap procedures and substantially improves upon existing large sample normal inference theory in small sample settings. The finite sample improvements arising from the use of our proposed procedure are illustrated via a set of Monte Carlo simulations, and formal justification for the procedure is outlined.
    Keywords: finite sample bias; average treatment effects; bootstrap inference; factor model
    JEL: C15 C21 C23
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:mcm:deptwp:2024-03&r=
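    Sketch: the bootstrap-inference idea in its most generic form, shown below for a plain difference-in-means estimator with a percentile interval; the paper's factor-model estimator and its bias-correcting bootstrap are not reproduced here.
```python
# Generic nonparametric (pairs) bootstrap for a treatment-effect estimator.
# Illustrative only: a plain difference-in-means estimator and a percentile
# interval, not the factor-model estimator or bias correction from the paper.
import numpy as np

rng = np.random.default_rng(0)

def ate_hat(y, d):
    """Difference in means between treated (d == 1) and control (d == 0) units."""
    return y[d == 1].mean() - y[d == 0].mean()

def bootstrap_ci(y, d, n_boot=999, alpha=0.05):
    """Percentile bootstrap confidence interval for the estimated effect."""
    n = len(y)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample units with replacement
        stats[b] = ate_hat(y[idx], d[idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return ate_hat(y, d), (lo, hi)

# Simulated example with a true effect of 1.0
d = rng.integers(0, 2, size=200)
y = 1.0 * d + rng.normal(size=200)
print(bootstrap_ci(y, d))
```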
  2. By: Massimiliano MARCELLINO; Michael PFARRHOFER
    Abstract: We review specification and estimation of multivariate Bayesian nonparametric models for forecasting (possibly large sets of) macroeconomic and financial variables. The focus is on Bayesian Additive Regression Trees and Gaussian Processes. We then apply various versions of these models for point, density and tail forecasting using datasets for the euro area and the US. The performance is compared with that of several variants of Bayesian VARs to assess the relevance of accounting for general forms of nonlinearities. We find that medium-scale linear VARs with stochastic volatility are tough benchmarks to beat. Some gains in predictive accuracy arise for nonparametric approaches, most notably for short-run forecasts of unemployment and longer-run predictions of inflation, and during recessionary or otherwise non-standard economic episodes.
    Keywords: United States, euro area, Bayesian Additive Regression Trees, Gaussian Processes, multivariate time series analysis, structural breaks
    JEL: C11 C32 C53
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp24224&r=
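    Sketch: one nonparametric building block in isolation, a univariate Gaussian-process forecast on lagged values via scikit-learn; the paper's multivariate specifications, BART variants and stochastic volatility are not attempted here.
```python
# Hypothetical univariate Gaussian-process forecasting sketch. The paper's models
# are multivariate and include stochastic volatility; both are omitted here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=120))            # stand-in for a macro series
p = 4                                          # number of autoregressive lags

# Lag matrix: predict y_t from (y_{t-1}, ..., y_{t-p})
X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
target = y[p:]

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, target)

# One-step-ahead point and interval forecast from the last p observations
x_new = y[-p:][::-1].reshape(1, -1)            # most recent value first
mean, sd = gp.predict(x_new, return_std=True)
print(f"forecast {mean[0]:.2f} +/- {1.64 * sd[0]:.2f} (approx. 90% band)")
```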
  3. By: Yuehao Bai; Shunzhuang Huang; Sarah Moon; Azeem M. Shaikh; Edward J. Vytlacil
    Abstract: In the context of a binary outcome, treatment, and instrument, Balke and Pearl (1993, 1997) establish that adding monotonicity to the instrument exogeneity assumption does not decrease the identified sets for average potential outcomes and average treatment effect parameters when those assumptions are consistent with the distribution of the observable data. We show that the same results hold in the broader context of multi-valued outcome, treatment, and instrument. An important example of such a setting is a multi-arm randomized controlled trial with noncompliance.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.14104&r=
  4. By: Jinyuan Chang; Qin Fang; Xinghao Qiao; Qiwei Yao
    Abstract: We propose a two-step procedure to model and predict high-dimensional functional time series, where the number of function-valued time series $p$ is large in relation to the length of time series $n$. Our first step performs an eigenanalysis of a positive definite matrix, which leads to a one-to-one linear transformation for the original high-dimensional functional time series, and the transformed curve series can be segmented into several groups such that any two subseries from any two different groups are uncorrelated both contemporaneously and serially. Consequently in our second step those groups are handled separately without the information loss on the overall linear dynamic structure. The second step is devoted to establishing a finite-dimensional dynamical structure for all the transformed functional time series within each group. Furthermore the finite-dimensional structure is represented by that of a vector time series. Modelling and forecasting for the original high-dimensional functional time series are realized via those for the vector time series in all the groups. We investigate the theoretical properties of our proposed methods, and illustrate the finite-sample performance through both extensive simulation and two real datasets.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.00700&r=
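    Sketch: a finite-dimensional analogue of the first step, shown for a vector (not functional) time series: eigenanalysis of a positive-definite matrix built from lagged autocovariances yields a linear transformation whose components can then be screened for cross-group correlation. Details differ from the paper's functional construction.
```python
# Eigenanalysis step for a *vector* time series as a stand-in for the functional
# case: accumulate lagged autocovariances into a positive semi-definite matrix and
# rotate the series with its eigenvectors before grouping the transformed components.
import numpy as np

rng = np.random.default_rng(2)
n, p, k0 = 400, 6, 3                       # sample size, dimension, lags used
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))   # toy correlated series

Xc = X - X.mean(axis=0)
W = np.zeros((p, p))
for k in range(k0 + 1):
    S_k = Xc[k:].T @ Xc[:n - k] / (n - k)  # lag-k autocovariance matrix
    W += S_k @ S_k.T                       # symmetric, positive semi-definite sum

eigval, eigvec = np.linalg.eigh(W)
Z = Xc @ eigvec                            # transformed series, candidates for grouping
print(np.round(eigval[::-1], 2))           # eigenvalues in descending order
```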
  5. By: Clément de Chaisemartin; Antoine Deeb
    Abstract: We consider multi-site randomized controlled trials with a large number of small sites and imperfect compliance, conducted in non-random convenience samples in each site. We show that an Empirical-Bayes (EB) estimator can be used to estimate a lower bound of the variance of intention-to-treat (ITT) effects across sites. We also propose bounds for the coefficient from a regression of site-level ITTs on sites' control-group outcome. Turning to local average treatment effects (LATEs), the EB estimator cannot be used to estimate their variance, because site-level LATE estimators are biased. Instead, we propose two testable assumptions under which the LATEs' variance can be written as a function of sites' ITT and first-stage (FS) effects, thus allowing us to use an EB estimator leveraging only unbiased ITT and FS estimators. We revisit Behaghel et al. (2014), who study the effect of counselling programs on job seekers' job-finding rate, in more than 200 job placement agencies in France. We find considerable ITT heterogeneity, and even more LATE heterogeneity: our lower bounds on ITTs' (resp. LATEs') standard deviation are more than three (resp. four) times larger than the average ITT (resp. LATE) across sites. Sites with a lower job-finding rate in the control group have larger ITT effects.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.17254&r=
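    Sketch: the variance-decomposition logic behind an Empirical-Bayes lower bound on cross-site effect dispersion, in a stylised form (unbiased site estimates, known standard errors); the paper's exact estimator and its LATE extension differ.
```python
# Stylised Empirical-Bayes style estimate of the cross-site variance of ITT effects:
# the dispersion of noisy site-level estimates overstates the true dispersion by the
# average sampling variance, so subtract it off. Assumed form, illustration only.
import numpy as np

rng = np.random.default_rng(3)
S = 200
true_itt = rng.normal(loc=0.05, scale=0.10, size=S)   # true site-level ITT effects
se = rng.uniform(0.03, 0.08, size=S)                  # site-level standard errors
itt_hat = true_itt + rng.normal(scale=se)             # unbiased but noisy estimates

var_between = np.var(itt_hat, ddof=1)                 # dispersion of the estimates
var_sampling = np.mean(se ** 2)                       # average sampling variance
sigma2_hat = max(var_between - var_sampling, 0.0)     # estimated cross-site variance

print(f"estimated sd of site ITTs: {np.sqrt(sigma2_hat):.3f} (true 0.100)")
```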
  6. By: Vedant Vohra; Jacob Goldin
    Abstract: The effect of a treatment may depend on the intensity with which it is administered. We study identification of ordered treatment effects with a binary instrument, focusing on the effect of moving from the treatment’s minimum to maximum intensity. With arbitrary heterogeneity across units, standard IV assumptions (Angrist and Imbens, 1995) do not constrain this parameter, even among compliers. We consider a range of additional assumptions and show how they can deliver sharp, informative bounds. We illustrate our approach with two applications, involving the effect of (1) health insurance on emergency department usage, and (2) attendance in an after-school program on student learning.
    JEL: C26
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32425&r=
  7. By: Guido Imbens; Yiqing Xu
    Abstract: In 1986, Robert LaLonde published an article that compared nonexperimental estimates to experimental benchmarks (LaLonde, 1986). He concluded that the nonexperimental methods at the time could not systematically replicate experimental benchmarks, casting doubt on the credibility of these methods. Following LaLonde's critical assessment, there have been significant methodological advances and practical changes, including (i) an emphasis on estimators based on unconfoundedness, (ii) a focus on the importance of overlap in covariate distributions, (iii) the introduction of propensity score-based methods leading to doubly robust estimators, (iv) a greater emphasis on validation exercises to bolster research credibility, and (v) methods for estimating and exploiting treatment effect heterogeneity. To demonstrate the practical lessons from these advances, we reexamine the LaLonde data and the Imbens-Rubin-Sacerdote lottery data. We show that modern methods, when applied in contexts with significant covariate overlap, yield robust estimates for the adjusted differences between the treatment and control groups. However, this does not mean that these estimates are valid. To assess their credibility, validation exercises (such as placebo tests) are essential, whereas goodness of fit tests alone are inadequate. Our findings highlight the importance of closely examining the assignment process, carefully inspecting overlap, and conducting validation exercises when analyzing causal effects with nonexperimental data.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.00827&r=
  8. By: Valentin Patilea; Jeffrey S. Racine
    Abstract: We consider the problem of building adaptive, rate optimal estimators for the mean and covariance functions of random curves in the context of streaming data. In general, functional data analysis requires nonparametric smoothing of curves observed at a discrete set of design points, which may be measured with error. However, classical nonparametric smoothing methods (e.g., kernels, splines, etc.) assume that the degree of smoothness is known. In many applications functional data could be irregular, even perhaps nowhere differentiable. Moreover, the (ir)regularity of the curves could vary across their domain. We contribute to the literature by providing estimators and inference procedures that use an iterative plug-in estimator of ‘local regularity’ which delivers a computationally attractive, recursive, online updating method that is well-suited to streaming data. Theoretical support and Monte Carlo simulation evidence are provided, and code in the R language is available for the interested reader.
    Keywords: Adaptive estimator; Covariance function; Hölder exponent; Optimal smoothing
    JEL: C15 C21 C23
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:mcm:deptwp:2024-04&r=
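    Sketch: a generic local-regularity (Hölder exponent) estimate from mean squared increments at two spacings, assuming Brownian sample paths observed without error; the paper's iterative plug-in, online-updating procedure is considerably more involved.
```python
# Local-regularity estimate from discretely observed curves: compare mean squared
# increments at spacings dt and 2*dt around a point. Generic device from the
# functional-data literature, not the authors' adaptive online estimator.
import numpy as np

rng = np.random.default_rng(4)
n_curves, m = 500, 201
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
# Brownian sample paths: true Hölder exponent H = 0.5 everywhere on (0, 1)
X = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_curves, m)), axis=1)

j = m // 2                                            # regularity near the midpoint
theta_1 = np.mean((X[:, j + 1] - X[:, j]) ** 2)       # spacing dt
theta_2 = np.mean((X[:, j + 2] - X[:, j]) ** 2)       # spacing 2 * dt
H_hat = (np.log(theta_2) - np.log(theta_1)) / (2 * np.log(2))
print(f"estimated local regularity: {H_hat:.2f} (true 0.50)")
```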
  9. By: Benjamin Avanzi; Eric Dong; Patrick J. Laub; Bernard Wong
    Abstract: A key task in actuarial modelling involves modelling the distributional properties of losses. Classic (distributional) regression approaches like Generalized Linear Models (GLMs; Nelder and Wedderburn, 1972) are commonly used, but challenges remain in developing models that can (i) allow covariates to flexibly impact different aspects of the conditional distribution, (ii) integrate developments in machine learning and AI to maximise the predictive power while considering (i), and, (iii) maintain a level of interpretability in the model to enhance trust in the model and its outputs, which is often compromised in efforts pursuing (i) and (ii). We tackle this problem by proposing a Distributional Refinement Network (DRN), which combines an inherently interpretable baseline model (such as GLMs) with a flexible neural network, a modified Deep Distribution Regression (DDR; Li et al., 2019) method. Inspired by the Combined Actuarial Neural Network (CANN; Schelldorfer and Wüthrich, 2019), our approach flexibly refines the entire baseline distribution. As a result, the DRN captures varying effects of features across all quantiles, improving predictive performance while maintaining adequate interpretability. Using both synthetic and real-world data, we demonstrate the DRN's superior distributional forecasting capacity. The DRN has the potential to be a powerful distributional regression model in actuarial science and beyond.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.00998&r=
  10. By: Christiane Baumeister; Florian Huber; Massimiliano Marcellino
    Abstract: The substantial fluctuations in oil prices in the wake of the COVID-19 pandemic and the Russian invasion of Ukraine have highlighted the importance of tail events in the global market for crude oil which call for careful risk assessment. In this paper we focus on forecasting tail risks in the oil market by setting up a general empirical framework that allows for flexible predictive distributions of oil prices that can depart from normality. This model, based on Bayesian additive regression trees, remains agnostic on the functional form of the conditional mean relations and assumes that the shocks are driven by a stochastic volatility model. We show that our nonparametric approach improves in terms of tail forecasts upon three competing models: quantile regressions commonly used for studying tail events, the Bayesian VAR with stochastic volatility, and the simple random walk. We illustrate the practical relevance of our new approach by tracking the evolution of predictive densities during three recent economic and geopolitical crisis episodes, by developing consumer and producer distress indices that signal the build-up of upside and downside price risk, and by conducting a risk scenario analysis for 2024.
    JEL: C11 C32 C53 Q41 Q47
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:32524&r=
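    Sketch: turning draws from a predictive distribution into tail-risk summaries (quantiles and tail expectations); the draws below are simulated from a fat-tailed toy distribution rather than produced by the paper's model.
```python
# Tail-risk summaries from draws of a predictive distribution. In the paper these
# draws would come from the posterior predictive of the BART model with stochastic
# volatility; here they are just simulated fat-tailed numbers.
import numpy as np

rng = np.random.default_rng(5)
draws = rng.standard_t(df=4, size=10_000) * 8 + 75    # toy oil-price draws

q05, q95 = np.quantile(draws, [0.05, 0.95])
es_low = draws[draws <= q05].mean()    # expected shortfall in the lower tail
es_high = draws[draws >= q95].mean()   # expected value in the upper tail
print(f"5% quantile {q05:.1f}, lower-tail mean {es_low:.1f}, "
      f"95% quantile {q95:.1f}, upper-tail mean {es_high:.1f}")
```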
  11. By: Adam Baybutt
    Abstract: We develop novel estimation procedures with supporting econometric theory for a dynamic latent-factor model with high-dimensional asset characteristics, that is, the number of characteristics is on the order of the sample size. Utilizing the Double Selection Lasso estimator, our procedure employs regularization to eliminate characteristics with low signal-to-noise ratios yet maintains asymptotically valid inference for asset pricing tests. The crypto asset class is well-suited for applying this model given the limited number of tradable assets and years of data as well as the rich set of available asset characteristics. The empirical results present out-of-sample pricing abilities and risk-adjusted returns for our novel estimator as compared to benchmark methods. We provide an inference procedure for measuring the risk premium of an observable nontradable factor, and employ this to find that the inflation-mimicking portfolio in the crypto asset class has positive risk compensation.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.15721&r=
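    Sketch: the Double Selection Lasso idea in a plain cross-sectional setting with scikit-learn; the paper embeds it in a dynamic latent-factor model with supporting inference theory, which is not reproduced here.
```python
# Double-selection Lasso sketch: select controls that predict the outcome, select
# controls that predict the regressor of interest, then run OLS of the outcome on
# that regressor plus the union of selected controls. Cross-sectional toy example.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(6)
n, p = 500, 100
X = rng.normal(size=(n, p))                           # high-dimensional characteristics
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)      # regressor of interest
y = 1.0 * d + X[:, 0] - X[:, 2] + rng.normal(size=n)  # true coefficient on d is 1.0

sel_y = np.abs(LassoCV(cv=5).fit(X, y).coef_) > 1e-6  # controls predicting y
sel_d = np.abs(LassoCV(cv=5).fit(X, d).coef_) > 1e-6  # controls predicting d
use = sel_y | sel_d                                   # union of selected controls

Z = np.column_stack([d, X[:, use]])
fit = LinearRegression().fit(Z, y)
print(f"post-double-selection estimate of the coefficient on d: {fit.coef_[0]:.2f}")
```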
  12. By: Dongwoo Kim; Young Jun Lee
    Abstract: This paper proposes empirically tractable multidimensional matching models, focusing on worker-job matching. We generalize the parametric model proposed by Lindenlaub (2017), which relies on the assumption of joint normality of observed characteristics of workers and jobs. In our paper, we allow unrestricted distributions of characteristics and show identification of the production technology, and equilibrium wage and matching functions using tools from optimal transport theory. Given identification, we propose efficient, consistent, asymptotically normal sieve estimators. We revisit Lindenlaub’s empirical application and show that, between 1990 and 2010, the U.S. economy experienced much larger technological progress favouring cognitive abilities than the original findings suggest. Furthermore, our flexible model specifications provide a significantly better fit for patterns in the evolution of wage inequality.
    Date: 2024–05–28
    URL: https://d.repec.org/n?u=RePEc:azt:cemmap:12/24&r=
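    Sketch: a finite assignment analogue of the matching problem, solved with SciPy's linear-sum-assignment routine under a toy bilinear production technology; the paper's continuous multidimensional case relies on optimal transport theory and sieve estimation.
```python
# Discrete optimal-assignment analogue of worker-job matching: pair workers with
# jobs to maximise total output given a surplus matrix. Toy version only; the paper
# treats continuous multidimensional types via optimal transport.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(8)
n = 5
workers = rng.normal(size=(n, 2))     # worker skills (e.g., cognitive, manual)
jobs = rng.normal(size=(n, 2))        # job skill requirements

surplus = workers @ jobs.T            # assumed bilinear production technology
row, col = linear_sum_assignment(surplus, maximize=True)
print(list(zip(row.tolist(), col.tolist())),
      "total output:", round(surplus[row, col].sum(), 2))
```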
  13. By: Aryan Bhambu; Arabin Kumar Dey
    Abstract: This research paper introduces innovative approaches for multivariate time series forecasting based on different variations of the combined regression strategy (COBRA). We use specific data preprocessing techniques that substantially change the behaviour of the predictions. We compare the performance of the model under two types of hyper-parameter tuning: Bayesian optimisation (BO) and the usual grid search. Our proposed methodologies outperform all state-of-the-art comparative models. We illustrate the methodologies on eight time series datasets from three categories: cryptocurrency, stock index, and short-term load forecasting.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.04539&r=
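    Sketch: the basic COBRA aggregation step (after Biau et al., 2016) that the paper's sequential variations build on: average the training responses whose machine predictions fall within eps of the machine predictions at the query point.
```python
# Minimal COBRA-style combined regression. Two machines are fit on one half of the
# data; a prediction at a query point averages the second-half responses whose
# machine predictions are all within eps of the machine predictions at the query.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 400
X = rng.uniform(-3, 3, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

X1, y1, X2, y2 = X[:200], y[:200], X[200:], y[200:]
machines = [Ridge().fit(X1, y1), DecisionTreeRegressor(max_depth=4).fit(X1, y1)]

def cobra_predict(x_query, eps=0.3):
    preds_query = np.array([m.predict(x_query.reshape(1, -1))[0] for m in machines])
    preds_train = np.column_stack([m.predict(X2) for m in machines])
    keep = np.all(np.abs(preds_train - preds_query) <= eps, axis=1)
    return y2[keep].mean() if keep.any() else preds_query.mean()

x0 = np.array([1.0, -0.5])
print(f"COBRA prediction at x0: {cobra_predict(x0):.2f}, "
      f"truth: {np.sin(1.0) + 0.5 * (-0.5):.2f}")
```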
  14. By: Tomohiro Ando; Jushan Bai; Lina Lu; Cindy M. Vojtech
    Abstract: We characterize the U.S. interbank liquidity risk network based on a supervisory dataset, using a scenario-based quantile network connectedness approach. In terms of methodology, we consider a quantile vector autoregressive model with unobserved heterogeneity and propose a Bayesian nuclear norm estimation method. A common factor structure is employed to deal with unobserved heterogeneity that may exhibit endogeneity within the network. Then we develop a scenario-based quantile network connectedness framework by accommodating various economic scenarios, through a scenario-based moving average expression of the model where forecast error variance decomposition under a future pre-specified scenario is derived. The methodology is used to study the quantile-dependent liquidity risk network among large U.S. bank holding companies. The estimated quantile liquidity risk network connectedness measures could be useful for bank supervision and financial stability monitoring by providing leading indicators of the system-wide liquidity risk connectedness not only at the median but also at the tails or even under a pre-specified scenario. The measures also help identify systemically important banks and vulnerable banks in the liquidity risk transmission of the U.S. banking system.
    Keywords: nuclear norm; Bayesian analysis; scenario-based quantile connectedness; bank supervision; financial stability
    JEL: C11 C31 C32 C33 C58 G21 G28
    Date: 2024–04–18
    URL: https://d.repec.org/n?u=RePEc:fip:fedbqu:98335&r=
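    Sketch: the connectedness bookkeeping behind such measures, shown as a Diebold-Yilmaz style table from the generalized forecast error variance decomposition of a toy VAR(1) with assumed parameters; the paper's quantile, scenario-based framework with unobserved heterogeneity is far richer.
```python
# Generalized FEVD connectedness for a VAR(1) with known (assumed) parameters.
# Only illustrates the connectedness table construction, not the paper's Bayesian
# nuclear-norm estimation, common factors or scenario-based quantile machinery.
import numpy as np

Phi = np.array([[0.5, 0.2, 0.0],
                [0.1, 0.4, 0.1],
                [0.0, 0.3, 0.5]])       # VAR(1) coefficient matrix (assumed)
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])     # error covariance matrix (assumed)
H, n = 10, Phi.shape[0]

A = [np.linalg.matrix_power(Phi, h) for h in range(H)]   # MA coefficients of a VAR(1)
theta = np.zeros((n, n))
for i in range(n):
    denom = sum(A[h][i] @ Sigma @ A[h][i] for h in range(H))
    for j in range(n):
        num = sum((A[h][i] @ Sigma[:, j]) ** 2 for h in range(H)) / Sigma[j, j]
        theta[i, j] = num / denom
theta /= theta.sum(axis=1, keepdims=True)                # row-normalised FEVD shares

total_connectedness = 100 * (1 - np.trace(theta) / n)    # average off-diagonal share
print(np.round(theta, 2))
print(f"total connectedness: {total_connectedness:.1f}%")
```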
  15. By: Siddhartha Chib; Simon C. Smith
    Abstract: We develop a new approach to select risk factors in an asset pricing model that allows the set to change at multiple unknown break dates. Using data since 1963 on the six factors listed in the paper's Table 1, we document a marked shift towards parsimonious models in the last two decades. Prior to 2005, five or six factors are selected, but just two are selected thereafter. This finding offers a simple implication for the factor zoo literature: ignoring breaks detects additional factors that are no longer relevant. Moreover, all omitted factors are priced by the selected factors in every regime. Finally, the selected factors outperform popular factor models as an investment strategy.
    Keywords: Model comparison; Factor models; Structural breaks; Anomaly; Bayesian analysis; Discount factor; Portfolio analysis; Sparsity
    JEL: G12 C11 C12 C52 C58
    Date: 2024–05–31
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:2024-37&r=
  16. By: Joshua Brault; Qazi Haque; Louis Phaneuf
    Abstract: Taylor rules and their implications for monetary policy analysis can be misleading if the inflation target is held fixed while being in fact time-varying. We offer a theoretical analysis showing why assuming a fixed inflation target in place of a time-varying target can lead to a downward bias in the estimated policy rate response to the inflation gap and wrong statistical inference about indeterminacy. Our analysis suggests the bias is stronger in periods where inflation target movements are large. This is confirmed by simulation evidence about the magnitude of the bias obtained from a New Keynesian model featuring positive trend inflation. We further estimate medium-scale NK models with positive trend inflation and a time-varying inflation target using a novel population-based MCMC routine known as parallel tempering. The estimation results confirm our theoretical analysis while favouring a determinacy outcome for both the pre- and post-Volcker periods and shedding new light on the type of rule the Fed likely followed.
    Keywords: Taylor rule estimation, time-varying inflation target, omitted variable bias
    JEL: E50 E52 E58
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:een:camaaa:2024-39&r=
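    Sketch: a stylised omitted-variable-bias calculation consistent with the argument in the abstract, assuming a static rule and a policy shock uncorrelated with inflation; the paper's analysis is richer.
```latex
% True rule with a time-varying target \pi_t^*, versus a regression that omits it:
%   i_t = r + \phi\,(\pi_t - \pi_t^*) + \varepsilon_t,
%   estimated as  i_t = \alpha + \phi\,\pi_t + u_t, where u_t = \varepsilon_t - \phi\,\pi_t^*.
% With \varepsilon_t uncorrelated with \pi_t, least squares gives
\operatorname*{plim}\,\hat{\phi}
  = \phi \left( 1 - \frac{\operatorname{Cov}(\pi_t, \pi_t^{*})}{\operatorname{Var}(\pi_t)} \right),
% so a target that co-moves positively with inflation biases \hat{\phi} toward zero,
% and more strongly when target movements are large.
```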
  17. By: Fitzgerald, Jack
    Abstract: Equivalence testing methods can provide statistically significant evidence that relationships are practically equal to zero. I demonstrate their necessity in a systematic reproduction of estimates defending 135 null claims made in 81 articles from top economics journals. 37-63% of these estimates cannot be significantly bounded beneath benchmark effect sizes. Though prediction platform data reveals that researchers find these equivalence testing 'failure rates' to be unacceptable, researchers actually expect unacceptably high failure rates, accurately predicting that failure rates exceed acceptable thresholds by around 23 percentage points. To obtain failure rates that researchers deem acceptable, one must contend that nearly half of published effect sizes in economics are practically equivalent to zero. Because such a claim is ludicrous, Type II error rates are likely quite high throughout economics. This paper provides economists with empirical justification, guidelines, and commands in Stata and R for conducting credible equivalence testing in future research.
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:i4rdps:125&r=
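    Sketch: the generic two-one-sided-tests (TOST) logic behind equivalence testing; the paper supplies dedicated Stata and R commands, so the Python function below is only an illustration.
```python
# Two one-sided tests (TOST) for equivalence: an estimate is declared practically
# equal to zero if it is significantly above -delta AND significantly below +delta,
# where delta is a pre-specified smallest effect size of interest.
from scipy import stats

def tost(estimate, se, delta, df):
    """Return the TOST p-value for equivalence within (-delta, +delta)."""
    t_lower = (estimate + delta) / se       # tests H0: effect <= -delta
    t_upper = (estimate - delta) / se       # tests H0: effect >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)            # small value => equivalence established

# Example: estimate 0.02, standard error 0.05, equivalence bound 0.15, 200 d.o.f.
print(f"TOST p-value: {tost(0.02, 0.05, 0.15, df=200):.3f}")
```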
  18. By: Javed, Farrukh (Lund University); Loperfido, Nicola (Università degli Studi di Urbino "Carlo Bo"); Mazur, Stepan (Örebro University School of Business)
    Abstract: Multivariate random sums appear in many scientific fields, most notably in actuarial science, where they model both the number of claims and their sizes. Unfortunately, they pose severe inferential problems. For example, their density function is analytically intractable, in the general case, thus preventing likelihood inference. In this paper, we address the problem by the method of moments, under the assumption that the claim size and the claim number have a multivariate skew-normal and a Poisson distribution, respectively. In doing so, we also derive closed-form expressions for some fundamental measures of multivariate kurtosis and highlight some limitations of both projection pursuit and invariant coordinate selection.
    Keywords: Fourth cumulant; Kurtosis; Poisson distribution; Skew-normal distribution.
    JEL: C13 C30 C46
    Date: 2024–06–18
    URL: https://d.repec.org/n?u=RePEc:hhs:oruesi:2024_006&r=
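    For reference, the first two moments of a multivariate random (compound Poisson) sum have simple closed forms, shown below; the paper's fourth-cumulant and kurtosis results under skew-normal claim sizes are not reproduced here.
```latex
% Let S = X_1 + \dots + X_N with N \sim \mathrm{Poisson}(\lambda) independent of the
% i.i.d. claim-size vectors X_i, where E[X_i] = \mu and \operatorname{Cov}(X_i) = \Sigma. Then
\mathbb{E}[S] = \lambda\,\mu,
\qquad
\operatorname{Cov}(S) = \lambda\,\mathbb{E}\!\left[X_1 X_1^{\top}\right]
                      = \lambda \left( \Sigma + \mu\mu^{\top} \right).
```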

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.