
nep-ecm New Economics Papers
on Econometrics
Issue of 2022‒05‒02
thirty papers chosen by
Sune Karlsson
Örebro universitet

  1. Pairwise Valid Instruments By Zhenting Sun; Kaspar Wüthrich
  2. Dynamic Spatiotemporal ARCH Models By Philipp Otto; Osman Doğan; Süleyman Taşpınar
  3. Portfolio Optimization Using a Consistent Vector-Based MSE Estimation Approach By Maaz Mahadi; Tarig Ballal; Muhammad Moinuddin; Tareq Y. Al-Naffouri; Ubaid Al-Saggaf
  4. Inference in Linear Dyadic Data Models with Network Spillovers By Nathan Canen; Ko Sugiura
  5. Fast Simulation-Based Bayesian Estimation of Heterogeneous and Representative Agent Models using Normalizing Flow Neural Networks By Cameron Fen
  6. Estimation of a Factor-Augmented Linear Model with Applications Using Student Achievement Data By Matthew Harding; Carlos Lamarche; Chris Muris
  7. On Robust Inference in Time Series Regression By Richard T. Baillie; Francis X. Diebold; George Kapetanios; Kun Ho Kim
  8. Weighted-average quantile regression By Denis Chetverikov; Yukun Liu; Aleh Tsyvinski
  9. Estimating Nonlinear Network Data Models with Fixed Effects By David William Hughes
  10. A multiplicative thinning-based integer-valued GARCH model By Aknouche, Abdelhakim; Scotto, Manuel
  11. Latent Unbalancedness in Three-Way Gravity Models By Daniel Czarnowske; Amrei Stammann
  12. Encompassing Tests for Nonparametric Regressions By Elia Lapenta; Pascal Lavergne
  13. Clustered Local Average Treatment Effects: Fields of Study and Academic Student Progress By Nibbering, Didier; Oosterveen, Matthijs; Silva, Pedro Luís
  14. Difference-in-Differences for Policy Evaluation By Brantly Callaway
  15. Testing mediation effects using logic of Boolean matrices By Shi, Chengchun; Li, Lexin
  16. Minimax Risk in Estimating Kink Threshold and Testing Continuity By Javier Hidalgo; Heejun Lee; Jungyoon Lee; Myung Hwan Seo
  17. Honest calibration assessment for binary outcome predictions By Timo Dimitriadis; Lutz Duembgen; Alexander Henzi; Marius Puke; Johanna Ziegel
  18. A Classifier-Lasso Approach for Estimating Production Functions with Latent Group Structures By Daniel Czarnowske
  19. Automatic Debiased Machine Learning for Dynamic Treatment Effects By Rahul Singh; Vasilis Syrgkanis
  20. Estimating causal effects with optimization-based methods: A review and empirical comparison By Martin Cousineau; Vedat Verter; Susan A. Murphy; Joelle Pineau
  21. Are Instrumental Variables Really That Instrumental? Endogeneity Resolution in Regression Models for Comparative Studies By Ravi Kashyap
  22. Reducing overestimating and underestimating volatility via the augmented blending-ARCH model By Jun Lu; Shao Yi
  23. Correcting Attrition Bias using Changes-in-Changes By Dalia Ghanem; Sarojini Hirshleifer; Desire Kedagni; Karen Ortiz-Becerra
  24. Synthetic Controls in Action By Alberto Abadie; Jaume Vives-i-Bastida
  25. A primer on Variational Laplace By Zeidman, Peter; Friston, Karl; Parr, Thomas
  26. DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series By Jase Clarkson; Mihai Cucuringu; Andrew Elliott; Gesine Reinert
  27. A general characterization of optimal tie-breaker designs By Harrison H. Li; Art B. Owen
  28. Ensemble learning for portfolio valuation and risk management By Lotfi Boudabsa; Damir Filipović
  29. Multivariate doubly truncated moments for generalized skew-elliptical distributions with application to multivariate tail conditional risk measures By Baishuai Zuo; Chuancun Yin
  30. Discussion of estimating linearized heterogeneous agent models using panel data By Den Haan, Wouter J.

  1. By: Zhenting Sun; Kaspar Wüthrich
    Abstract: Finding valid instruments is difficult. We propose Validity Set Instrumental Variable (VSIV) regression, a method for estimating treatment effects when the instruments are partially invalid. VSIV regression exploits testable implications for instrument validity to remove invalid variation in the instruments. We show that the proposed VSIV estimators are asymptotically normal under weak conditions and always remove or reduce the asymptotic bias relative to standard IV estimators.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.08050&r=
  2. By: Philipp Otto; Osman Doğan; Süleyman Taşpınar
    Abstract: Geo-referenced data are characterized by an inherent spatial dependence due to geographical proximity. In this paper, we introduce a dynamic spatiotemporal autoregressive conditional heteroscedasticity (ARCH) process to describe the effects of (i) the log-squared time-lagged outcome variable, i.e., the temporal effect, (ii) the spatial lag of the log-squared outcome variable, i.e., the spatial effect, and (iii) the spatial lag of the log-squared time-lagged outcome variable, i.e., the spatiotemporal effect, on the volatility of an outcome variable. Furthermore, the suggested process allows for fixed effects over time and space to account for unobserved heterogeneity. For this dynamic spatiotemporal ARCH model, we derive a generalized method of moments (GMM) estimator based on the linear and quadratic moment conditions of a specific transformation. We show the consistency and asymptotic normality of the GMM estimator, and determine the best set of moment functions. We investigate the finite-sample properties of the proposed GMM estimator in a series of Monte-Carlo simulations with different model specifications and error distributions. Our simulation results show that the suggested GMM estimator has good finite-sample properties. In an empirical application, we use monthly log-returns of the average condominium prices of each postcode of Berlin from 1995 to 2015 (190 spatial units, 240 time points) to demonstrate the use of the suggested model. Our estimation results show that the temporal, spatial and spatiotemporal lags of the log-squared returns have statistically significant effects on the volatility of the log-returns.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.13856&r=
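    The three volatility channels (i)-(iii) are easy to see in a short simulation. The sketch below generates data from a reduced form of such a process under purely illustrative assumptions (coefficient values gamma, rho, lam and the error law are made up here); it illustrates the data-generating process only, not the paper's GMM estimator.
```python
# Minimal reduced-form simulation of a dynamic spatiotemporal log-ARCH
# process. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T = 25, 200                        # spatial units, time points
W = rng.random((N, N))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)     # row-standardized spatial weights

gamma, rho, lam, alpha = 0.2, 0.2, 0.1, -0.5   # temporal / spatial /
                                               # spatiotemporal / constant
logy2 = np.zeros((T, N))              # log-squared outcomes
A = np.eye(N) - rho * W               # contemporaneous spatial system
for t in range(1, T):
    u = np.log(rng.standard_normal(N) ** 2)    # log-squared innovations
    rhs = alpha + gamma * logy2[t - 1] + lam * (W @ logy2[t - 1]) + u
    logy2[t] = np.linalg.solve(A, rhs)         # solve for the spatial lag
y = np.exp(0.5 * logy2) * np.sign(rng.standard_normal((T, N)))
```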
  3. By: Maaz Mahadi; Tarig Ballal; Muhammad Moinuddin; Tareq Y. Al-Naffouri; Ubaid Al-Saggaf
    Abstract: This paper is concerned with optimizing the global minimum-variance portfolio's (GMVP) weights in high-dimensional settings where both observation and population dimensions grow at a bounded ratio. Optimizing the GMVP weights is highly influenced by the data covariance matrix estimation. In a high-dimensional setting, it is well known that the sample covariance matrix is not a proper estimator of the true covariance matrix since it is not invertible when we have fewer observations than the data dimension. Even with more observations, the sample covariance matrix may not be well-conditioned. This paper determines the GMVP weights based on a regularized covariance matrix estimator to overcome the aforementioned difficulties. Unlike other methods, the proper selection of the regularization parameter is achieved by minimizing the mean-squared error of an estimate of the noise vector that accounts for the uncertainty in the data mean estimation. Using random-matrix-theory tools, we derive a consistent estimator of the achievable mean-squared error that allows us to find the optimal regularization parameter using a simple line search. Simulation results demonstrate the effectiveness of the proposed method when the data dimension is larger than the number of data samples or of the same order.
    Date: 2022–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2204.05611&r=
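    A minimal sketch of the portfolio construction described above: GMVP weights computed from a ridge-regularized sample covariance matrix. The regularization parameter is chosen here by a naive validation-variance line search; the paper's actual criterion, a random-matrix-theory estimate of the MSE of a noise-vector estimate, is not reproduced.
```python
# GMVP weights with a regularized covariance matrix (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 100                        # fewer observations than assets
R = rng.standard_normal((n, p)) * 0.01
R_train, R_val = R[:40], R[40:]

def gmvp_weights(returns, delta):
    """GMVP weights from a ridge-regularized sample covariance matrix."""
    S = np.cov(returns, rowvar=False)
    S_reg = S + delta * np.eye(S.shape[0])   # restores invertibility
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S_reg, ones)
    return w / (ones @ w)                    # normalize: weights sum to one

# naive line search over delta on held-out variance (stand-in criterion)
deltas = np.logspace(-6, -2, 25)
oos_var = [np.var(R_val @ gmvp_weights(R_train, d)) for d in deltas]
w = gmvp_weights(R, deltas[int(np.argmin(oos_var))])
print("sum of weights:", w.sum())
```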
  4. By: Nathan Canen; Ko Sugiura
    Abstract: When using dyadic data (i.e., data indexed by pairs of units, such as trade flow data between two countries), researchers typically assume a linear model, estimate it using Ordinary Least Squares and conduct inference using "dyadic-robust" variance estimators. The latter assumes that dyads are uncorrelated if they do not share a common unit (e.g., if one country does not appear in both pairs of trade flow data). We show that this assumption does not hold in many empirical applications because indirect links may exist due to network connections, e.g., different country-pairs may have correlated trade outcomes due to sharing common trading partner links. Hence, as we prove, then show in Monte Carlo simulations, "dyadic-robust" estimators can be severely biased. We develop a consistent variance estimator appropriate for such contexts by leveraging results in network econometrics. Our estimator has good finite sample properties in numerical simulations. We then illustrate our message with an application to voting behavior by seating neighbors in the European Parliament.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.03497&r=
  5. By: Cameron Fen
    Abstract: This paper proposes a simulation-based deep learning Bayesian procedure for the estimation of macroeconomic models. The approach can derive posteriors even when the likelihood function is not tractable. Because the likelihood is not needed for Bayesian estimation, filtering is also not needed. This allows Bayesian estimation of HANK models with upwards of 800 latent states, as well as estimation of representative-agent models that are solved with methods that do not yield a likelihood (for example, projection and value function iteration approaches). I demonstrate the validity of the approach by estimating a 10-parameter HANK model solved via the Reiter method that generates 812 covariates per time step, of which 810 are latent variables, showing that the procedure can handle a large latent space without model reduction. I also use the algorithm to estimate an 11-parameter model solved via value function iteration, which cannot be estimated with Metropolis-Hastings or even conventional maximum likelihood estimators. In addition, I show that the posteriors estimated on the Smets and Wouters (2007) model are of higher quality, and are obtained faster, with simulation-based inference than with Metropolis-Hastings. This approach helps address the computational expense of Metropolis-Hastings and allows estimation with solution methods that do not yield a tractable likelihood.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.06537&r=
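    For intuition on likelihood-free Bayesian estimation, here is a toy rejection-ABC posterior, a deliberately simpler simulation-based method than the paper's normalizing-flow approach. It only illustrates the core point that a simulator alone, with no tractable likelihood and no filtering, can yield a posterior.
```python
# Toy rejection ABC: posterior from simulation only, no likelihood.
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(0.7, 1.0, size=100)    # observed "data", true mean 0.7

def simulate(theta, n=100):
    # the simulator is all we need; the likelihood is never evaluated
    return rng.normal(theta, 1.0, size=n)

prior_draws = rng.uniform(-2.0, 2.0, size=50_000)
sim_means = np.array([simulate(t).mean() for t in prior_draws])
accepted = prior_draws[np.abs(sim_means - y_obs.mean()) < 0.05]
print(f"posterior mean ~ {accepted.mean():.2f} from {accepted.size} draws")
```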
  6. By: Matthew Harding; Carlos Lamarche; Chris Muris
    Abstract: In many longitudinal settings, economic theory does not guide practitioners on the type of restrictions that must be imposed to solve the rotational indeterminacy of factor-augmented linear models. We study this problem and offer several novel results on identification using internally generated instruments. We propose a new class of estimators and establish large sample results using recent developments on clustered samples and high-dimensional models. We carry out simulation studies which show that the proposed approaches improve the performance of existing methods on the estimation of unknown factors. Lastly, we consider three empirical applications using administrative data of students clustered in different subjects in elementary school, high school and college.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.03051&r=
  7. By: Richard T. Baillie; Francis X. Diebold; George Kapetanios; Kun Ho Kim
    Abstract: Least squares regression with heteroskedasticity and autocorrelation consistent (HAC) standard errors has proved very useful in cross section environments. However, several major difficulties, which are generally overlooked, must be confronted when transferring the HAC estimation technology to time series environments. First, most economic time series have strong autocorrelation, which renders HAC regression parameter estimates highly inefficient. Second, strong autocorrelation similarly renders HAC conditional predictions highly inefficient. Finally, the structure of most popular HAC estimators is ill-suited to capture the autoregressive autocorrelation typically present in economic time series, which produces large size distortions and reduced power in hypothesis testing, in all but the largest sample sizes. We show that all three problems are largely avoided by the use of a simple dynamic regression (DynReg), which is easily implemented and also avoids possible problems concerning strong exogeneity. We demonstrate the advantages of DynReg with detailed simulations covering a range of practical issues.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.04080&r=
  8. By: Denis Chetverikov; Yukun Liu; Aleh Tsyvinski
    Abstract: In this paper, we introduce the weighted-average quantile regression framework, $\int_0^1 q_{Y|X}(u)\psi(u)du = X'\beta$, where $Y$ is a dependent variable, $X$ is a vector of covariates, $q_{Y|X}$ is the quantile function of the conditional distribution of $Y$ given $X$, $\psi$ is a weighting function, and $\beta$ is a vector of parameters. We argue that this framework is of interest in many applied settings and develop an estimator of the vector of parameters $\beta$. We show that our estimator is $\sqrt T$-consistent and asymptotically normal with mean zero and easily estimable covariance matrix, where $T$ is the size of the available sample. We demonstrate the usefulness of our estimator by applying it in two empirical settings. In the first setting, we focus on financial data and study the factor structures of the expected shortfalls of the industry portfolios. In the second setting, we focus on wage data and study inequality and social welfare dependence on commonly used individual characteristics.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.03032&r=
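    The estimand can be illustrated numerically. The sketch below approximates $\int_0^1 q_{Y|X}(u)\psi(u)du$ in a linear location model by a weighted grid average of standard quantile-regression fits; the weighting function $\psi(u) = u(1-u)$ is a hypothetical choice, and this grid average illustrates the estimand only, not the authors' estimator.
```python
# Weighted grid average of quantile-regression coefficients (illustration).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
X = sm.add_constant(rng.standard_normal((n, 2)))
beta_true = np.array([1.0, 0.5, -0.3])
Y = X @ beta_true + rng.standard_normal(n)

us = np.linspace(0.05, 0.95, 19)      # quantile grid on (0, 1)
psi = us * (1.0 - us)                 # hypothetical weighting function
w = psi / psi.sum()                   # discrete weights approximating psi(u)du

# Under this location model, q_{Y|X}(u) = X'beta(u), so the weighted
# average of the quantile-regression coefficients recovers beta.
betas = np.array([sm.QuantReg(Y, X).fit(q=u).params for u in us])
print((w[:, None] * betas).sum(axis=0))   # close to beta_true
```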
  9. By: David William Hughes
    Abstract: This paper considers estimation of a directed network model in which outcomes are driven by dyad-specific variables (such as measures of homophily) as well as unobserved agent-specific parameters that capture degree heterogeneity. I develop a jackknife bias correction to deal with the incidental parameters problem that arises from fixed effect estimation of the model. In contrast to previous proposals, the jackknife approach is easily adaptable to different models and allows for non-binary outcome variables. Additionally, since the jackknife estimates all parameters in the model, including fixed effects, it allows researchers to construct estimates of average effects and counterfactual outcomes. I also show how the jackknife can be used to bias-correct fixed effect averages over functions that depend on multiple nodes, e.g. triads or tetrads in the network. As an example, I implement specification tests for dependence across dyads, such as reciprocity or transitivity. Finally, I demonstrate the usefulness of the estimator in an application to a gravity model for import/export relationships across countries.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.15603&r=
  10. By: Aknouche, Abdelhakim; Scotto, Manuel
    Abstract: In this paper we introduce a multiplicative integer-valued time series model, which is defined as the product of a unit-mean integer-valued independent and identically distributed (iid) sequence, and an integer-valued dependent process. The latter is defined as a binomial thinning operation of its own past and of the past of the observed process. Furthermore, it combines some features of the integer-valued GARCH (INGARCH), the autoregressive conditional duration (ACD), and the integer autoregression (INAR) processes. The proposed model is semi-parametric and is able to parsimoniously generate very high overdispersion, persistence, and heavy-tailedness. The dynamic probabilistic structure of the model is first analyzed. In addition, parameter estimation is considered by using a two-stage weighted least squares estimate (2SWLSE), consistency and asymptotic normality (CAN) of which are established under mild conditions. Applications of the proposed formulation to simulated and actual count time series data are provided.
    Keywords: Integer-valued time series, INAR model, INGARCH model, multiplicative error model (MEM), ACD model, two-stage weighted least squares.
    JEL: C01 C13 C22 C25
    Date: 2022–03–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:112475&r=
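    A minimal simulation sketch of the multiplicative thinning structure: the observed count is a unit-mean integer-valued multiplier times a latent count built from binomial thinning of its own past and of the observed past. Parameter values and the Poisson multiplier law are illustrative assumptions, not the paper's specification, and the 2SWLSE estimator is not reproduced.
```python
# Simulating a multiplicative thinning-based integer-valued process.
import numpy as np

rng = np.random.default_rng(4)
T, omega, a, b = 500, 2, 0.4, 0.3     # illustrative parameters

def thin(count, prob):
    # binomial thinning: each of `count` units survives with prob. `prob`
    return rng.binomial(count, prob)

Z = np.zeros(T, dtype=int)            # latent dependent count process
X = np.zeros(T, dtype=int)            # observed counts
for t in range(1, T):
    Z[t] = omega + thin(Z[t - 1], a) + thin(X[t - 1], b)
    V = rng.poisson(1.0)              # unit-mean integer-valued multiplier
    X[t] = V * Z[t]                   # multiplicative structure
print(X.mean(), X.var())              # overdispersion: variance >> mean
```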
  11. By: Daniel Czarnowske; Amrei Stammann
    Abstract: Many panel data sets used for pseudo-Poisson estimation of three-way gravity models are implicitly unbalanced because uninformative observations are redundant for the estimation. We show with real data as well as simulations that this phenomenon, which we call latent unbalancedness, amplifies the inference problem recently studied by Weidner and Zylkin (2021).
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.02235&r=
  12. By: Elia Lapenta; Pascal Lavergne
    Abstract: We set up a formal framework to characterize encompassing of nonparametric models through the L2 distance. We contrast it to previous literature on the comparison of nonparametric regression models. We then develop testing procedures for the encompassing hypothesis that are fully nonparametric. Our test statistics depend on kernel regression, raising the issue of bandwidth choice. We investigate two alternative approaches to obtain a "small bias property" for our test statistics. We show the validity of a wild bootstrap method, and we illustrate the attractive features of our tests for small and moderate samples.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.06685&r=
  13. By: Nibbering, Didier (Monash University); Oosterveen, Matthijs (University of Porto); Silva, Pedro Luís (University of Porto)
    Abstract: Multiple unordered treatments with a binary instrument for each treatment are common in policy evaluation. This multiple treatment setting allows for different types of changes in treatment status that are non-compliant with the activated instrument. Therefore, instrumental variable (IV) methods have to rely on strong assumptions on the subjects' behavior to identify local average treatment effects (LATEs). This paper introduces a new IV strategy that identifies an interpretable weighted average of LATEs under relaxed assumptions, in the presence of clusters with similar treatments. The clustered LATEs allow for shifts across treatment clusters that are consistent with preference updating, but render IV estimation of individual LATEs biased. The clustered LATEs are estimated by standard IV methods, and we provide an algorithm that estimates the treatment clusters. We empirically analyze the effect of fields of study on academic student progress, and find violations of the LATE assumptions in line with preference updating, clusters with similar fields, treatment effect heterogeneity across students, and significant differences in student progress due to fields of study.
    Keywords: treatment clusters, instrumental variables, multiple treatments, field of study
    JEL: C36 I21 I23
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp15159&r=
  14. By: Brantly Callaway
    Abstract: Difference-in-differences is one of the most widely used identification strategies in empirical work in economics. This chapter reviews a number of important, recent developments related to difference-in-differences. First, this chapter reviews recent work pointing out limitations of two-way fixed effects regressions (the panel data regressions that have been the dominant approach to implementing difference-in-differences identification strategies) that arise in empirically relevant settings where there are more than two time periods, variation in treatment timing across units, and treatment effect heterogeneity. Second, this chapter reviews recently proposed alternative approaches that are able to circumvent these issues without being substantially more complicated to implement. Third, this chapter covers a number of extensions to these results, paying particular attention to (i) parallel trends assumptions that hold only after conditioning on observed covariates and (ii) strategies to partially identify causal effect parameters in difference-in-differences applications in cases where the parallel trends assumption may be violated.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.15646&r=
  15. By: Shi, Chengchun; Li, Lexin
    Abstract: A central question in high-dimensional mediation analysis is to infer the significance of individual mediators. The main challenge is that the total number of potential paths that go through any mediator is super-exponential in the number of mediators. Most existing mediation inference solutions either explicitly impose that the mediators are conditionally independent given the exposure, or ignore any potential directed paths among the mediators. In this article, we propose a novel hypothesis testing procedure to evaluate individual mediation effects, while taking into account potential interactions among the mediators. Our proposal thus fills a crucial gap, and greatly extends the scope of existing mediation tests. Our key idea is to construct the test statistic using the logic of Boolean matrices, which enables us to establish the proper limiting distribution under the null hypothesis. We further employ screening, data splitting, and decorrelated estimation to reduce the bias and increase the power of the test. We show that our test can control both the size and false discovery rate asymptotically, and the power of the test approaches one, while allowing the number of mediators to diverge to infinity with the sample size. We demonstrate the efficacy of the method through simulations and a neuroimaging study of Alzheimer’s disease. A Python implementation of the proposed procedure is available at https://github.com/callmespring/LOGAN.
    Keywords: Boolean matrix; Directed acyclic graph; Gaussian graphical model; High-dimensional inference; Mediation analysis; Neuroimaging analysis
    JEL: C1
    Date: 2021–04–20
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:108881&r=
  16. By: Javier Hidalgo; Heejun Lee; Jungyoon Lee; Myung Hwan Seo
    Abstract: We derive a risk lower bound in estimating the threshold parameter without knowing whether the threshold regression model is continuous or not. The bound goes to zero as the sample size $n$ grows only at the cube root rate. Motivated by this finding, we develop a continuity test for the threshold regression model and a bootstrap to compute its p-values. The validity of the bootstrap is established, and its finite sample property is explored through Monte Carlo simulations.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.00349&r=
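    For context, the threshold parameter in a continuous kink model is typically estimated by profiled least squares over candidate thresholds, as in the generic grid-search sketch below (an illustration of the setting, not the paper's procedure or its bootstrap).
```python
# Profiled least-squares estimation of a kink threshold (generic sketch).
import numpy as np

rng = np.random.default_rng(8)
n, tau0 = 1000, 0.3                   # sample size, true kink location
x = rng.uniform(-1, 1, n)
y = 1.0 * np.minimum(x - tau0, 0) + 2.5 * np.maximum(x - tau0, 0) \
    + 0.2 * rng.standard_normal(n)    # continuous kink at tau0

def sse(tau):
    # fit the two-slope continuous model for a candidate threshold
    X = np.column_stack([np.ones(n), np.minimum(x - tau, 0),
                         np.maximum(x - tau, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

grid = np.linspace(-0.8, 0.8, 161)
tau_hat = grid[int(np.argmin([sse(t) for t in grid]))]
print(f"tau_hat = {tau_hat:.3f}")     # close to 0.3
```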
  17. By: Timo Dimitriadis; Lutz Duembgen; Alexander Henzi; Marius Puke; Johanna Ziegel
    Abstract: Probability predictions from binary regressions or machine learning methods ought to be calibrated: If an event is predicted to occur with probability $x$, it should materialize with approximately that frequency, which means that the so-called calibration curve $p(x)$ should equal the bisector for all $x$ in the unit interval. We propose honest calibration assessment based on novel confidence bands for the calibration curve, which are valid only subject to the natural assumption of isotonicity. Besides testing the classical goodness-of-fit null hypothesis of perfect calibration, our bands facilitate inverted goodness-of-fit tests whose rejection allows for the sought-after conclusion of a sufficiently well specified model. We show that our bands have a finite sample coverage guarantee, are narrower than existing approaches, and adapt to the local smoothness and variance of the calibration curve $p$. In an application to model predictions of an infant having a low birth weight, the bounds give informative insights on model calibration.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.04065&r=
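    The isotonicity assumption can be put to work directly: the sketch below estimates the calibration curve $p(x)$ by isotonic regression of binary outcomes on predicted probabilities. The confidence bands themselves require the authors' construction and are omitted here.
```python
# Isotonic estimate of a calibration curve p(x) for binary predictions.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(9)
n = 1000
x = rng.uniform(size=n)               # predicted probabilities
p_true = x ** 1.3                     # a miscalibrated model
y = rng.binomial(1, p_true)           # realized binary outcomes

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
p_hat = iso.fit_transform(x, y)       # monotone estimate of p(x)
# Perfect calibration corresponds to p(x) = x (the bisector).
print("max |p_hat(x) - x| =", np.abs(p_hat - x).max())
```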
  18. By: Daniel Czarnowske
    Abstract: I present a new estimation procedure for production functions with latent group structures. I consider production functions that are heterogeneous across groups but time-homogeneous within groups, and where the group membership of the firms is unknown. My estimation procedure is fully data-driven and embeds recent identification strategies from the production function literature into the classifier-Lasso. Simulation experiments demonstrate that firms are assigned to their correct latent group with probability close to one. I apply my estimation procedure to a panel of Chilean firms and find sizable differences in the estimates compared to the standard approach of classification by industry.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.02220&r=
  19. By: Rahul Singh; Vasilis Syrgkanis
    Abstract: We extend the idea of automated debiased machine learning to the dynamic treatment regime. We show that the multiply robust formula for the dynamic treatment regime with discrete treatments can be re-stated in terms of a recursive Riesz representer characterization of nested mean regressions. We then apply a recursive Riesz representer estimation algorithm that estimates de-biasing corrections without needing to characterize what the correction terms look like (for instance, products of inverse probability weighting terms), as is done in prior work on doubly robust estimation in the dynamic regime. Our approach defines a sequence of loss minimization problems whose minimizers are the multipliers of the de-biasing correction, hence circumventing the need to solve auxiliary propensity models and directly optimizing for the mean squared error of the target de-biasing correction.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.13887&r=
  20. By: Martin Cousineau; Vedat Verter; Susan A. Murphy; Joelle Pineau
    Abstract: In the absence of randomized controlled and natural experiments, it is necessary to balance the distributions of (observable) covariates of the treated and control groups in order to obtain an unbiased estimate of a causal effect of interest; otherwise, a different effect size may be estimated, and incorrect recommendations may be given. A wide variety of methods exist to achieve this balance. In particular, several methods based on optimization models have recently been proposed in the causal inference literature. While these optimization-based methods have empirically shown an improvement over a limited number of other causal inference methods in their relative ability to balance the distributions of covariates and to estimate causal effects, they have not been thoroughly compared to each other or to other noteworthy causal inference methods. In addition, we believe there are several unaddressed opportunities for operational researchers to contribute their advanced knowledge of optimization, for the benefit of the applied researchers who use causal inference tools. In this review paper, we present an overview of the causal inference literature, describe the optimization-based causal inference methods in more detail, provide a comparative analysis of the prevailing optimization-based methods, and discuss opportunities for new methods.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.00097&r=
  21. By: Ravi Kashyap
    Abstract: We provide a justification for why, and when, endogeneity will not cause bias in the interpretation of the coefficients in a regression model. This technique can be a viable alternative to, or even used alongside, the instrumental variable method. We show that when performing any comparative study, it is possible to measure the true change in the coefficients under a broad set of conditions. Our results hold, as long as the product of the covariance structure between the explanatory variables and the covariance between the error term and the explanatory variables are equal, within the same system at different time periods or across multiple systems at the same point in time.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.14255&r=
  22. By: Jun Lu; Shao Yi
    Abstract: The SVR-GARCH model tends to "backward eavesdrop" when forecasting financial time series volatility: it tends to produce predictions that deviate only slightly from the previous volatility. Although the SVR-GARCH model achieves good performance on various performance measures, trading opportunities and peak or trough behaviors in the time series are all hampered by its underestimating or overestimating of the volatility. We propose a blending ARCH (BARCH) and an augmented BARCH (aBARCH) model to overcome this problem and move the predictions towards better peak or trough behavior. The method is illustrated using real data sets, including SH300 and S&P500. The empirical results obtained suggest that the augmented and blending models improve the volatility forecasting ability.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.12456&r=
  23. By: Dalia Ghanem; Sarojini Hirshleifer; Desire Kedagni; Karen Ortiz-Becerra
    Abstract: Attrition is a common and potentially important threat to internal validity in treatment effect studies. We extend the changes-in-changes approach to identify the average treatment effect for respondents and the entire study population in the presence of attrition. Our method can be applied to randomized experiments as well as difference-in-difference designs. A simulation experiment points to the advantages of this approach relative to one of the most commonly used approaches in the literature, inverse probability weighting. Those advantages are further illustrated with an application to a large-scale randomized experiment.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.12740&r=
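    For reference, below is a sketch of the classic changes-in-changes counterfactual (Athey and Imbens, 2006) that the paper extends. Attrition in this toy example is random by construction, which is exactly the easy case; the paper's correction for non-random attrition is not reproduced.
```python
# Classic changes-in-changes ATT via the control group's quantile map.
import numpy as np

def cic_att(y00, y01, y10, y11):
    """ATT via CiC: transport treated baseline outcomes through the
    control group's period-0 -> period-1 quantile transformation."""
    ranks = np.searchsorted(np.sort(y00), y10, side="right") / y00.size
    ranks = np.clip(ranks, 1e-6, 1 - 1e-6)
    y10_cf = np.quantile(y01, ranks)   # counterfactual untreated outcomes
    return y11.mean() - y10_cf.mean()

rng = np.random.default_rng(5)
y00 = rng.normal(0.0, 1.0, 500)               # control, period 0
y01 = rng.normal(0.5, 1.0, 450)               # control, period 1 (attrition)
y10 = rng.normal(0.2, 1.0, 500)               # treated, period 0
y11 = rng.normal(0.2 + 0.5 + 1.0, 1.0, 420)   # treated, period 1; effect 1.0
print(cic_att(y00, y01, y10, y11))            # close to 1.0 here
```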
  24. By: Alberto Abadie; Jaume Vives-i-Bastida
    Abstract: In this article we propose a set of simple principles to guide empirical practice in synthetic control studies. The proposed principles follow from formal properties of synthetic control estimators, and pertain to the nature, implications, and prevention of over-fitting biases within a synthetic control framework, to the interpretability of the results, and to the availability of validation exercises. We discuss and visually demonstrate the relevance of the proposed principles under a variety of data configurations.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.06279&r=
  25. By: Zeidman, Peter; Friston, Karl (University College London); Parr, Thomas
    Abstract: This article details a scheme for approximate Bayesian inference, which has underpinned thousands of neuroimaging studies since its introduction 15 years ago. Variational Laplace (VL) provides a generic approach for fitting linear or non-linear models, which may be static or dynamic, returning a posterior probability density over the model parameters and an approximation of log model evidence, which enables Bayesian model comparison. VL applies variational Bayesian inference in conjunction with quadratic or Laplace approximations of the evidence lower bound (free energy). Importantly, update equations do not need to be derived for each model under consideration, providing a general method for fitting a broad class of models. This primer is intended for experimenters and modellers who may wish to fit models to data using variational Bayesian methods, without assuming previous experience of variational Bayes or machine learning. Accompanying code demonstrates how to fit different kinds of model using the reference implementation of the VL scheme in the open-source Statistical Parametric Mapping (SPM) software package. In addition, we provide a standalone software function that does not require SPM, in order to ease translation to other fields, together with detailed pseudocode. Finally, the supplementary materials provide worked derivations of the key equations.
    Date: 2022–04–08
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:28vwh&r=
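    The quadratic approximation at the heart of the scheme is easy to show in miniature: approximate a posterior by a Gaussian centred at the mode, with covariance equal to the inverse curvature of the negative log-joint. This sketch covers the Laplace step only, on a toy conjugate model, not the full variational scheme implemented in SPM.
```python
# Laplace approximation of a posterior: Gaussian at the mode, covariance
# from the inverse Hessian of the negative log-joint (toy model).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
y = rng.normal(1.5, 1.0, size=50)     # data with known unit noise variance
prior_var = 10.0

def neg_log_joint(theta):
    # Gaussian likelihood (unit variance) plus N(0, prior_var) prior
    return 0.5 * np.sum((y - theta) ** 2) + 0.5 * theta ** 2 / prior_var

res = minimize(lambda t: neg_log_joint(t[0]), x0=[0.0])
mode = res.x[0]
hessian = y.size + 1.0 / prior_var    # analytic curvature for this model
print(f"Laplace posterior: N({mode:.3f}, {1.0 / hessian:.4f})")
```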
  26. By: Jase Clarkson; Mihai Cucuringu; Andrew Elliott; Gesine Reinert
    Abstract: In this work, we introduce DAMNETS, a deep generative model for Markovian network time series. Time series of networks are found in many fields such as trade or payment networks in economics, contact networks in epidemiology or social media posts over time. Generative models of such data are useful for Monte-Carlo estimation and data set expansion, which is of interest for both data privacy and model fitting. Using recent ideas from the Graph Neural Network (GNN) literature, we introduce a novel GNN encoder-decoder structure in which an encoder GNN learns a latent representation of the input graph, and a decoder GNN uses this representation to simulate the network dynamics. We show using synthetic data sets that DAMNETS can replicate features of network topology across time observed in the real world, such as changing community structure and preferential attachment. DAMNETS outperforms competing methods on all of our measures of sample quality over several real and synthetic data sets.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.15009&r=
  27. By: Harrison H. Li; Art B. Owen
    Abstract: In a regression discontinuity design, subjects with a running variable $x$ exceeding a threshold $t$ receive a binary treatment while those with $x\le t$ do not. When the investigator can randomize the treatment, a tie-breaker design allows for greater statistical efficiency. Our setting has random $x\sim F$, a working model where the response satisfies a two line regression model, and two economic constraints. One constraint is on the expected proportion of treated subjects and the other is on how treatment correlates with $x$, to express the strength of a preference for treating subjects with higher $x$. Under these conditions we show that there always exists an optimal design with treatment probabilities piecewise constant in $x$. It is natural to require these treatment probabilities to be non-decreasing in $x$; under this constraint, we find an optimal design requires just two probability levels, when $F$ is continuous. By contrast, a typical tie-breaker design as in Owen and Varian (2020) uses a three level design with fixed treatment probabilities $0$, $0.5$ and $1$. We find large efficiency gains for our optimal designs compared to using those three levels when fewer than half of the subjects are to be treated, or $F$ is not symmetric. Our methods easily extend to the fixed $x$ design problem and can optimize for any efficiency metric that is a continuous functional of the information matrix in the two-line regression. We illustrate the optimal designs with a data example based on Head Start, a U.S. government early-childhood intervention program.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.12511&r=
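    A two-level monotone design of the kind characterized in the paper is simple to implement. In the sketch below the cutoff and the two probability levels are arbitrary illustrative values, not outputs of the paper's optimization.
```python
# Two-level tie-breaker assignment: treatment probability piecewise
# constant and non-decreasing in the running variable x.
import numpy as np

rng = np.random.default_rng(10)
x = rng.standard_normal(10_000)          # running variable, x ~ F
c, p_lo, p_hi = 0.0, 0.1, 0.6            # hypothetical design parameters
p_treat = np.where(x <= c, p_lo, p_hi)   # just two probability levels
z = rng.binomial(1, p_treat)             # randomized treatment

print("treated share:", z.mean())                # expected-proportion constraint
print("corr(z, x):", np.corrcoef(z, x)[0, 1])    # preference for higher x
```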
  28. By: Lotfi Boudabsa (Ecole Polytechnique Fédérale de Lausanne - School of Basic Sciences); Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute)
    Abstract: We introduce an ensemble learning method for dynamic portfolio valuation and risk management building on regression trees. We learn the dynamic value process of a derivative portfolio from a finite sample of its cumulative cash flow. The estimator is given in closed form. The method is fast and accurate, and scales well with sample size and path space dimension. The method can also be applied to Bermudan-style options. Numerical experiments show good results in moderate dimension problems.
    Keywords: dynamic portfolio valuation, ensemble learning, gradient boosting, random forest, regression trees, risk management, Bermudan options
    Date: 2022–04
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2230&r=
  29. By: Baishuai Zuo; Chuancun Yin
    Abstract: In this paper, we focus on the multivariate doubly truncated first two moments of generalized skew-elliptical (GSE) distributions and derive explicit expressions for them. The family includes many useful distributions, for example, the generalized skew-normal (GSN), generalized skew-Laplace (GSLa), generalized skew-logistic (GSLo) and generalized skew-Student-$t$ (GSSt) distributions, all as special cases. We also give formulas for the multivariate doubly truncated expectation and covariance of GSE distributions. As applications, we present results for the multivariate tail conditional expectation (MTCE) and multivariate tail covariance (MTCov) of GSE distributions.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.00839&r=
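    The objects studied have a direct Monte-Carlo counterpart. The sketch below checks doubly truncated first and second moments for a bivariate normal, a special case of the GSE family for which the paper gives closed-form expressions.
```python
# Monte-Carlo doubly truncated moments for a bivariate normal.
import numpy as np

rng = np.random.default_rng(7)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
a, b = np.array([-1.0, -1.0]), np.array([1.5, 2.0])   # truncation box

Z = rng.multivariate_normal(mu, Sigma, size=1_000_000)
inside = np.all((Z >= a) & (Z <= b), axis=1)          # doubly truncated region
print("P(a <= X <= b):", inside.mean())
print("E[X | a <= X <= b]:", Z[inside].mean(axis=0))
print("Cov[X | a <= X <= b]:\n", np.cov(Z[inside], rowvar=False))
```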
  30. By: Den Haan, Wouter J.
    Abstract: The techniques proposed in Papp and Reiter (2020) allow the use of cross-sectional and aggregate data observed at different frequencies in the estimation of dynamic stochastic macroeconomic models. However, the question is whether the technique is getting ahead of what is sensible in terms of currently available empirical strategies to estimate macroeconomic models, which are – without exception – misspecified.
    Keywords: heterogeneous agents; misspecification; solution techniques
    JEL: J1
    Date: 2020–06–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:103971&r=

This nep-ecm issue is ©2022 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.