
nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒08‒31
27 papers chosen by
Sune Karlsson
Örebro University

  1. A Test for Endogeneity in Conditional Quantiles By Tae-Hwan Kim; Christophe Muller
  2. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory By Nima Nonejad
  3. First Difference Transformation in Panel VAR models: Robustness, Estimation and Inference By Arturas Juodis
  4. Reworking Wild Bootstrap Based Inference for Clustered Errors By Matthew D. Webb
  5. Estimation and Inference in Univariate and Multivariate Log-GARCH-X Models When the Conditional Density is Unknown By Sucarrat, Genaro; Grønneberg, Steffen; Escribano, Alvaro
  6. Set Identification of Generalized Linear Predictors in the Presence of Non-Classical Measurement Errors By Kaspar Wüthrich
  7. Long Memory and Structural Breaks in Realized Volatility: An Irreversible Markov Switching Approach By Nima Nonejad
  8. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox By Nima Nonejad
  9. Extremum Sieve Estimation in k-out-of-n Systems By Tatiana Komarova
  10. Bias correcting adjustment coefficients in a cointegrated VAR with known cointegrating vectors By Kees Jan van Garderen; H. Peter Boswijk
  11. Outcome conditioned treatment effects By Stefan Hoderlein; Yuya Sasaki
  12. Bayesian Inference and Model Comparison for Random Choice Structures By William J. McCausland; A.A.J. Marley
  13. On the Estimation of Skewed Geometric Stable Distributions By Halvarsson, Daniel
  14. Better than Random: Weighted Least Squares Meta-Regression Analysis By T.D. Stanley; Hristos Doucouliagos
  15. A Note on Modelling Dynamics in Happiness Estimations By Piper, Alan
  16. Nonparametric estimation of the conditional distribution in a discrete-time stochastic volatility model By Roland Langrock; Théo Michelot; Alexander Sohn; Thomas Kneib
  17. Volatility and Liquidity Costs By Selma Chaker
  18. Tail probabilities and partial moments for quadratic forms in multivariate generalized hyperbolic random vectors By Simon A. Broda
  19. Which continuous-time model is most appropriate for exchange rates? By Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely
  20. Macroeconomic factors strike back: A Bayesian change-point model of time-varying risk exposures and premia in the U.S. cross-section By Daniele Bianchi; Massimo Guidolin; Francesco Ravazzolo
  21. Refined Multifractal Cross-Correlation Analysis By Paweł Oświęcimka; Stanisław Drożdż; Marcin Forczek; Stanisław Jadach; Jarosław Kwapień
  22. Are we in a bubble? A simple time-series-based diagnostic By Franses, Ph.H.B.F.
  23. Regression analysis of country effects using multilevel data: A cautionary tale By Bryan, Mark L.; Jenkins, Stephen P.
  24. An argument for preferring Firth bias-adjusted estimates in aggregate and individual-level discrete choice modeling By Kessels, Roselinde; Jones, Bradley; Goos, Peter
  25. Efficient Approximation of the Spatial Covariance Function for Large Datasets - Analysis of Atmospheric CO2 Concentrations By Patrick Gneuss; Wolfgang Schmid; Reimund Schwarze
  26. Risk Preferences and Estimation Risk in Portfolio Choice By Hao Liu; Winfried Pohlmeier
  27. ECB monetary policy surprises: identification through cojumps in interest rates By Lars Winkelmann; Markus Bibinger; Tobias Linzert

  1. By: Tae-Hwan Kim (School of Economics, Yonsei University - Yonsei University); Christophe Muller (AMSE - Aix-Marseille School of Economics - Aix-Marseille Univ. - Centre national de la recherche scientifique (CNRS) - École des Hautes Études en Sciences Sociales [EHESS] - Ecole Centrale Marseille (ECM))
    Abstract: In this paper, we develop a test to detect the presence of endogeneity in conditional quantiles. Our test is a Hausman-type test based on the distance between two estimators, one of which is consistent only under no endogeneity while the other is consistent regardless of whether endogeneity is present in the conditional quantile model. We derive the asymptotic distribution of the test statistic under the null hypothesis of no endogeneity. The finite-sample properties of the test are investigated through Monte Carlo simulations, and the test is found to show good size and power properties in finite samples. Unlike the test based on the IVQR estimator of Chernozhukov and Hansen (2006), our approach does not entail an infeasible computation time when more than a couple of variables are involved. Finally, we apply our approach to test for endogeneity in conditional quantile models for estimating Engel curves using UK consumption and expenditure data. The pattern of endogeneity in the Engel curve is found to vary substantially across quantiles.
    Keywords: regression quantile; endogeneity; two-stage estimation; Hausman test; Engel curve
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00854527&r=ecm
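    A minimal, illustrative Python sketch of the structure of such a Hausman-type test (not the authors' estimators): a plain quantile regression, consistent only under exogeneity, is contrasted with a control-function-style two-stage fit standing in for an endogeneity-robust estimator, and the contrast is studentized with a bootstrap variance. All data and names are hypothetical.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg
      from scipy.stats import chi2

      rng = np.random.default_rng(0)
      n, tau = 1000, 0.5
      z = rng.normal(size=n)                      # instrument
      u = rng.normal(size=n)
      x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
      y = 1.0 + 1.0 * x + u

      def slope_contrast(y, x, z):
          # estimator consistent only under no endogeneity
          b0 = QuantReg(y, sm.add_constant(x)).fit(q=tau).params[1]
          # control-function stand-in: consistent under endogeneity
          v = x - sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
          b1 = QuantReg(y, sm.add_constant(np.column_stack([x, v]))).fit(q=tau).params[1]
          return b1 - b0                          # near zero under exogeneity

      d = slope_contrast(y, x, z)
      boot = []
      for _ in range(99):                         # bootstrap variance of the contrast
          i = rng.integers(0, n, n)
          boot.append(slope_contrast(y[i], x[i], z[i]))
      H = d**2 / np.var(boot)                     # Hausman-type chi-square statistic
      print(f"H = {H:.2f}, p = {chi2.sf(H, df=1):.4f}")
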
  2. By: Nima Nonejad (Aarhus University and CREATES)
    Abstract: We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Carlo simulations evaluate the properties of the estimation procedures. Results show that the proposed model is viable and flexible for purposes of forecasting volatility. Model uncertainty is accounted for by employing Bayesian model averaging. Bayesian model averaging provides very competitive forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications.
    Keywords: Mixture innovation models, Markov chain Monte Carlo, Realized volatility
    JEL: C11 C22 C51 C53
    Date: 2013–08–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-24&r=ecm
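    For reference, a minimal sketch of the baseline HAR-RV regression (Corsi, 2009) that the paper extends; the mixture-innovation state space machinery itself is not shown, and the realized-variance series below is simulated:

      import numpy as np

      def har_design(rv, weekly=5, monthly=22):
          """Build y and X = [1, daily, weekly, monthly] lags for HAR-RV."""
          y, rows = [], []
          for t in range(monthly, len(rv)):
              rows.append([1.0, rv[t - 1],
                           rv[t - weekly:t].mean(),
                           rv[t - monthly:t].mean()])
              y.append(rv[t])
          return np.array(y), np.array(rows)

      rng = np.random.default_rng(0)
      rv = np.exp(rng.normal(size=1000))    # placeholder realized-variance series
      y, X = har_design(rv)
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
      print(beta)
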
  3. By: Arturas Juodis
    Abstract: This paper considers estimation of Panel Vector Autoregressive Models of order 1 (PVAR(1)) with possible cross-sectional heteroscedasticity in the error terms. We focus on fixed-T consistent estimation methods in first differences (FD), with or without additional strictly exogenous regressors. Additional results for the panel FD OLS estimator and the FDLS estimator of Han and Phillips (2010) are provided. In the covariance stationary case it is shown that the univariate moment conditions of the latter estimator are violated for general parameter matrices in the multivariate case. Furthermore, we simplify the analysis of Binder, Hsiao, and Pesaran (2005) by providing analytical results for the first two derivatives of the Transformed Maximum Likelihood (TML) function. We extend the original model by taking into account possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. Moreover, we show that in the three-wave panel the log-likelihood function of the unrestricted TML estimator violates the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study. Results indicate that under effect stationarity the TML estimator encounters problems with global identification even for moderate values of T.
    Date: 2013–06–05
    URL: http://d.repec.org/n?u=RePEc:ame:wpaper:1306&r=ecm
  4. By: Matthew D. Webb (University of Calgary)
    Abstract: Many empirical projects are well suited to incorporating a linear difference-in-differences research design. While estimation is straightforward, reliable inference can be a challenge. Past research has not only demonstrated that estimated standard errors are biased dramatically downwards in models possessing a group clustered design, but has also suggested a number of bootstrap-based improvements to the inference procedure. In this paper, I first demonstrate, using Monte Carlo experiments, that these bootstrap-based procedures and traditional cluster-robust standard errors perform poorly in situations with fewer than eleven clusters, a setting faced in many empirical applications. With few clusters, the wild cluster bootstrap-t procedure results in p-values that are not point identified. I subsequently introduce two easy-to-implement alternative procedures that involve the wild bootstrap. Further Monte Carlo simulations provide evidence that the use of a 6-point distribution with the wild bootstrap can improve the reliability of inference.
    Keywords: CRVE, grouped data, clustered data, panel data, wild bootstrap, cluster wild bootstrap
    JEL: C15 C21 C23
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1315&r=ecm
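    A sketch of the wild cluster bootstrap with the 6-point weight distribution of the paper (weights -sqrt(1.5), -1, -sqrt(0.5), sqrt(0.5), 1, sqrt(1.5), each with probability 1/6). For brevity a percentile-style p-value replaces the full bootstrap-t procedure (no cluster-robust studentization and no null imposed), so this only illustrates the resampling scheme:

      import numpy as np

      rng = np.random.default_rng(1)
      G, n_g = 8, 50                               # few clusters, as in the paper
      cluster = np.repeat(np.arange(G), n_g)
      x = rng.normal(size=G * n_g) + rng.normal(size=G)[cluster]
      y = rng.normal(size=G)[cluster] + rng.normal(size=G * n_g)  # true slope 0
      X = np.column_stack([np.ones_like(x), x])

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      beta = ols(X, y)
      resid = y - X @ beta
      six_point = np.array([-np.sqrt(1.5), -1.0, -np.sqrt(0.5),
                            np.sqrt(0.5), 1.0, np.sqrt(1.5)])

      boot = []
      for _ in range(999):
          w = rng.choice(six_point, size=G)        # one weight per cluster
          y_star = X @ beta + resid * w[cluster]   # wild bootstrap DGP
          boot.append(ols(X, y_star)[1])
      b = np.array(boot) - beta[1]
      p = np.mean(np.abs(b) >= np.abs(beta[1]))    # two-sided p-value for slope = 0
      print(f"slope = {beta[1]:.3f}, bootstrap p = {p:.3f}")
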
  5. By: Sucarrat, Genaro; Grønneberg, Steffen; Escribano, Alvaro
    Abstract: Exponential models of Autoregressive Conditional Heteroscedasticity (ARCH) enable richer dynamics (e.g. contrarian or cyclical), provide greater robustness to jumps and outliers, and guarantee the positivity of volatility. The latter is not guaranteed in ordinary ARCH models, in particular when additional exogenous or predetermined variables ("X") are included in the volatility specification. Here, we propose estimation and inference methods, via (V)ARMA-X representations, for univariate and multivariate Generalised log-ARCH-X (i.e. log-GARCH-X) models when the conditional density is unknown. The multivariate specification allows for volatility feedback across equations, and time-varying correlations can be fitted in a subsequent step. Finally, our empirical applications on electricity prices show that the model class is particularly useful when the X-vector is high-dimensional.
    Keywords: ARCH, exponential GARCH, log-GARCH, ARMA-X, Multivariate GARCH
    JEL: C22 C32 C51 C52
    Date: 2013–08–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:49344&r=ecm
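    A univariate illustration of the ARMA representation the abstract exploits: a log-GARCH(1,1), log s2_t = omega + alpha*log y2_{t-1} + beta*log s2_{t-1}, implies that log y2_t follows an ARMA(1,1) with AR coefficient alpha+beta and MA coefficient -beta, so the parameters can be recovered without knowing the conditional density. The tiny constant added before taking logs is a crude stand-in for the paper's treatment of zero returns:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      T, omega, alpha, beta = 2000, -0.1, 0.05, 0.9
      logs2, y = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          logs2[t] = omega + alpha * np.log(y[t-1]**2 + 1e-12) + beta * logs2[t-1]
          y[t] = np.exp(logs2[t] / 2) * rng.standard_normal()

      ly2 = np.log(y[1:]**2 + 1e-12)         # log squared returns
      res = ARIMA(ly2, order=(1, 0, 1)).fit()
      ar1, ma1 = res.arparams[0], res.maparams[0]
      print(f"alpha+beta ~ {ar1:.3f}, beta ~ {-ma1:.3f}, alpha ~ {ar1 + ma1:.3f}")
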
  6. By: Kaspar Wüthrich
    Abstract: This paper studies the identification of coefficients in generalized linear predictors where the outcome variable suffers from non-classical measurement errors. Combining a mixture model of data errors with the bounding procedure proposed by Stoye (2007), I derive bounds on the coefficient vector under different non-parametric assumptions about the structure of the measurement error. The method is illustrated by analyzing a simple earnings equation.
    Keywords: Generalized linear predictor; Non-classical measurement error; Contaminated sampling; Corrupt sampling; Multiplicative mean independence; Stochastic dominance; Nonparametric bounds
    JEL: C2 C21 J24
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1304&r=ecm
  7. By: Nima Nonejad (Aarhus University and CREATES)
    Abstract: This paper proposes a model that simultaneously captures long memory and structural breaks. We model structural breaks through irreversible Markov switching or so-called change-point dynamics. The parameters subject to structural breaks and the unobserved states which determine the position of the structural breaks are sampled from the joint posterior density by sampling from their respective conditional posteriors using Gibbs sampling and Metropolis-Hastings. Monte Carlo simulations demonstrate that the proposed estimation approach is effective in identifying and dating structural breaks. Applied to daily S&P 500 data, one finds strong evidence of three structural breaks. The evidence of these breaks is robust to different specifications including a GARCH specification for the conditional variance of volatility.
    Keywords: Long memory, Structural breaks, Change-points, Gibbs sampling
    JEL: C22 C11 C52 G10
    Date: 2013–08–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-26&r=ecm
  8. By: Nima Nonejad (Aarhus University and CREATES)
    Abstract: This paper details particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a compelling, computationally fast and efficient framework for estimation. These advantages are exploited, for instance, to estimate stochastic volatility models with a leverage effect or with Student-t distributed errors. We also model the changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where the heteroskedasticity is specified by means of a Gaussian stochastic volatility process.
    Keywords: Particle filter, Metropolis-Hastings, Unobserved components, Bayes
    JEL: C22 C11 C63
    Date: 2013–08–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-27&r=ecm
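    A minimal bootstrap particle filter for a basic stochastic volatility model, the building block that PMCMC plugs into a Metropolis-Hastings sampler (the returned log-likelihood estimate would enter the acceptance ratio). Parameter names mu, phi, sigma and all values are illustrative:

      import numpy as np
      from scipy.stats import norm

      def pf_loglik(y, mu, phi, sigma, N=500, seed=0):
          rng = np.random.default_rng(seed)
          # start particles from the stationary distribution of the state
          x = rng.normal(mu, sigma / np.sqrt(1 - phi**2), size=N)
          ll = 0.0
          for obs in y:
              x = mu + phi * (x - mu) + sigma * rng.standard_normal(N)  # propagate
              logw = norm.logpdf(obs, scale=np.exp(x / 2))              # weight
              m = logw.max()
              w = np.exp(logw - m)
              ll += m + np.log(w.mean())                # accumulate log-likelihood
              x = rng.choice(x, size=N, p=w / w.sum())  # multinomial resampling
          return ll

      rng = np.random.default_rng(3)
      T, mu, phi, sigma = 300, -1.0, 0.95, 0.2
      x = np.full(T, mu)
      for t in range(1, T):
          x[t] = mu + phi * (x[t-1] - mu) + sigma * rng.standard_normal()
      y = np.exp(x / 2) * rng.standard_normal(T)
      print(pf_loglik(y, mu, phi, sigma))
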
  9. By: Tatiana Komarova
    Abstract: The paper considers nonparametric estimation of absolutely continuous distribution functions of lifetimes of non-identical components in k-out-of-n systems from the observed "autopsy" data. In economics, ascending "button" or "clock" auctions with n heterogeneous bidders present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. The paper illustrates the suggested estimation method by using sieve spaces of Bernstein polynomials which allow an easy implementation of constraints on the monotonicity of estimated distribution functions.
    Keywords: k-out-of-n systems, competing risks, sieve estimation, Bernstein polynomials
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/564&r=ecm
  10. By: Kees Jan van Garderen; H. Peter Boswijk
    Abstract: The maximum likelihood estimator of the adjustment coefficient in a cointegrated vector autoregressive model (CVAR) is generally biased. For the case where the cointegrating vector is known in a first-order CVAR with no intercept, we derive a condition for the unbiasedness of the maximum likelihood estimator of the adjustment coefficients, and provide a simple characterization of the bias in case this condition is violated. A feasible bias correction method is shown to virtually eliminate the bias over a large part of the parameter space.
    Date: 2013–06–04
    URL: http://d.repec.org/n?u=RePEc:ame:wpaper:1305&r=ecm
  11. By: Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Yuya Sasaki
    Abstract: This paper introduces average treatment effects conditional on the outcome variable in an endogenous setup where outcome Y, treatment X and instrument Z are continuous. These objects allow one to refine well-studied treatment effects like the ATE and ATT in the case of continuous treatment (see Florens et al (2009)) by breaking them up according to the rank of the outcome distribution. For instance, in the returns-to-schooling case, the outcome conditioned average treatment effect on the treated (ATTO) gives the average effect of a small increase in schooling on the subpopulation characterised by a certain treatment intensity, say 16 years of schooling, and a certain rank in the wage distribution. We show that IV-type approaches are better suited to identify overall averages across the population, like the average partial effect or outcome conditioned versions thereof, while selection-type methods are better suited to identify the ATT or ATTO. Importantly, none of the identification relies on rectangular support of the errors in the identification equation. Finally, we apply all concepts to analyse the nonlinear heterogeneous effects of smoking during pregnancy on infant birth weight.
    Keywords: Continuous treatment, average treatment effect on the treated, marginal treatment effect, average partial effect, local instrumental variables, nonseparable model, endogeneity, quantiles
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:39/13&r=ecm
  12. By: William J. McCausland; A.A.J. Marley
    Abstract: We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate by analyzing data from a previously reported experiment and report evidence for and against various axioms.
    Keywords: Random utility, discrete choice, Bayesian inference, MCMC
    JEL: C11 C35 C53 D01
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:mtl:montec:07-2013&r=ecm
  13. By: Halvarsson, Daniel (Ratio)
    Abstract: The increasing interest in the application of geometric stable distributions has led to a need for appropriate estimators. Building on recent procedures for estimating the Linnik distribution, this paper develops two estimators for the geometric stable distribution. Closed-form expressions are provided for the signed and unsigned fractional moments of the distribution. The estimators are then derived using the method of fractional lower order moments and that of logarithmic moments. Their performance is tested on simulated data, where the lower order estimators, in particular, are found to give efficient results over most of the parameter space.
    Keywords: Geometric stable distribution; Estimation; Fractional lower order moments; Logarithmic moments; Economics
    JEL: C00
    Date: 2013–08–21
    URL: http://d.repec.org/n?u=RePEc:hhs:ratioi:0216&r=ecm
  14. By: T.D. Stanley; Hristos Doucouliagos
    Abstract: This paper revisits and challenges two widely accepted practices in multiple meta-regression analysis: the prevalent use of random-effects meta-regression analysis (RE-MRA) and the correction of standard errors from fixed-effects meta-regression analysis (FE-MRA). Specifically, we investigate the bias of RE-MRA when there is publication selection bias and compare RE-MRA with an alternative weighted least squares meta-regression analysis (WLS-MRA). Simulations and statistical theory show that multiple WLS-MRA provides improved estimates of meta-regression coefficients and their confidence intervals when there is no publication bias. When there is publication selection bias, WLS-MRA dominates RE-MRA, especially when there is additive excess heterogeneity. WLS-MRA is also compared to FE-MRA, where conventional wisdom is to correct the standard errors by dividing by √MSE. We demonstrate why it is better not to make this correction.
    Keywords: meta-regression, weighted least squares, random-effects, fixed-effects
    Date: 2013–08–17
    URL: http://d.repec.org/n?u=RePEc:dkn:econwp:eco_2013_2&r=ecm
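    A minimal sketch of the WLS-MRA idea on simulated meta-data: precision weights 1/SE^2, no additive between-study variance component, and no division of the standard errors by sqrt(MSE). The moderator variable and all values are hypothetical:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      K = 100                                    # number of primary estimates
      se = rng.uniform(0.05, 0.5, size=K)        # reported standard errors
      moderator = rng.normal(size=K)
      effect = 0.3 + 0.1 * moderator + rng.normal(scale=se)  # effect + sampling error

      X = sm.add_constant(moderator)
      wls = sm.WLS(effect, X, weights=1.0 / se**2).fit()
      print(wls.params, wls.bse)                 # unscaled WLS-MRA estimates
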
  15. By: Piper, Alan
    Abstract: This short note discusses two alternative ways to model dynamics in happiness regressions. As explained, this may be important when standard fixed-effects estimates exhibit serial correlation in the residuals, but it is also potentially useful for providing new insights in the economics of happiness area even when serial correlation is not a problem. The two ways of modelling dynamics the note discusses are via a lagged dependent variable and via an AR(1) process. The usefulness and statistical appropriateness of each is discussed with reference to happiness. Finally, a flow chart is provided summarising the key decisions regarding the choice, and potential necessity, of modelling dynamics.
    Keywords: Happiness, Dynamics, Lagged Dependent Variable, AR(1) process, Estimation
    JEL: C23 C50 I31
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:49364&r=ecm
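    The two options the note discusses, sketched on a single simulated series for concreteness (the note's fixed-effects panel setting is more involved): (a) a lagged dependent variable, (b) an AR(1) error process estimated by iterated feasible GLS:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      T = 500
      x = rng.normal(size=T)
      e = np.zeros(T)
      for t in range(1, T):
          e[t] = 0.5 * e[t-1] + rng.normal()     # AR(1) errors
      y = 1.0 + 0.8 * x + e

      # (a) lagged dependent variable
      X_ldv = sm.add_constant(np.column_stack([y[:-1], x[1:]]))
      print(sm.OLS(y[1:], X_ldv).fit().params)

      # (b) AR(1) errors via iterated Cochrane-Orcutt-type GLS
      print(sm.GLSAR(y, sm.add_constant(x), rho=1).iterative_fit(maxiter=10).params)
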
  16. By: Roland Langrock; Théo Michelot; Alexander Sohn; Thomas Kneib
    Abstract: Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. A substantial body of research deals with various techniques for fitting relatively basic SV models, which assume the returns to be conditionally normally distributed or Student-t-distributed, given the volatility. In this manuscript, we consider a frequentist framework for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape. More specifically, we suggest representing the density of the conditional distribution as a linear combination of standardized B-spline basis functions, imposing a penalty term in order to arrive at a good balance between goodness of fit and smoothness. This allows us to employ the efficient hidden Markov model machinery in order to fit the model and to assess its predictive performance. We demonstrate the feasibility of the approach in a simulation study before applying it to three series of returns on stocks and one series of stock index returns. The nonparametric approach leads to an improved predictive capacity in some cases, and we find evidence for the conditional distributions being leptokurtic and negatively skewed.
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1308.5836&r=ecm
  17. By: Selma Chaker
    Abstract: Observed high-frequency prices are contaminated with liquidity costs or market microstructure noise. Using such data, we derive a new asset return variance estimator inspired by the market microstructure literature to explicitly model the noise and remove it from observed returns before estimating their variance. The returns adjusted for the estimated liquidity costs are either totally or partially free from noise. If the liquidity costs are fully removed, the sum of squared high-frequency returns – which would be inconsistent for return variance when based on observed returns – becomes a consistent variance estimator when based on adjusted returns. This novel estimator achieves the maximum possible rate of convergence. However, if the liquidity costs are only partially removed, the residual noise is smaller and closer to an exogenous white noise than the original noise. Therefore, any volatility estimator that is robust to noise relies on weaker noise assumptions if it is based on adjusted returns than if it is based on observed returns.
    Keywords: Econometric and statistical methods; Financial markets; Market structure and pricing
    JEL: G20 C14 C51 C58
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:13-29&r=ecm
  18. By: Simon A. Broda
    Abstract: Countless test statistics can be written as quadratic forms in certain random vectors, or ratios thereof. Consequently, their distribution has received considerable attention in the literature. Except for a few special cases, no closed-form expression for the cdf exists, and one resorts to numerical methods. Traditionally the problem is analyzed under the assumption of joint Gaussianity; the algorithm that is usually employed is that of Imhof (1961). The present manuscript generalizes this result to the case of multivariate generalized hyperbolic (MGHyp) random vectors. The MGHyp is a very flexible distribution which nests, among others, the multivariate t, Laplace, and variance gamma distributions. An expression for the first partial moment is also obtained, which plays a vital role in financial risk management. The proof involves a generalization of the classic inversion formula due to Gil-Pelaez (1951). Two applications are considered: first, the finite-sample distribution of the 2SLS estimator of a structural parameter. Second, the Value at Risk and Expected Shortfall of a quadratic portfolio with heavy-tailed risk factors.
    Date: 2013–05–01
    URL: http://d.repec.org/n?u=RePEc:ame:wpaper:1304&r=ecm
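    For the Gaussian special case, the inversion the paper generalizes fits in a few lines: the cdf of Q = x'Ax with x ~ N(0, I) follows from the characteristic function prod_j (1 - 2it*lam_j)^(-1/2) via the Gil-Pelaez formula F(q) = 1/2 - (1/pi) int_0^inf Im[exp(-itq) cf(t)]/t dt; the MGHyp case in the paper replaces this characteristic function. The integration cutoff is a numerical convenience:

      import numpy as np
      from scipy.integrate import quad

      def cdf_quadratic_form(q, lam):
          """P(x'Ax <= q), x ~ N(0, I); lam = eigenvalues of A."""
          def integrand(t):
              cf = np.prod((1 - 2j * t * lam) ** -0.5)   # characteristic function
              return np.imag(np.exp(-1j * t * q) * cf) / t
          integral, _ = quad(integrand, 0, 200, limit=400)  # truncated upper limit
          return 0.5 - integral / np.pi

      lam = np.ones(3)                        # Q ~ chi-square(3)
      print(cdf_quadratic_form(2.366, lam))   # ~0.50 at the chi2(3) median
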
  19. By: Deniz Erdemlioglu; Sébastien Laurent; Christopher J. Neely
    Abstract: This paper attempts to realistically model the underlying exchange rate data generating process. We ask what types of diffusion or jump features are most appropriate. The most plausible model for 1-minute data features Brownian motion and Poisson jumps but not infinite activity jumps. Modeling periodic volatility is necessary to accurately identify the frequency of jump occurrences and their locations. We propose a two-stage method to capture the effects of these periodic volatility patterns. Simulations show that microstructure noise does not significantly impair the statistical tests for jumps and diffusion behavior.
    Keywords: Foreign exchange; Time-series analysis
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2013-024&r=ecm
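    One standard ingredient behind such model discrimination, sketched below: a Barndorff-Nielsen/Shephard-type ratio test comparing realized variance (jump-sensitive) with bipower variation (jump-robust) on one day of intraday returns. The paper's periodicity adjustment and exact tests are not reproduced; the statistic is asymptotically standard normal under the no-jump null:

      import numpy as np
      from scipy.special import gamma
      from scipy.stats import norm

      def bns_jump_stat(r):
          n = len(r)
          rv = np.sum(r**2)                                       # realized variance
          mu1 = np.sqrt(2 / np.pi)
          bv = mu1**-2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))   # bipower variation
          mu43 = 2**(2/3) * gamma(7/6) / gamma(1/2)
          tq = n * mu43**-3 * np.sum((np.abs(r[2:]) * np.abs(r[1:-1])
                                      * np.abs(r[:-2]))**(4/3))   # tripower quarticity
          theta = np.pi**2 / 4 + np.pi - 5
          return (1 - bv / rv) / np.sqrt(theta * max(1, tq / bv**2) / n)

      rng = np.random.default_rng(6)
      r = 0.01 * rng.standard_normal(390) / np.sqrt(390)   # 1-min returns, no jump
      r_jump = r.copy(); r_jump[200] += 0.01               # add one jump
      for label, ret in [("no jump", r), ("one jump", r_jump)]:
          z = bns_jump_stat(ret)
          print(label, f"z = {z:.2f}, p = {norm.sf(z):.4f}")
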
  20. By: Daniele Bianchi (Bocconi University); Massimo Guidolin (IGIER Bocconi University); Francesco Ravazzolo (Norges Bank (Central Bank of Norway) and BI Norwegian Business School)
    Abstract: This paper proposes a Bayesian estimation framework for a typical multi-factor model with time-varying risk exposures to macroeconomic risk factors and corresponding premia to price U.S. stocks and bonds. The model assumes that risk exposures and idiosyncratic volatility follow a break-point latent process, allowing for changes at any point in time but not restricting them to change at all points. An empirical application to 40 years of U.S. data and 23 portfolios shows that the approach yields sensible results compared to previous two-step methods based on naive recursive estimation schemes, as well as a set of alternative model restrictions. A variance decomposition test shows that although most of the predictable variation comes from the market risk premium, a number of additional macroeconomic risks, including real output and inflation shocks, are significantly priced in the cross-section. A Bayes factor analysis decisively favors the proposed change-point model.
    Keywords: Change-point model, Stochastic volatility, Multi-factor linear models
    JEL: G11 C53
    Date: 2013–08–22
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2013_19&r=ecm
  21. By: Paweł Oświęcimka; Stanisław Drożdż; Marcin Forczek; Stanisław Jadach; Jarosław Kwapień
    Abstract: We propose a modified algorithm - Multifractal Cross-Correlation Analysis (MFCCA) - that is able to consistently identify and quantify multifractal cross-correlations between two time series. Our motivation for introducing this algorithm is that existing methods such as MF-DXA have serious limitations for most of the signals describing complex natural processes. The principal component of the related improvement is proper incorporation of the sign of fluctuations. We present a broad analysis of model fractal stochastic processes as well as real-world signals, and show that MFCCA is a robust tool that allows a reliable quantification of the cross-correlative structure of the analyzed processes. In particular, we analyze a relation between the generalized Hurst exponent and the MFCCA parameter $\lambda_q$. This relation provides information about the character of potential multifractality in cross-correlations of the processes under study and thus enables selective insight into their dynamics. Using an example of financial time series from the stock market, we also show that waiting times and price increments of the companies are multifractally cross-correlated, but only for relatively large fluctuations, whereas the small ones can be considered mutually independent.
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1308.6148&r=ecm
  22. By: Franses, Ph.H.B.F.
    Abstract: Time series with bubble-like patterns display an imbalance between growth and acceleration, in the sense that growth in the upswing is "too fast" and then there is a collapse. In fact, such time series show periods where both the first differences (1-L) and the second differences (1-L)^2 of the data are positive-valued, after which there is a collapse. For a time series without such bubbles, it can be shown that the (1-L^2)-differenced data should be stable. A simple test based on one-step-ahead forecast errors can then be used to monitor in a timely fashion whether a series experiences a bubble and whether a collapse is near. Applications to simulated data, two housing price series and the Nikkei index illustrate the practical relevance of the new diagnostic. Monte Carlo simulations indicate that the empirical power of the test is high.
    Keywords: growth; test; acceleration; speculative bubbles
    JEL: C22
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765039598&r=ecm
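    A rough illustration of the diagnostic idea (not the paper's exact test): flag observations where growth and acceleration are simultaneously positive, and monitor naive one-step-ahead forecast errors of the (1-L^2)-differenced series, which should stay stable in the absence of a bubble. The simulated series and bubble injection are hypothetical:

      import numpy as np

      rng = np.random.default_rng(7)
      T = 200
      y = np.cumsum(0.1 + rng.normal(scale=0.5, size=T))
      y[120:150] += np.exp(np.linspace(0, 3, 30))   # inject a bubble-like run-up

      d1 = np.diff(y)                               # (1-L) y
      d2 = np.diff(y, n=2)                          # (1-L)^2 y
      both_pos = (d1[1:] > 0) & (d2 > 0)            # growth and acceleration > 0
      print("suspect dates:", np.flatnonzero(both_pos) + 2)

      z = y[2:] - y[:-2]                            # (1-L^2) y
      err = z[1:] - z[:-1]                          # naive one-step-ahead errors
      print("largest |error| at t =", int(np.argmax(np.abs(err))) + 3)
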
  23. By: Bryan, Mark L.; Jenkins, Stephen P.
    Abstract: Cross-national differences in outcomes are often analysed using regression analysis of multilevel country datasets, examples of which include the ECHP, ESS, EU-SILC, EVS, ISSP, and SHARE. We review the regression methods applicable to this data structure, pointing out problems with the assessment of country-level factors that appear not to be widely appreciated, and illustrate our arguments using Monte Carlo simulations and analysis of women's employment probabilities and work hours using EU-SILC data. With large sample sizes of individuals within each country but a small number of countries, analysts can reliably estimate individual-level effects within each country, but estimates of parameters summarising country effects are likely to be unreliable. Multilevel (hierarchical) modelling methods are commonly used in this context but they are no panacea.
    Date: 2013–08–19
    URL: http://d.repec.org/n?u=RePEc:ese:iserwp:2013-14&r=ecm
  24. By: Kessels, Roselinde; Jones, Bradley; Goos, Peter
    Abstract: Using maximum likelihood estimation for discrete choice modeling of small datasets causes two problems. The first problem is that the data often exhibit separation, in which case the maximum likelihood estimates do not exist. Also, provided they exist, the maximum likelihood estimates are biased. In this paper, we show how to adapt Firth's bias-adjustment method for use in discrete choice modeling. This approach removes the first-order bias of the estimates, and it also deals with the separation issue. An additional advantage of the bias adjustment is that it is usually accompanied by a reduction in the variance. Using a large-scale simulation study, we identify the situations where Firth's bias-adjustment method is most useful in avoiding the problem of separation as well as removing the bias and reducing the variance. As a special case, we apply the bias-adjustment approach to discrete choice data from individuals, making it possible to construct an empirical distribution of the respondents' preferences without imposing any a priori population distribution. For both research purposes, we base our findings on data from a stated choice study on various forms of employee compensation.
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2013013&r=ecm
  25. By: Patrick Gneuss (Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder)); Wolfgang Schmid (Faculty of Business Administration and Economics, European University Viadrina, Frankfurt (Oder)); Reimund Schwarze
    Abstract: Linear mixed effects models have been widely used in the spatial analysis of environmental processes. However, parameter estimation and spatial prediction involve the inversion and determinant of the n x n spatial covariance matrix of the data process, with n being the number of observations. Nowadays environmental variables are typically obtained through remote sensing and contain observations of the order of tens or hundreds of thousands on a single day, which quickly leads to bottlenecks in computation speed and working memory. Techniques for reducing the dimension of the problem are therefore required. The present work analyzes approaches to approximating the spatial covariance function in a real dataset of remotely sensed carbon dioxide concentrations, obtained from the Atmospheric Infrared Sounder of NASA's 'Aqua' satellite on 1 May 2009. In a cross-validation case study it is shown how fixed rank kriging, stationary covariance tapering and the full-scale approximation are able to notably speed up calculations. However, the loss in predictive performance caused by the approximation differs strongly across methods. The best results were obtained for the full-scale approximation, which was able to overcome the individual weaknesses of fixed rank kriging and covariance tapering.
    Keywords: spatial covariance function, fixed rank kriging, covariance tapering, full-scale approximation, large spatial data sets, mid-tropospheric CO2, remote sensing, efficient approximation
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:euv:dpaper:009&r=ecm
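    Covariance tapering, one of the three approximations compared, is straightforward to sketch: multiply a stationary covariance elementwise by a compactly supported (Wendland-type) taper so that the matrix becomes sparse and cheap to factorize. Range and taper parameters below are illustrative, not those of the CO2 application:

      import numpy as np
      from scipy.spatial.distance import cdist
      from scipy import sparse
      from scipy.sparse.linalg import spsolve

      rng = np.random.default_rng(8)
      n = 2000
      coords = rng.uniform(0, 100, size=(n, 2))
      d = cdist(coords, coords)

      range_, taper_len = 10.0, 15.0
      cov = np.exp(-d / range_)                      # exponential covariance
      taper = np.clip(1 - d / taper_len, 0, None)**4 * (1 + 4 * d / taper_len)
      tapered = sparse.csr_matrix(cov * taper)       # exact zeros beyond taper_len
      print(f"nonzeros: {tapered.nnz / n**2:.1%} of the full matrix")

      y = rng.normal(size=n)                         # e.g. a kriging system C x = y
      x = spsolve(tapered.tocsc(), y)
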
  26. By: Hao Liu (CoFE, University of Konstanz, Germany); Winfried Pohlmeier (CoFE, University of Konstanz, Germany; ZEW, Germany; RCEA, Italy)
    Abstract: This paper analyzes the estimation risk of efficient portfolio selection. We use the concept of certainty equivalent as the basis for a well-defined statistical loss function and a monetary measure to assess estimation risk. For given risk preferences, we provide analytical results for different sources of estimation risk such as sample size, dimension of the portfolio choice problem and correlation structure of the return process. Our results show that theoretically sub-optimal portfolio choice strategies turn out to be superior once estimation risk is taken into account. Since estimation risk crucially depends on risk preferences, the choice of the estimator for a given portfolio strategy becomes endogenous. We show that a shrinkage approach accounting for estimation risk in both the mean and the covariance of the return vector is generally superior to simple theoretically suboptimal strategies. Moreover, focusing on just one source of estimation risk, e.g. risk reduction in covariance estimation, can lead to suboptimal portfolios.
    Keywords: efficient portfolio, estimation risk, certainty equivalent, shrinkage
    JEL: G11 G12 G17
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:47_13&r=ecm
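    The certainty-equivalent yardstick used in the paper can be made concrete in a few lines: evaluate CE(w) = w'mu - (gamma/2) w'Sigma w at the true parameters, both for plug-in mean-variance weights estimated from a short sample and for the naive 1/N rule. All parameter values are illustrative:

      import numpy as np

      rng = np.random.default_rng(9)
      p, T, gamma = 10, 120, 5.0
      mu = rng.uniform(0.02, 0.08, size=p) / 12          # true monthly means
      A = rng.normal(size=(p, p))
      Sigma = (A @ A.T / p + 0.01 * np.eye(p)) / 12      # true covariance

      def ce(w):                                 # certainty equivalent at the truth
          return w @ mu - 0.5 * gamma * w @ Sigma @ w

      R = rng.multivariate_normal(mu, Sigma, size=T)     # estimation sample
      mu_hat, Sigma_hat = R.mean(axis=0), np.cov(R.T)
      w_plugin = np.linalg.solve(gamma * Sigma_hat, mu_hat)  # estimated optimum
      w_true = np.linalg.solve(gamma * Sigma, mu)
      w_equal = np.ones(p) / p

      print(f"CE plug-in {ce(w_plugin):.4f} | CE 1/N {ce(w_equal):.4f} | "
            f"CE truth {ce(w_true):.4f}")
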
  27. By: Lars Winkelmann; Markus Bibinger; Tobias Linzert
    Abstract: This paper proposes a new econometric approach to disentangle two distinct response patterns of the yield curve to monetary policy announcements. Based on cojumps in intraday tick data on a short-term and a long-term interest rate, we develop a day-wise test that detects the occurrence of a significant policy surprise and identifies the market-perceived source of the surprise. The new test is applied to 133 policy announcements of the European Central Bank (ECB) in the period 2001 to 2012. Our main findings indicate a good predictability of ECB policy decisions and remarkably stable perceptions about the ECB's policy preferences.
    Keywords: Central bank communication; yield curve; spectral cojump estimator; high frequency tick-data
    JEL: E58 C14 C58
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013-038&r=ecm

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.