
nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒01‒29
twenty-one papers chosen by
Sune Karlsson
Örebro universitet

  1. Uniform Inference in Panel Autoregression By John Chao; Peter C.B. Phillips
  2. A Unified Framework for Dimension Reduction in Forecasting By Alessandro Barbarino; Efstathia Bura
  3. Non-standard Confidence Sets for Ratios and Tipping Points with Applications to Dynamic Panel Data By Jean-Thomas Bernard; Ba Chu; Lynda Khalaf; Marcel-Cristian Voia
  4. Estimation for dynamic and static panel probit models with large individual effects By Wei Gao; Wicher Bergsma; Qiwei Yao
  5. Sequential Probability Ratio Tests: Conservative and Robust By Kleijnen, J.P.C.; Shi, Wen
  6. Speeding up MCMC by Efficient Data Subsampling By Kohn, Robert; Quiroz, Matias; Tran, Minh-Ngoc; Villani, Mattias
  7. The Fiction of Full BEKK By Chia-Lin Chang; Michael McAleer
  8. The contribution of jumps to forecasting the density of returns By Christophe Chorro; Florian Ielpo; Benoît Sévi
  9. A new approach to volatility modeling: the High-Dimensional Markov model By Arnaud Dufays; Maciej Augustyniak; Luc Bauwens
  10. BAYESIAN ESTIMATION OF BETA-TYPE DISTRIBUTION PARAMETERS BASED ON GROUPED DATA By Kazuhiko Kakamu; Haruhisa Nishino
  11. Nonparametric forecasting with one-sided kernel adopting pseudo one-step ahead data By Jungwoo Kim; Joocheol Kim
  12. Unit Root Tests and Heavy-Tailed Innovations By Georgiev, Iliyan; Rodrigues, Paulo M M; Taylor, A M Robert
  13. Set Identification, Moment Restrictions and Inference By Bontemps, Christian; Magnac, Thierry
  14. Measuring the uncertainty of Principal Components in Dynamic Factor Models By Ruiz, Esther; Vicente, Javier de
  15. Playing by the rules? Agreement between predicted and observed binary choices By Stephanie Thomas
  16. Sparse Change-point HAR Models for Realized Variance By Arnaud Dufays; Jeroen V.K. Rombouts
  17. BIAS correction for dynamic factor models By García-Martos, Carolina; Bastos, Guadalupe; Alonso Fernández, Andrés Modesto
  18. A discrete choice model for large heterogeneous panels with interactive fixed effects with an application to the determinants of corporate bond issuance By Boneva, Lena; Linton, Oliver
  19. Time Series Copulas for Heteroskedastic Data By Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
  20. Dynamic panel data modelling using maximum likelihood: an alternative to Arellano-Bond By Enrique Moral-Benito; Paul Allison; Richard Williams
  21. The Estimation of Network Formation Games with Positive Spillovers By Vincent Boucher

  1. By: John Chao (University of Maryland); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. The main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. It is shown that a properly normalized statistic based on the Anderson-Hsiao IV procedure, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. In the unit root case confidence intervals based on this procedure are unsatisfactorily wide and uninformative. To sharpen the intervals a new procedure is developed using information from unit root pretests to select alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity and to correspondingly tailor intervals near the unit root region. When ρ is close to unity, the width of these intervals shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both tests reject the unit root hypothesis does the construction revert to the M statistic intervals, whose width has the optimal N^{-1/2}T^{-1/2} rate of shrinkage when the underlying process is stable. The asymptotic properties of this pretest-based procedure show that it produces confidence intervals with at least the prescribed coverage probability in large samples. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings.
    Keywords: Confidence interval, Dynamic panel data models, panel IV, pooled OLS, Pretesting, Uniform inference
    JEL: C23 C36
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2071&r=ecm
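    Sketch: a minimal Python illustration (not the authors' code) of the Anderson-Hsiao IV idea underlying the M statistic: first-difference the panel AR(1) to remove the fixed effects and instrument the lagged difference with the second lag of the level. The simulation settings are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      N, T, rho = 200, 10, 0.6                      # panel dimensions and true AR coefficient (illustrative)

      # simulate y_{it} = alpha_i + rho * y_{i,t-1} + eps_{it}
      alpha = rng.normal(size=N)
      y = np.zeros((N, T + 2))
      y[:, 0] = alpha / (1 - rho) + rng.normal(size=N)
      for t in range(1, T + 2):
          y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)

      # Anderson-Hsiao: Delta y_{it} = rho * Delta y_{i,t-1} + Delta eps_{it},
      # with the level y_{i,t-2} as instrument for the endogenous Delta y_{i,t-1}
      dy  = (y[:, 3:] - y[:, 2:-1]).ravel()         # Delta y_{it}
      dy1 = (y[:, 2:-1] - y[:, 1:-2]).ravel()       # Delta y_{i,t-1}
      z   = y[:, 1:-2].ravel()                      # instrument y_{i,t-2}

      rho_iv = (z @ dy) / (z @ dy1)                 # just-identified IV estimate
      print(f"Anderson-Hsiao IV estimate of rho: {rho_iv:.3f}")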
  2. By: Alessandro Barbarino; Efstathia Bura
    Abstract: Factor models are widely used in summarizing large datasets with few underlying latent factors and in building time series forecasting models for economic variables. In these models, the reduction of the predictors and the modeling and forecasting of the response y are carried out in two separate and independent phases. We introduce a potentially more attractive alternative, Sufficient Dimension Reduction (SDR), that summarizes x as it relates to y, so that all the information in the conditional distribution of y|x is preserved. We study the relationship between SDR and popular estimation methods, such as ordinary least squares (OLS), dynamic factor models (DFM), partial least squares (PLS) and RIDGE regression, and establish the connection and fundamental differences between the DFM and SDR frameworks. We show that SDR significantly reduces the dimension of widely used macroeconomic series data with one or two sufficient reductions delivering similar forecasting performance to that of competing methods in macro-forecasting.
    Keywords: Diffusion Index ; Dimension Reduction ; Factor Models ; Forecasting ; Partial Least Squares ; Principal Components
    JEL: C32 C53 C55 E17
    Date: 2017–01–12
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2017-04&r=ecm
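    Sketch: a minimal Python illustration of the contrast the paper draws between unsupervised and supervised reduction: a two-step diffusion-index forecast (principal components of x, then OLS of y on the components) versus PLS, which extracts components from x using y. Uses scikit-learn on simulated data; all settings are illustrative assumptions, not the paper's SDR estimator.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      n, p, k = 300, 50, 2                          # observations, predictors, latent factors (illustrative)
      F = rng.normal(size=(n, k))                   # latent factors
      X = F @ rng.normal(size=(k, p)) + 0.5 * rng.normal(size=(n, p))
      y = F @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=n)

      # two-step diffusion index: PCA on X alone, then OLS of y on the estimated factors
      pcs = PCA(n_components=k).fit_transform(X)
      print("R^2, PCA + OLS:", LinearRegression().fit(pcs, y).score(pcs, y))

      # PLS extracts components from X *using* y, closer in spirit to supervised reduction
      print("R^2, PLS      :", PLSRegression(n_components=k).fit(X, y).score(X, y))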
  3. By: Jean-Thomas Bernard (Department of Economics, University of Ottawa); Ba Chu (Department of Economics, Carleton University); Lynda Khalaf (Department of Economics, Carleton University); Marcel-Cristian Voia (Department of Economics, Carleton University)
    Abstract: We study estimation uncertainty when the object of interest contains one or more ratios of parameters. The ratio of parameters is a discontinuous parameter transformation; it has been shown that traditional confidence intervals often fail to cover the true ratio with very high probability. Constructing confidence sets for ratios using Fieller’s method is a viable solution as the method can avoid the discontinuity problem. This paper proposes an extension of the multivariate Fieller method beyond standard estimators, focusing on asymptotically mixed normal estimators that commonly arise in dynamic panel polynomial regression with persistent covariates. We discuss the cases where the underlying estimators converge to various distributions, depending on the persistence level of the covariates. We show that the asymptotic distribution of the pivotal statistic used for constructing a Fieller confidence set remains a standard Chi-squared distribution regardless of the rates of convergence; the rates are thus ‘self-normalized’ and can be unknown. A simulation study illustrates the finite sample properties of the proposed method in a dynamic polynomial panel. Our method is demonstrated to work well in small samples, even when the persistence coefficient is unity.
    Date: 2017–01–18
    URL: http://d.repec.org/n?u=RePEc:car:carecp:17-05&r=ecm
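    Sketch: a minimal Python illustration of the scalar Fieller construction that the paper generalizes: a confidence set for theta = a/b is obtained by inverting the quadratic inequality (a_hat - theta*b_hat)^2 <= c^2 * Var(a_hat - theta*b_hat). The numerical inputs are illustrative, not taken from the paper.
      import numpy as np
      from scipy.stats import norm

      def fieller_interval(a_hat, b_hat, va, vb, vab, level=0.95):
          """Fieller confidence set for theta = a/b, given estimates and their (co)variances."""
          c2 = norm.ppf(0.5 + level / 2) ** 2
          A = b_hat ** 2 - c2 * vb
          B = -2 * (a_hat * b_hat - c2 * vab)
          C = a_hat ** 2 - c2 * va
          disc = B ** 2 - 4 * A * C
          if A > 0 and disc >= 0:                   # the set is a bounded interval
              r = np.sqrt(disc)
              return ((-B - r) / (2 * A), (-B + r) / (2 * A))
          return None                               # unbounded or disjoint set: the non-standard case

      # illustrative estimates with a weakly identified denominator
      print(fieller_interval(a_hat=2.0, b_hat=1.0, va=0.04, vb=0.09, vab=0.01))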
  4. By: Wei Gao; Wicher Bergsma; Qiwei Yao
    Abstract: For discrete panel data, the dynamic relationship between successive observations is often of interest. We consider a dynamic probit model for short panel data. A problem with estimating the dynamic parameter of interest is that the model contains a large number of nuisance parameters, one for each individual. Heckman proposed to use maximum likelihood estimation of the dynamic parameter, which, however, does not perform well if the individual effects are large. We suggest new estimators for the dynamic parameter, based on the assumption that the individual parameters are random and possibly large. Theoretical properties of our estimators are derived, and a simulation study shows they have some advantages compared with Heckman's estimator and the modified profile likelihood estimator for fixed effects.
    Keywords: Dynamic probit regression; generalized linear models; panel data; probit models; static probit regression
    JEL: C1
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:65165&r=ecm
  5. By: Kleijnen, J.P.C. (Tilburg University, Center For Economic Research); Shi, Wen
    Abstract: In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output (response). One SPRT is published in Wald (1945), and allows general distribution types. For a normal (Gaussian) distribution this SPRT assumes a known variance, but in our modified SPRT we estimate the variance. Another SPRT is published in Hall (1962), and assumes a normal distribution with an unknown variance estimated from a pilot sample. We also investigate a modification, replacing this pilot-sample estimator by a fully sequential estimator. We present a sequence of Monte Carlo experiments for quantifying the performance of these SPRTs. In experiment #1 the simulation outputs are normal. This experiment suggests that Wald (1945)’s SPRT with estimated variance gives error rates significantly above the nominal rates. Hall (1962)’s original and modified SPRTs are conservative; i.e., the actual error rates are much smaller than the prespecified (nominal) rates. The most efficient SPRT is our modified Hall (1962) SPRT. In experiment #2 we examine the robustness of the various SPRTs in case of nonnormal output. If we know that the output has a specific nonnormal distribution such as the exponential distribution, then we may also apply Wald (1945)’s original SPRT. Throughout our investigation we pay special attention to the design and analysis of these experiments.
    Keywords: sequential test; Wald; Hall; robustness; lognormal; gamma distribution; Monte Carlo
    JEL: C00 C10 C90 C15 C44
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:5f24e30d-7931-4be4-96f0-68f8898e6667&r=ecm
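    Sketch: a minimal Python illustration of Wald's (1945) SPRT for two hypothesized values of a normal mean with known variance, using the usual boundary approximations A = (1-beta)/alpha and B = beta/(1-alpha); the estimated-variance and Hall (1962) variants studied in the paper are not reproduced.
      import numpy as np

      def wald_sprt(stream, mu0, mu1, sigma2, alpha=0.05, beta=0.05):
          """Accumulate the log-likelihood ratio of N(mu1, sigma2) vs N(mu0, sigma2)
          observation by observation and stop at Wald's approximate boundaries."""
          log_A, log_B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
          llr, n = 0.0, 0
          for n, x in enumerate(stream, start=1):
              llr += ((mu1 - mu0) * x + (mu0 ** 2 - mu1 ** 2) / 2) / sigma2
              if llr >= log_A:
                  return "accept H1", n
              if llr <= log_B:
                  return "accept H0", n
          return "no decision", n

      rng = np.random.default_rng(2)
      data = rng.normal(loc=0.5, scale=1.0, size=10_000)   # simulated output with true mean 0.5
      print(wald_sprt(data, mu0=0.0, mu1=0.5, sigma2=1.0))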
  6. By: Kohn, Robert; Quiroz, Matias; Tran, Minh-Ngoc; Villani, Mattias
    Abstract: We propose Subsampling MCMC, a Markov Chain Monte Carlo (MCMC) framework where the likelihood function for n observations is estimated from a random subset of m observations. We introduce a general and highly efficient unbiased estimator of the log-likelihood based on control variates obtained from clustering the data. The cost of computing the log-likelihood estimator is much smaller than that of the full log-likelihood used by standard MCMC. The likelihood estimate is bias-corrected and used in two correlated pseudo-marginal algorithms to sample from a perturbed posterior, for which we derive the asymptotic error with respect to n and m, respectively. A practical estimator of the error is proposed and we show that the error is negligible even for a very small m in our applications. We demonstrate that Subsampling MCMC is substantially more efficient than standard MCMC in terms of sampling efficiency for a given computational budget, and that it outperforms other subsampling methods for MCMC proposed in the literature.
    Keywords: Survey sampling; Big Data; Block pseudo-marginal; Estimated likelihood; Correlated pseudo-marginal; Bayesian inference
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/16205&r=ecm
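    Sketch: a minimal Python illustration of the difference-estimator idea behind the unbiased log-likelihood estimate: cheap per-observation control variates (here a Taylor expansion around a reference parameter; the paper obtains them by clustering the data) are summed over all n observations, and only the residuals ell_i - q_i are estimated from a subsample of size m. The toy logistic model and all names are assumptions.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      x = rng.normal(size=n)
      y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))      # toy logistic data (illustrative)

      def loglik_i(theta, xi, yi):
          return yi * xi * theta - np.log1p(np.exp(xi * theta))

      # control variates: second-order Taylor expansion of ell_i around a reference value
      theta_ref = 0.75
      p_ref = 1 / (1 + np.exp(-x * theta_ref))
      l0 = loglik_i(theta_ref, x, y)
      g0 = (y - p_ref) * x                                  # per-observation score at theta_ref
      h0 = -(x ** 2) * p_ref * (1 - p_ref)                  # per-observation Hessian at theta_ref

      def q_i(theta, idx):
          d = theta - theta_ref
          return l0[idx] + g0[idx] * d + 0.5 * h0[idx] * d ** 2

      def subsampled_loglik(theta, m=1_000):
          """Unbiased estimate of sum_i ell_i(theta): full sum of the cheap control variates
          plus an expanded subsample estimate of the residual sum."""
          idx = rng.choice(n, size=m, replace=True)
          resid = loglik_i(theta, x[idx], y[idx]) - q_i(theta, idx)
          return q_i(theta, np.arange(n)).sum() + n * resid.mean()

      theta = 0.85
      print("subsampled:", subsampled_loglik(theta))
      print("exact     :", loglik_i(theta, x, y).sum())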
  7. By: Chia-Lin Chang (Department of Applied Economics and Department of Finance, National Chung Hsing University, Taiwan); Michael McAleer
    Abstract: The purpose of the paper is threefold: to show that univariate GARCH is not a special case of multivariate GARCH, specifically the Full BEKK model, except under parametric restrictions on the off-diagonal elements of the random coefficient autoregressive coefficient matrix; to provide the regularity conditions that arise from the underlying random coefficient autoregressive process; and to establish that the (quasi-) maximum likelihood estimates have valid asymptotic properties under the appropriate parametric restrictions. The paper provides a discussion of the stochastic processes, regularity conditions, and asymptotic properties of univariate and multivariate GARCH models. It is shown that the Full BEKK model, which in practice is estimated almost exclusively, has no underlying stochastic process, regularity conditions, or asymptotic properties.
    Keywords: Random coefficient stochastic process, Off-diagonal parametric restrictions, Diagonal and Full BEKK, Regularity conditions, Asymptotic properties, Conditional volatility, Univariate and multivariate models.
    JEL: C22 C32 C52 C58
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1706&r=ecm
  8. By: Christophe Chorro (Centre d'Economie de la Sorbonne); Florian Ielpo (Unigestion SA, Centre d'Economie de la Sorbonne et IPAG Business School); Benoît Sévi (LEMNA)
    Abstract: The extraction of the jump component in the dynamics of asset prices has witnessed a considerably growing body of literature. Of particular interest is the decomposition of returns' quadratic variation between their continuous and jump components. Recent contributions highlight the importance of this component in forecasting volatility at different horizons. In this article, we extend a methodology developed in Maheu and McCurdy (2011) to exploit the information content of intraday data in forecasting the density of returns at horizons up to sixty days. We follow Boudt et al. (2011) to detect intraday returns that should be considered as jumps. The methodology is robust to intra-week periodicity and further delivers estimates of signed jumps, in contrast to the rest of the literature where only the squared jump component can be estimated. Then, we estimate a bivariate model of returns and volatilities where the jump component is independently modeled using a jump distribution that fits the stylized facts of the estimated jumps. Our empirical results for S&P 500 futures, U.S. 10-year Treasury futures, the USD/CAD exchange rate and WTI crude oil futures highlight the importance of considering the continuous/jump decomposition for density forecasting, while this is not the case for volatility point forecasts. In particular, we show that the model considering jumps apart from the continuous component consistently delivers better density forecasts for forecasting horizons ranging from 1 to 30 days.
    Keywords: density forecasting; jumps; realized volatility; bipower variation; median realized volatility; leverage effect
    JEL: C15 C32 C53 G1
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:17006&r=ecm
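    Sketch: a minimal Python illustration of the standard realized-variance/bipower-variation decomposition for one day of intraday returns, in which RV - BV estimates the squared-jump contribution; the robust Boudt et al. (2011) detection used in the paper, which also recovers signed jumps, is not reproduced.
      import numpy as np

      rng = np.random.default_rng(4)
      M = 78                                                 # e.g. 5-minute returns in a trading day
      r = rng.normal(scale=np.sqrt(0.04 / M), size=M)        # continuous (diffusive) part
      r[40] += 0.03                                          # inject one intraday jump

      rv = np.sum(r ** 2)                                            # realized variance
      bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))      # bipower variation (jump-robust)
      jump_component = max(rv - bv, 0.0)                             # squared-jump estimate

      print(f"RV = {rv:.5f}, BV = {bv:.5f}, jump contribution = {jump_component:.5f}")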
  9. By: Arnaud Dufays; Maciej Augustyniak; Luc Bauwens
    Abstract: A new model - the high-dimensional Markov (HDM) model - is proposed for financial returns and their latent variances. It is also applicable to model directly realized variances. Volatility is modeled as a product of three components: a Markov chain driving volatility persistence, an independent discrete process capable of generating jumps in the volatility, and a predictable (data-driven) process capturing the leverage effect. The Markov chain and jump components allow volatility to switch abruptly between thousands of states. The transition probability matrix of the Markov chain is structured in such a way that the multiplicity of the second largest eigenvalue can be greater than one. This distinctive feature generates a high degree of volatility persistence. The statistical properties of the HDM model are derived and an economic interpretation is attached to each component. In-sample results on six financial time series highlight that the HDM model compares favorably to the main existing volatility processes. A forecasting experiment shows that the HDM model significantly outperforms its competitors when predicting volatility over time horizons longer than five days.
    Keywords: Volatility, Markov-switching, Persistence, Leverage effect.
    JEL: C22 C51 C58
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1609&r=ecm
  10. By: Kazuhiko Kakamu (Graduate School of Business Administration, Kobe University); Haruhisa Nishino (Faculty of Law, Politics and Economics, Chiba University)
    Abstract: This study considers the estimation method of generalized beta (GB) distribution parameters based on grouped data from a Bayesian point of view. Because the GB distribution, which was proposed by McDonald and Xu (1995), includes several kinds of familiar distributions as special or limiting cases, it performs at least as well as those special or limiting distributions. Therefore, it is reasonable to estimate the parameters of the GB distribution. However, when the number of groups is small or when the number of parameters increases, it may become difficult to estimate the distribution parameters for grouped data using the existing estimation methods. This study uses a Tailored randomized block Metropolis–Hastings (TaRBMH) algorithm proposed by Chib and Ramamurthy (2010) to estimate the GB distribution parameters, and this method is applied to one simulated and two real datasets. Moreover, the Gini coefficients from the estimated parameters for the GB distribution are examined.
    Keywords: Generalized beta (GB) distribution; Gini coefficient; grouped data; simulated annealing; Tailored randomized block Metropolis–Hastings (TaRBMH) algorithm.
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:kbb:dpaper:2016-08&r=ecm
  11. By: Jungwoo Kim (Yonsei University); Joocheol Kim (Yonsei University)
    Abstract: A new nonparametric forecasting method using a one-sided kernel is proposed via adopting pseudo one-step-ahead data. Adopting pseudo one-step-ahead data is inspired by the difference between training error and test error, which motivates us to reduce the test error minimization problem to a training error minimization problem. The theoretical basis and the numerical justification of the new approach are presented.
    Keywords: Nonparametric methods, Time series, One-sided kernel, Local regression, Exponential smoothing
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2017rwp-102&r=ecm
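    Sketch: a minimal Python illustration of forecasting with a one-sided (past-only) kernel: the one-step-ahead prediction is a weighted average of past observations, with weights that decay with distance from the forecast origin. The Gaussian kernel, bandwidth and data are illustrative assumptions; the paper's pseudo one-step-ahead device is not reproduced.
      import numpy as np

      def one_sided_kernel_forecast(y, bandwidth=5.0):
          """One-step-ahead forecast as a one-sided Nadaraya-Watson average of past observations."""
          t = np.arange(len(y))
          dist = (len(y) - t) / bandwidth          # distance of each past point from the forecast origin
          w = np.exp(-0.5 * dist ** 2)             # one-sided Gaussian kernel weights
          return np.sum(w * y) / np.sum(w)

      rng = np.random.default_rng(5)
      y = 0.1 * np.cumsum(rng.normal(size=200)) + np.sin(np.linspace(0, 6, 200))
      print("forecast of the next observation:", one_sided_kernel_forecast(y))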
  12. By: Georgiev, Iliyan; Rodrigues, Paulo M M; Taylor, A M Robert
    Abstract: We evaluate the impact of heavy-tailed innovations on some popular unit root tests. In the context of a near-integrated series driven by linear-process shocks, we demonstrate that their limiting distributions are altered under infinite variance vis-à-vis finite variance. Reassuringly, however, simulation results suggest that the impact of heavy-tailed innovations on these tests is relatively small. We use the framework of Amsler and Schmidt (2012) whereby the innovations have local-to-finite variances, being generated as a linear combination of draws from a thin-tailed distribution (in the domain of attraction of the Gaussian distribution) and a heavy-tailed distribution (in the normal domain of attraction of a stable law). We also explore the properties of ADF tests which employ Eicker-White standard errors, demonstrating that these can yield significant power improvements over conventional tests.
    Keywords: Infinite variance, α-stable distribution, Eicker-White standard errors, asymptotic local power functions, weak dependence
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:18832&r=ecm
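    Sketch: a minimal Python illustration (with statsmodels) of pairing the ADF regression with Eicker-White standard errors: run the usual Dickey-Fuller regression and request a heteroskedasticity-robust covariance for the t-statistic on the lagged level. The heavy-tailed innovations, lag choice and sample size are illustrative; the t-statistics must still be compared with Dickey-Fuller, not normal, critical values.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      T = 500
      y = np.cumsum(rng.standard_t(df=3, size=T))            # unit-root process with heavy-tailed shocks

      dy = np.diff(y)
      X = sm.add_constant(np.column_stack([y[1:-1], dy[:-1]]))   # lagged level and one lagged difference
      res_ols = sm.OLS(dy[1:], X).fit()                          # conventional ADF regression
      res_hc  = sm.OLS(dy[1:], X).fit(cov_type="HC0")            # same regression, Eicker-White SEs

      print("ADF t-stat, conventional SE :", res_ols.tvalues[1])
      print("ADF t-stat, Eicker-White SE :", res_hc.tvalues[1])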
  13. By: Bontemps, Christian; Magnac, Thierry
    Abstract: For the last ten years, the topic of set identification has been much studied in the econometric literature. Classical inference methods have been generalized to the case in which moment inequalities and equalities define a set instead of a point. We review several instances of partial identification by focusing on examples in which the underlying economic restrictions are expressed as linear moments. This setting illustrates the fact that convex analysis helps not only in characterizing the identified set but also for inference. In this perspective, we review inference methods using convex analysis or inversion of tests and detail how geometric characterizations can be useful.
    Keywords: set identification, moment inequality, convex set, support function.
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:31337&r=ecm
  14. By: Ruiz, Esther; Vicente, Javier de
    Abstract: In the context of Dynamic Factor Models, factors are unobserved latent variables of interest. One of the most popular procedures for the factor extraction is Principal Components (PC). Measuring the uncertainty associated to factor estimates should be part of interpreting these estimates. Several procedures have been proposed in the context of PC factor extraction to estimate this uncertainty. In this paper, we show that these methods are not adequate when implemented to measure the uncertainty associated to the factor estimation. We propose an alternative procedure and analyze its finite sample properties. The results are illustrated in the context of extracting the common factors of a large system of macroeconomic variables.
    Keywords: Bootstrap; Extraction uncertainty; Principal Components; Dynamic Factor Models
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:23974&r=ecm
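    Sketch: a minimal Python illustration of one naive way to attach bootstrap bands to a principal-components factor estimate by resampling the cross-section and re-extracting the factor; the paper argues that procedures of this kind can be inadequate and proposes an alternative, which is not reproduced here. Dimensions and data are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      T, Nvars, B = 200, 30, 500                             # sample size, variables, bootstrap draws
      f = 0.1 * np.cumsum(rng.normal(size=T))                # one latent factor
      X = np.outer(f, rng.uniform(0.5, 1.5, Nvars)) + rng.normal(size=(T, Nvars))

      def first_pc(Z):
          Zc = Z - Z.mean(axis=0)
          _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
          pc = Zc @ Vt[0]
          return pc / pc.std()

      f_hat = first_pc(X)
      draws = np.empty((B, T))
      for b in range(B):
          cols = rng.choice(Nvars, size=Nvars, replace=True)            # resample the cross-section
          fb = first_pc(X[:, cols])
          draws[b] = fb if np.corrcoef(fb, f_hat)[0, 1] > 0 else -fb    # fix the sign indeterminacy

      lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)
      print("average width of the 95% band for the estimated factor:", (upper - lower).mean())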
  15. By: Stephanie Thomas
    Abstract: Empirical economics frequently involves testing whether the predictions of a theoretical model are realized under controlled conditions. This paper proposes a new method for assessing whether binary (‘Yes’/‘No’) observations ranging over a continuous covariate exhibit a discrete change which is consistent with an underlying theoretical model. An application using observations from a controlled laboratory environment illustrates the method; however, the methodology can be used for testing for a discrete change in any binary outcome variable which occurs over a continuous covariate, such as medical practice guidelines, firm entry and exit decisions, labour market decisions and many others. The observations are optimally smoothed using a nonparametric approach which is demonstrated to be superior, judged by four common criteria for such settings. Next, using the smoothed observations, two novel methods for assessment of a step pattern are proposed. Finally, nonparametric bootstrapped confidence intervals are used to evaluate the match of the pattern of the observed responses to that predicted by the theoretical model. The key methodological contributions are the two innovative methods proposed for assessing the step pattern. The promise of this approach is illustrated in an application to a controlled experimental lab data set, while the methods are easily extendable to many other settings. Further, the results generated can be easily communicated to diverse audiences.
    Keywords: Evaluation of theoretical predictions, binary outcome data, applied nonparametric analysis, data from experiments
    JEL: C18 C14 C4 C9
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2016-12&r=ecm
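    Sketch: a minimal Python illustration of the first ingredient of the approach - nonparametric smoothing of binary responses over a continuous covariate, with pointwise nonparametric bootstrap bands; the paper's two step-pattern assessment methods are not reproduced. The step location, kernel and bandwidth are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(11)
      n = 300
      x = rng.uniform(0, 1, n)
      y = rng.binomial(1, np.where(x < 0.5, 0.2, 0.8))       # a discrete change in P(Yes) at x = 0.5

      def nw_smooth(grid, x, y, h=0.08):
          """Nadaraya-Watson estimate of P(Y=1 | X) on a grid of covariate values."""
          w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
          return (w * y).sum(axis=1) / w.sum(axis=1)

      grid = np.linspace(0.05, 0.95, 50)
      fit = nw_smooth(grid, x, y)

      boot = np.empty((400, grid.size))
      for b in range(boot.shape[0]):                         # nonparametric (pairs) bootstrap
          idx = rng.integers(0, n, n)
          boot[b] = nw_smooth(grid, x[idx], y[idx])
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

      print("smoothed P(Yes) left / right of 0.5:",
            round(fit[grid < 0.5].mean(), 2), round(fit[grid > 0.5].mean(), 2))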
  16. By: Arnaud Dufays; Jeroen V.K. Rombouts
    Abstract: Change-point time series specifications constitute flexible models that capture unknown structural changes by allowing for switches in the model parameters. Nevertheless most models suffer from an over-parametrization issue since typically only one latent state variable drives the switches in all parameters. This implies that all parameters have to change when a break happens. To gauge whether and where there are structural breaks in realized variance, we introduce the sparse change-point HAR model. The approach controls for model parsimony by limiting the number of parameters which evolve from one regime to another. Sparsity is achieved thanks to employing a nonstandard shrinkage prior distribution. We derive a Gibbs sampler for inferring the parameters of this process. Simulation studies illustrate the excellent performance of the sampler. Relying on this new framework, we study the stability of the HAR model using realized variance series of several major international indices between January 2000 and August 2015.
    Keywords: Realized variance, Bayesian inference, Time series, Shrinkage prior, Change-point model, Online forecasting
    JEL: C11 C15 C22 C51
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1607&r=ecm
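    Sketch: a minimal Python illustration of the baseline HAR regression for realized variance, in which tomorrow's RV is regressed on its daily, weekly (5-day) and monthly (22-day) averages; the change-point and shrinkage-prior machinery of the paper is not reproduced, and the data are simulated.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      T = 1_000
      rv = 1e-4 * np.exp(0.02 * np.cumsum(rng.normal(size=T)))   # toy realized-variance series

      def rolling_mean(x, w):
          return np.convolve(x, np.ones(w) / w, mode="valid")

      # HAR regressors dated t-1: RV over the past day, week and month
      rv_d = rv[21:-1]
      rv_w = rolling_mean(rv, 5)[17:-1]
      rv_m = rolling_mean(rv, 22)[:-1]
      y = rv[22:]

      X = sm.add_constant(np.column_stack([rv_d, rv_w, rv_m]))
      print(sm.OLS(y, X).fit().params)         # intercept and daily/weekly/monthly HAR coefficients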
  17. By: García-Martos, Carolina; Bastos, Guadalupe; Alonso Fernández, Andrés Modesto
    Abstract: In this paper we work with multivariate time series that follow a Dynamic Factor Model. In particular, we consider the setting where factors are dominated by highly persistent AutoRegressive (AR) processes, and samples that are rather small. Therefore, the factors' AR models are estimated using small sample bias correction techniques. A Monte Carlo study reveals that bias-correcting the AR coefficients of the factors yields better results in terms of prediction interval coverage. As expected, the simulation reveals that bias-correction is more successful for smaller samples. Results are gathered both when the AR order and the number of factors are known and when they are unknown. We also study the advantages of this technique for a set of Industrial Production Indexes of several European countries.
    Keywords: Dynamic Factor Model; persistent process; auto-regressive models; small sample bias correction; Dimensionality reduction
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:24029&r=ecm
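    Sketch: a minimal Python illustration of the kind of small-sample bias correction applied to the factors' AR coefficients, here the classic first-order AR(1) adjustment rho_c = rho_hat + (1 + 3*rho_hat)/T for a regression with an intercept; this is an illustrative stand-in, not necessarily the correction technique used in the paper.
      import numpy as np

      rng = np.random.default_rng(9)
      T, rho = 60, 0.9                                       # short sample, persistent factor (illustrative)

      def ar1_ols(x):
          """OLS estimate of the AR(1) coefficient with an intercept."""
          Y, X = x[1:], np.column_stack([np.ones(len(x) - 1), x[:-1]])
          return np.linalg.lstsq(X, Y, rcond=None)[0][1]

      raw, corrected = [], []
      for _ in range(2_000):
          x = np.zeros(T)
          for t in range(1, T):
              x[t] = rho * x[t - 1] + rng.normal()
          r = ar1_ols(x)
          raw.append(r)
          corrected.append(r + (1 + 3 * r) / T)              # first-order bias correction

      print("mean OLS estimate      :", round(np.mean(raw), 3))
      print("mean corrected estimate:", round(np.mean(corrected), 3))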
  18. By: Boneva, Lena (Bank of England); Linton, Oliver (University of Cambridge)
    Abstract: What is the effect of funding costs on the conditional probability of issuing a corporate bond? We study this question in a novel dataset covering 5,610 issuances by US firms over the period from 1990 to 2014. Identification of this effect is complicated because of unobserved, common shocks such as the global financial crisis. To account for these shocks, we extend the common correlated effects estimator to settings where outcomes are discrete. Both the asymptotic properties and the sample behaviour of this estimator are documented. We find that for non-financial firms, yields are negatively related to bond issuance but that effect is larger in the pre-crisis period.
    Keywords: Heterogeneous panel data; discrete choice models; capital structure
    JEL: C23 C25 G32
    Date: 2017–01–20
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0640&r=ecm
  19. By: Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
    Abstract: We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We develop our copula for first order Markov series, and extend it to higher orders and multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value at risk forecasts.
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1701.07152&r=ecm
  20. By: Enrique Moral-Benito (Banco de España); Paul Allison (University of Pennsylvania); Richard Williams (University of Notre Dame)
    Abstract: The Arellano and Bond (1991) estimator is widely used among applied researchers when estimating dynamic panels with fixed effects and predetermined regressors. This estimator might behave poorly in finite samples when the cross-section dimension of the data is small (i.e. small N), especially if the variables under analysis are persistent over time. This paper discusses a maximum likelihood estimator that is asymptotically equivalent to Arellano and Bond (1991) but presents better finite sample behaviour. Moreover, the estimator is easy to implement in Stata using the xtdpdml command as described in the companion paper Williams et al. (2016), which also discusses further advantages of the proposed estimator for practitioners.
    Keywords: dynamic panel data, maximum likelihood estimation
    JEL: C23
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1703&r=ecm
  21. By: Vincent Boucher
    Abstract: I present a strategic model of network formation with positive network externalities in which individuals have preferences for being part of a clique. I build on the theory of supermodular games (Topkis, 1979) and focus on the greatest Nash equilibrium of the game. Although the structure of the equilibrium network cannot be expressed analytically, I show that it can easily be simulated. I propose an approximate Bayesian computation (ABC) framework to make inferences about individuals' preferences, and provide an illustration using data on high school friendships.
    Keywords: Network Formation, Supermodular Games, Approximate Bayesian Computation
    JEL: D85 C11 C15 C72
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1604&r=ecm
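    Sketch: a minimal Python illustration of plain rejection ABC, the generic version of the inference framework the paper builds on: draw the preference parameter from the prior, simulate the model, and keep draws whose simulated summary statistic is close to the observed one. The simulator below is a stand-in random-graph model, not the greatest Nash equilibrium of the paper's supermodular game; all settings are assumptions.
      import numpy as np

      rng = np.random.default_rng(10)

      def simulate_stat(theta, n_agents=50):
          """Stand-in simulator: average link density of a random graph whose
          link probability increases with the preference parameter theta."""
          p = 1 / (1 + np.exp(-theta))
          return (rng.random((n_agents, n_agents)) < p).mean()

      obs_stat = simulate_stat(0.4)                          # "observed" network summary (illustrative)

      # rejection ABC: keep prior draws whose simulated statistic lies within a tolerance
      prior_draws = rng.normal(0.0, 1.0, size=20_000)
      kept = [th for th in prior_draws if abs(simulate_stat(th) - obs_stat) < 0.02]
      print(f"ABC posterior mean ~ {np.mean(kept):.2f} from {len(kept)} accepted draws")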

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.