Printed from https://ideas.repec.org/p/nbr/nberwo/24403.html

Statistical Non-Significance in Empirical Economics

Author

  • Alberto Abadie
Abstract
Significance tests are probably the most common form of inference in empirical economics, and significance is often interpreted as providing greater informational content than non-significance. In this article we show, however, that rejection of a point null often carries very little information, while failure to reject may be highly informative. This is particularly true in empirical contexts that are typical and even prevalent in economics, where data sets are large (and becoming larger) and where there are rarely reasons to put substantial prior probability on a point null. Our results challenge the usual practice of according point null rejections a higher level of scientific significance than non-rejections. As a consequence, we advocate visible reporting and discussion of non-significant results in empirical practice.
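The abstract's large-sample point can be made concrete with a short simulation. The sketch below is an illustration of the general idea, not code from the paper; the sample size (one million) and true effect (0.01 standard deviations) are assumptions chosen for exposition. With data this large, a test of the point null "mean = 0" rejects with near certainty even though the effect is negligible, while the 95% confidence interval localizes the effect to within about ±0.002, whether or not it covers zero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Assumed setting for illustration: a tiny true effect, a huge sample.
    n, true_effect = 1_000_000, 0.01
    x = rng.normal(loc=true_effect, scale=1.0, size=n)

    se = x.std(ddof=1) / np.sqrt(n)  # standard error of the sample mean
    t = x.mean() / se                # t-statistic for H0: mean = 0
    p = 2 * stats.norm.sf(abs(t))    # two-sided p-value (normal approximation)
    ci = (x.mean() - 1.96 * se, x.mean() + 1.96 * se)

    # The rejection itself says little about magnitude; the narrow interval
    # is what carries the information, significant or not.
    print(f"t = {t:.2f}, p = {p:.2g}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")

Running this typically gives a t-statistic near 10, a vanishingly small p-value, and an interval of width about 0.004. Conversely, a non-rejection at this sample size would confine the effect to a narrow band around zero, which is exactly the informative outcome the abstract emphasizes.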

Suggested Citation

  • Alberto Abadie, 2018. "Statistical Non-Significance in Empirical Economics," NBER Working Papers 24403, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:24403
    Note: DEV LS PE TWP

    Download full text from publisher

    File URL: http://www.nber.org/papers/w24403.pdf
    Download Restriction: no

    References listed on IDEAS

    1. John P A Ioannidis, 2005. "Why Most Published Research Findings Are False," PLOS Medicine, Public Library of Science, vol. 2(8), pages 1-1, August.
    2. Alan B. Krueger & Jitka Maleckova, 2003. "Education, Poverty and Terrorism: Is There a Causal Connection?," Journal of Economic Perspectives, American Economic Association, vol. 17(4), pages 119-144, Fall.
    3. Isaiah Andrews & Maximilian Kasy, 2019. "Identification of and Correction for Publication Bias," American Economic Review, American Economic Association, vol. 109(8), pages 2766-2794, August.
    4. Ronald L. Wasserstein & Nicole A. Lazar, 2016. "The ASA's Statement on p-Values: Context, Process, and Purpose," The American Statistician, Taylor & Francis Journals, vol. 70(2), pages 129-133, May.
    5. Joshua D. Angrist & Victor Lavy & Jetson Leder-Luis & Adi Shany, 2019. "Maimonides' Rule Redux," American Economic Review: Insights, American Economic Association, vol. 1(3), pages 309-324, December.
    6. Peter E. Kennedy, 2005. "Oh No! I Got the Wrong Sign! What Should I Do?," The Journal of Economic Education, Taylor & Francis Journals, vol. 36(1), pages 77-92, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    2. Furukawa, Chishio, 2019. "Publication Bias under Aggregation Frictions: Theory, Evidence, and a New Correction Method," EconStor Preprints 194798, ZBW - Leibniz Information Centre for Economics.
    3. Jyotirmoy Sarkar, 2018. "Will P-Value Triumph over Abuses and Attacks?," Biostatistics and Biometrics Open Access Journal, Juniper Publishers Inc., vol. 7(4), pages 66-71, July.
    4. Michaelides, Michael, 2021. "Large sample size bias in empirical finance," Finance Research Letters, Elsevier, vol. 41(C).
    5. Alexander Frankel & Maximilian Kasy, 2022. "Which Findings Should Be Published?," American Economic Journal: Microeconomics, American Economic Association, vol. 14(1), pages 1-38, February.
    6. David Spiegelhalter, 2017. "Trust in numbers," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 180(4), pages 948-965, October.
    7. Robert Rieg, 2018. "Tasks, interaction and role perception of management accountants: evidence from Germany," Journal of Management Control: Zeitschrift für Planung und Unternehmenssteuerung, Springer, vol. 29(2), pages 183-220, August.
    8. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2018. "Methods Matter: P-Hacking and Causal Inference in Economics," IZA Discussion Papers 11796, Institute of Labor Economics (IZA).
    9. Andrew Y. Chen & Tom Zimmermann, 2022. "Publication Bias in Asset Pricing Research," Papers 2209.13623, arXiv.org, revised Sep 2023.
    10. Graham Elliott & Nikolay Kudrin & Kaspar Wuthrich, 2022. "The Power of Tests for Detecting p-Hacking," Papers 2205.07950, arXiv.org, revised Apr 2024.
    11. Herman Carstens & Xiaohua Xia & Sarma Yadavalli, 2018. "Bayesian Energy Measurement and Verification Analysis," Energies, MDPI, vol. 11(2), pages 1-20, February.
    12. Stephan B. Bruns & David I. Stern, 2019. "Lag length selection and p-hacking in Granger causality testing: prevalence and performance of meta-regression models," Empirical Economics, Springer, vol. 56(3), pages 797-830, March.
    13. Felix Chopra & Ingar Haaland & Christopher Roth & Andreas Stegmann, 2024. "The Null Result Penalty," The Economic Journal, Royal Economic Society, vol. 134(657), pages 193-219.
    14. Andrew Y. Chen, 2022. "Most claimed statistical findings in cross-sectional return predictability are likely true," Papers 2206.15365, arXiv.org, revised Sep 2024.
    15. Nosek, Brian A. & Ebersole, Charles R. & DeHaven, Alexander Carl & Mellor, David Thomas, 2018. "The Preregistration Revolution," OSF Preprints 2dxu5, Center for Open Science.
    16. Bruns, Stephan B. & Asanov, Igor & Bode, Rasmus & Dunger, Melanie & Funk, Christoph & Hassan, Sherif M. & Hauschildt, Julia & Heinisch, Dominik & Kempa, Karol & König, Johannes & Lips, Johannes & Verb, 2019. "Reporting errors and biases in published empirical findings: Evidence from innovation research," Research Policy, Elsevier, vol. 48(9), pages 1-1.
    17. Mayo, Deborah & Morey, Richard Donald, 2017. "A Poor Prognosis for the Diagnostic Screening Critique of Statistical Tests," OSF Preprints ps38b, Center for Open Science.
    18. Mantas Radzvilas & Francesco De Pretis & William Peden & Daniele Tortoli & Barbara Osimani, 2020. "Double blind vs. open review: an evolutionary game logit-simulating the behavior of authors and reviewers," Papers 2011.07797, arXiv.org.
    19. Abel Brodeur & Nikolai Cook & Anthony Heyes, 2020. "Methods Matter: p-Hacking and Publication Bias in Causal Analysis in Economics," American Economic Review, American Economic Association, vol. 110(11), pages 3634-3660, November.
    20. Nicolas Vallois & Dorian Jullien, 2018. "A history of statistical methods in experimental economics," The European Journal of the History of Economic Thought, Taylor & Francis Journals, vol. 25(6), pages 1455-1492, November.

    More about this item

    JEL classification:

    • C01 - Mathematical and Quantitative Methods - General - Econometrics
    • C12 - Mathematical and Quantitative Methods - Econometric and Statistical Methods and Methodology: General - Hypothesis Testing: General

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nbr:nberwo:24403. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the person in charge (email available below). General contact details of provider: https://edirc.repec.org/data/nberrus.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.