Blending Bayesian and classical tools to define optimal sample-size-dependent significance levels

Mark Andrew Gannon, Carlos Alberto de Bragança Pereira, Adriano Polpo

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)


This article argues that researchers need not abandon the p-value, the best-known significance index, but should stop using significance levels that do not depend on sample size. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman-Pearson Lemma that minimizes a linear combination of alpha, the probability of rejecting a true null hypothesis, and beta, the probability of failing to reject a false null, instead of fixing alpha and minimizing beta. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.
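To illustrate the idea, the following is a minimal sketch of how a sample-size-dependent significance level can arise from minimizing a linear combination a·alpha + b·beta. It uses a simple-vs-simple binomial test, which is not an example from the paper: the hypotheses (p = 0.5 vs p = 0.7), the equal weights, and the function names are illustrative assumptions. By the generalized Neyman-Pearson lemma, the test minimizing a·alpha + b·beta rejects the null at sample points x where b·f1(x) > a·f0(x); the resulting alpha is then a function of n.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def optimal_alpha(n, p0, p1, a=1.0, b=1.0):
    """Significance level of the test minimizing a*alpha + b*beta.

    Illustrative sketch (not the paper's general procedure): the
    minimizing test rejects H0 at points k where b*f1(k) > a*f0(k),
    so alpha is the null probability of that rejection region.
    """
    alpha = 0.0
    for k in range(n + 1):
        f0 = binom_pmf(k, n, p0)
        f1 = binom_pmf(k, n, p1)
        if b * f1 > a * f0:
            alpha += f0  # k falls in the rejection region
    return alpha

# The implied significance level shrinks as the sample size grows,
# unlike a fixed alpha = 0.05 convention.
for n in (10, 50, 100, 500):
    print(n, optimal_alpha(n, 0.5, 0.7))
```

With equal weights the rejection threshold sits where the two likelihoods cross, so both error probabilities decrease together as n increases, rather than alpha being pinned at a conventional value while only beta improves.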

Original language: English
Pages (from-to): 213-222
Number of pages: 10
Journal: American Statistician
Issue number: sup1
Publication status: Published - 29 Mar 2019
Externally published: Yes


