Category: Multiple comparisons

Šidák correction for t-test
One of the applications of Student's t-test is to test the location of one sequence of independent and identically distributed random variables. If we want to test the locations of multiple sequences of such variables simultaneously, the Šidák correction can be applied to calibrate the level of each individual t-test so that the family-wise error rate is controlled.
False coverage rate
In statistics, a false coverage rate (FCR) is the average rate of false coverage, i.e. not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a (1 − α)×100% level for all of the parameters considered in the problem.
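As a rough sketch of the definition (the symbols R and V are assumed here, not taken from the entry above): if R is the number of confidence intervals selected and V is the number of selected intervals that fail to cover their true parameter, then

    \mathrm{FCR} = \mathbb{E}\!\left[\frac{V}{\max(R,\,1)}\right].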
False positive rate
In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test.
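In the usual confusion-matrix notation (FP = false positives, TN = true negatives; symbols assumed here), the false positive rate for a single test is

    \mathrm{FPR} = \frac{FP}{FP + TN}.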
Family-wise error rate
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries (type I errors) when performing multiple hypothesis tests.
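Writing V for the number of type I errors among the tests (notation assumed here), the definition can be stated as

    \mathrm{FWER} = \Pr(V \ge 1) = 1 - \Pr(V = 0).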
Contrast (statistics)
In statistics, particularly in analysis of variance and linear regression, a contrast is a linear combination of variables (parameters or statistics) whose coefficients add up to zero, allowing comparison of different treatments.
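An illustrative sketch (the group means are assumed here): with three treatment means μ₁, μ₂, μ₃, the combination below is a contrast because its coefficients sum to zero; it compares the first treatment against the average of the other two.

    \psi = \mu_1 - \tfrac{1}{2}\mu_2 - \tfrac{1}{2}\mu_3, \qquad 1 - \tfrac{1}{2} - \tfrac{1}{2} = 0.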
Testing hypotheses suggested by the data
In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning (double dipping) would be involved: the same data that suggested the hypothesis are then used to confirm it.
Extensions of Fisher's method
In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid.
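For reference, Fisher's original combined statistic for k independent p-values (notation assumed here) is

    X^2 = -2 \sum_{i=1}^{k} \ln p_i \;\sim\; \chi^2_{2k} \quad \text{under } H_0 \text{ and independence};

the extensions, such as Brown's method, adjust this reference distribution when the individual tests are dependent.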
Tukey's range test
Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSD (honestly significant difference) test, is a single-step multiple comparison procedure and statistical test. It can be used to find means that are significantly different from each other.
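A minimal sketch of how the HSD threshold can be computed, assuming equal group sizes and SciPy ≥ 1.7 (which provides scipy.stats.studentized_range); the function name tukey_hsd_threshold is illustrative, not a library API.

    import numpy as np
    from scipy.stats import studentized_range

    def tukey_hsd_threshold(groups, alpha=0.05):
        # groups: list of equal-length 1-D arrays, one per treatment
        k = len(groups)                                   # number of groups
        n = len(groups[0])                                # observations per group (assumed equal)
        df = k * (n - 1)                                  # within-group degrees of freedom
        ms_within = np.mean([np.var(g, ddof=1) for g in groups])
        q_crit = studentized_range.ppf(1 - alpha, k, df)  # critical studentized range
        return q_crit * np.sqrt(ms_within / n)            # pairs of means differing by more are declared different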
Newman–Keuls method
The Newman–Keuls or Student–Newman–Keuls (SNK) method is a stepwise multiple comparisons procedure used to identify sample means that are significantly different from each other. It was named after Student (1927), D. Newman, and M. Keuls.
Dunnett's test
In statistics, Dunnett's test is a multiple comparison procedure developed by Canadian statistician Charles Dunnett to compare each of a number of treatments with a single control. Multiple comparisons to a control are also referred to as many-to-one comparisons.
Working–Hotelling procedure
In statistics, particularly regression analysis, the Working–Hotelling procedure, named after Holbrook Working and Harold Hotelling, is a method of simultaneous estimation in linear regression models.
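A hedged sketch of the band for simple linear regression (notation assumed here): the simultaneous (1 − α) confidence band for the mean response at every x takes the form

    \hat{y}(x) \;\pm\; \sqrt{2\,F_{\alpha;\,2,\,n-2}}\;\cdot\;\mathrm{SE}\!\left(\hat{y}(x)\right).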
False discovery rate
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null).
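A minimal sketch of the Benjamini–Hochberg step-up procedure, the classic FDR-controlling method (the function name bh_reject is illustrative; independence or positive dependence of the p-values is assumed):

    import numpy as np

    def bh_reject(pvals, q=0.05):
        # Benjamini-Hochberg: find the largest k with p_(k) <= k*q/m and reject the k smallest p-values
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)
        thresholds = q * np.arange(1, m + 1) / m
        below = p[order] <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # index of the largest passing p-value
            reject[order[:k + 1]] = True
        return reject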
Scheffé's method
In statistics, Scheffé's method, named after the American statistician Henry Scheffé, is a method for adjusting significance levels in a linear regression analysis to account for multiple comparisons.
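As a hedged sketch of the key quantity (notation assumed here): when examining all possible contrasts among k group means with N total observations, a contrast estimate \hat\psi is declared significant when

    \frac{|\hat\psi|}{\mathrm{SE}(\hat\psi)} \;>\; \sqrt{(k-1)\,F_{\alpha;\,k-1,\,N-k}}.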
Šidák correction
In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the family-wise error rate. When all null hypotheses are true, it provides family-wise error control that is exact for tests that are stochastically independent and conservative for tests that are positively dependent.
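A sketch of the adjustment (m tests and target family-wise level α; notation assumed here): each individual hypothesis is tested at level

    \alpha_{\mathrm{SID}} = 1 - (1 - \alpha)^{1/m}.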
Bonferroni correction
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. It is the simplest method for doing so; however, it is conservative and can substantially reduce statistical power when many hypotheses are tested.
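A sketch of the rule (notation assumed here): with m hypotheses and desired family-wise level α, each test is carried out at level

    \alpha_{\mathrm{BON}} = \frac{\alpha}{m}, \qquad \text{i.e. reject } H_i \text{ when } p_i \le \alpha/m.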
Q-value (statistics)
In statistical hypothesis testing, specifically multiple hypothesis testing, the q-value provides a means to control the positive false discovery rate (pFDR). Just as the p-value gives the expected false positive rate obtained by rejecting the null hypothesis for any result with an equal or smaller p-value, the q-value gives the expected pFDR obtained by rejecting the null hypothesis for any result with an equal or smaller q-value.
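A hedged statement of the definition, following Storey's formulation (notation assumed here): the q-value attached to an observed p-value p is the smallest pFDR at which a result that extreme would be called significant,

    q(p) = \inf_{t \ge p} \mathrm{pFDR}(t), \qquad \text{with rejection region } \{P \le t\}.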
Post hoc analysis
In a scientific study, post hoc analysis (from Latin post hoc, "after this") consists of statistical analyses that were specified after the data were seen. This typically creates a multiple testing problem because each potential analysis is effectively a statistical test.
Multiple comparisons problem
In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values.
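A worked illustration of why this matters (the numbers here are chosen purely for illustration): with m = 100 independent true null hypotheses each tested at α = 0.05, the chance of at least one false positive is

    1 - (1 - 0.05)^{100} \approx 0.994.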
Duncan's new multiple range test
In statistics, Duncan's new multiple range test (MRT) is a multiple comparison procedure developed by David B. Duncan in 1955. Duncan's MRT belongs to the general class of multiple comparison procedures that use the studentized range statistic to compare sets of means.
Studentized range
In statistics, the studentized range, denoted q, is the difference between the largest and smallest values in a sample, normalized by the sample standard deviation. It is named after William Sealy Gosset, who published under the pseudonym "Student".
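In symbols (notation assumed here), for a sample y₁, …, yₙ with sample standard deviation s,

    q = \frac{\max_i y_i - \min_i y_i}{s}.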
Closed testing procedure
In statistics, the closed testing procedure is a general method for performing more than one hypothesis test simultaneously.
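A hedged statement of the closure principle (notation assumed here): an elementary hypothesis H_i is rejected at family-wise level α only if every intersection hypothesis containing it is rejected by its own level-α test,

    \text{reject } H_i \iff \text{reject } H_J = \bigcap_{j \in J} H_j \text{ at level } \alpha \ \text{ for every } J \ni i.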
Holm–Bonferroni method
In statistics, the Holm–Bonferroni method, also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test that is uniformly more powerful than the Bonferroni correction.
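A minimal sketch of the step-down procedure (the function name holm_reject is illustrative): sort the p-values, compare the k-th smallest to α/(m − k + 1), and stop at the first failure.

    import numpy as np

    def holm_reject(pvals, alpha=0.05):
        # Holm step-down: test sorted p-values against alpha/m, alpha/(m-1), ... until the first failure
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)
        reject = np.zeros(m, dtype=bool)
        for k, idx in enumerate(order):          # k = 0, 1, ..., m-1
            if p[idx] <= alpha / (m - k):        # thresholds alpha/m, alpha/(m-1), ...
                reject[idx] = True
            else:
                break                            # all remaining hypotheses are retained
        return reject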
Look-elsewhere effect
The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may have actually arisen by chance because of the sheer size of the parameter space to be searched.
Harmonic mean p-value
The harmonic mean p-value (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate (this claim has been disputed). It improves on the power of the Bonferroni correction by performing combined tests, i.e. by testing whether groups of p-values are statistically significant, like Fisher's method.
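A sketch of the statistic itself (the weights w_i, summing to one, are assumed notation): for p-values p₁, …, p_L,

    \mathrm{HMP} = \frac{\sum_{i=1}^{L} w_i}{\sum_{i=1}^{L} w_i / p_i} = \frac{1}{\sum_{i=1}^{L} w_i / p_i},

which with equal weights w_i = 1/L reduces to L / \sum_i (1/p_i), the harmonic mean of the p-values.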