Category: Statistical tests

Mann–Whitney U test
In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW/MWU), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that, for randomly selected values X and Y from two populations, the probability of X being greater than Y equals the probability of Y being greater than X.
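
As a concrete illustration, here is a minimal sketch using SciPy's scipy.stats.mannwhitneyu (SciPy and the made-up samples are assumptions, not part of this entry):

    from scipy import stats

    # Two small made-up samples from populations X and Y
    x = [1.2, 2.3, 1.8, 2.9, 3.1]
    y = [2.0, 3.5, 2.8, 4.1, 3.9]

    # Two-sided test of H0: P(X > Y) = P(Y > X)
    u_stat, p_value = stats.mannwhitneyu(x, y, alternative="two-sided")
    print(u_stat, p_value)
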
Breusch–Pagan test
In statistics, the Breusch–Pagan test, developed in 1979 by Trevor Breusch and Adrian Pagan, is used to test for heteroskedasticity in a linear regression model. It was independently suggested, with some extension, by R. Dennis Cook and Sanford Weisberg in 1983.
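
A hedged sketch using statsmodels' het_breuschpagan (statsmodels is an assumption here; the data are simulated so that the error variance grows with the regressor):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 1.0 + 2.0 * x + rng.normal(size=200) * (1.0 + np.abs(x))  # heteroskedastic noise

    X = sm.add_constant(x)
    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
    print(lm_pvalue)  # a small p-value points to heteroskedasticity
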
False positive rate
In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test.
Siegel–Tukey test
In statistics, the Siegel–Tukey test, named after Sidney Siegel and John Tukey, is a non-parametric test which may be applied to data measured at least on an ordinal scale. It tests for differences in scale between two groups.
Van der Waerden test
Named after the Dutch mathematician Bartel Leendert van der Waerden, the Van der Waerden test is a statistical test of the hypothesis that k population distribution functions are equal. The Van der Waerden test converts the ranks of the pooled data to quantiles of the standard normal distribution (normal scores) and bases its test statistic on these scores.
Exact test
In statistics, an exact (significance) test is a test such that if the null hypothesis is true, then all assumptions made during the derivation of the distribution of the test statistic are met. Using an exact test provides a significance test that keeps the Type I error rate at the desired significance level.
Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among means.
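
For the one-way case specifically, SciPy exposes scipy.stats.f_oneway; a minimal sketch with made-up groups (SciPy is an assumption, not part of this entry):

    from scipy import stats

    group_a = [5.1, 4.9, 5.4, 5.0]
    group_b = [5.8, 6.1, 5.9, 6.3]
    group_c = [5.2, 5.5, 5.3, 5.6]

    # H0: all group means are equal
    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f_stat, p_value)
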
Tukey's range test
Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSD (honestly significant difference) test, is a single-step multiple comparison procedure and statistical test; it can be used to find means that are significantly different from each other.
Shapiro–Francia test
The Shapiro–Francia test is a statistical test for the normality of a population, based on sample data. It was introduced by S. S. Shapiro and R. S. Francia in 1972 as a simplification of the Shapiro–Wilk test.
Sargan–Hansen test
The Sargan–Hansen test or Sargan's test is a statistical test used for testing over-identifying restrictions in a statistical model. It was proposed by John Denis Sargan in 1958, and several variants were derived by him in 1975. Lars Peter Hansen later extended it to the generalized method of moments setting.
Wilks' theorem
In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test.
Chauvenet's criterion
In statistical theory, Chauvenet's criterion (named for William Chauvenet) is a means of assessing whether one piece of experimental data, an outlier, from a set of observations is likely to be spurious.
Fay and Wu's H
Fay and Wu's H is a statistical test created by and named after two researchers, Justin Fay and Chung-I Wu. The purpose of the test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under positive selection.
Randomness test
A randomness test (or test for randomness), in data evaluation, is a test used to analyze the distribution of a set of data to see if it can be described as random (patternless). In stochastic modeling, as in some computer simulations, the hoped-for randomness of potential input data can be verified by a formal test for randomness.
Permutation test
A permutation test (also called a re-randomization test) is an exact statistical hypothesis test that makes use of proof by contradiction. A permutation test involves two or more samples; the null hypothesis is that all samples come from the same distribution.
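
A self-contained sketch of a two-sample permutation test for a difference in means (the function name and the Monte Carlo approximation are illustrative choices, not a fixed definition):

    import numpy as np

    def perm_test_mean_diff(a, b, n_perm=10000, seed=0):
        """Approximate permutation p-value for |mean(a) - mean(b)|."""
        rng = np.random.default_rng(seed)
        a, b = np.asarray(a, float), np.asarray(b, float)
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)  # relabel under H0: same distribution
            hits += abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed
        return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0

    print(perm_test_mean_diff([1.1, 2.0, 1.7], [2.4, 3.1, 2.9]))
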
Student's t-test
A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known; when the scaling term is instead estimated from the data, the test statistic (under certain conditions) follows a Student's t distribution.
Cochran's Q test
In statistics, in the analysis of two-way randomized block designs where the response variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q test is a non-parametric statistical test to verify whether k treatments have identical effects.
Welch's t-test
In statistics, Welch's t-test, or unequal variances t-test, is a two-sample location test which is used to test the hypothesis that two populations have equal means. It is named for its creator, Bernard Lewis Welch, and is an adaptation of Student's t-test that is more reliable when the two samples have unequal variances and possibly unequal sample sizes.
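
In SciPy (an assumption here) one function covers both variants: scipy.stats.ttest_ind runs Student's pooled-variance t-test by default and Welch's test with equal_var=False; a sketch on made-up data:

    from scipy import stats

    a = [14.2, 15.1, 13.8, 14.9, 15.3]
    b = [16.0, 17.2, 15.5, 18.1, 16.6]

    t_student, p_student = stats.ttest_ind(a, b)               # pooled variance
    t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(p_student, p_welch)
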
Cuzick–Edwards test
In statistics, the Cuzick–Edwards test is a significance test whose aim is to detect the possible clustering of sub-populations within a clustered or non-uniformly-spread overall population. Possible applications include the detection of spatial clustering of cases of a disease.
Kendall rank correlation coefficient
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities.
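
A minimal sketch with scipy.stats.kendalltau on made-up paired rankings (SciPy is an assumption, not part of this entry):

    from scipy import stats

    x = [1, 2, 3, 4, 5]  # e.g. ranks assigned by one judge
    y = [2, 1, 4, 3, 5]  # ranks assigned by another judge

    tau, p_value = stats.kendalltau(x, y)
    print(tau, p_value)
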
Tukey–Duckworth test
In statistics, the Tukey–Duckworth test is a two-sample location test – a statistical test of whether one of two samples was significantly greater than the other. It was introduced by John Tukey in response to a request by W. E. Duckworth for a test simple enough to be remembered and applied without tables.
Park test
In econometrics, the Park test is a test for heteroscedasticity. The test is based on the method proposed by Rolla Edward Park for estimating linear regression parameters in the presence of heteroscedastic error terms.
Scheirer–Ray–Hare test
The Scheirer–Ray–Hare (SRH) test is a statistical test that can be used to examine whether a measure is affected by two or more factors. Since it does not require a normal distribution of the data, it is a nonparametric alternative to the two-way ANOVA; it extends the Kruskal–Wallis test to designs with more than one factor.
F-test of equality of variances
In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Notionally, any F-test can be regarded as a comparison of two variances, but here the specific case is that of two populations, with the ratio of two sample variances as the test statistic.
Kaiser–Meyer–Olkin test
The Kaiser–Meyer–Olkin (KMO) test is a statistical measure to determine how suited data is for factor analysis. The test measures sampling adequacy for each variable in the model and for the complete model.
Binomial test
In statistics, the binomial test is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations into two categories using sample data.
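
A sketch using scipy.stats.binomtest (available in SciPy 1.7 and later; the counts are made-up):

    from scipy import stats

    # Observed 14 successes in 20 trials; H0: success probability is 0.5
    result = stats.binomtest(14, n=20, p=0.5)
    print(result.pvalue)
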
Paired data
Scientific experiments often consist of comparing two or more sets of data. This data is described as unpaired or independent when the sets of data arise from separate individuals, or paired when it arises from the same individuals measured under different conditions.
Shapiro–Wilk test
The Shapiro–Wilk test is a test of normality in frequentist statistics. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.
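
A minimal sketch with scipy.stats.shapiro on simulated data (SciPy is an assumption here):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=10.0, scale=2.0, size=50)

    w_stat, p_value = stats.shapiro(sample)
    print(w_stat, p_value)  # large p-value: no evidence against normality
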
Sign test
The sign test is a statistical method to test for consistent differences between pairs of observations, such as the weight of subjects before and after treatment. Given pairs of observations (such as weight before and after treatment) for each subject, the sign test determines whether one member of each pair tends to be greater than (or less than) the other.
Cramér–von Mises criterion
In statistics, the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function compared to a given empirical distribution function, or for comparing two empirical distributions.
Dunnett's test
In statistics, Dunnett's test is a multiple comparison procedure developed by Canadian statistician Charles Dunnett to compare each of a number of treatments with a single control. Multiple comparisons to a control are also referred to as many-to-one comparisons.
Sequential probability ratio test
The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald and later proven to be optimal by Wald and Jacob Wolfowitz. Neyman and Pearson's 1933 result inspired Wald to reformulate hypothesis testing as a sequential analysis problem.
Continuity correction
In probability theory, a continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution.
Location test
A location test is a statistical hypothesis test that compares the location parameter of a statistical population to a given constant, or that compares the location parameters of two statistical populations to each other.
Cochran's C test
In statistics, Cochran's C test, named after William G. Cochran, is a one-sided upper limit variance outlier test. The C test is used to decide if a single estimate of a variance (or a standard deviation) is significantly larger than a group of variances (or standard deviations) with which the single estimate is supposed to be comparable.
Mantel test
The Mantel test, named after Nathan Mantel, is a statistical test of the correlation between two matrices. The matrices must be of the same dimension; in most applications, they are matrices of interrelations between the same set of objects.
Lepage test
In statistics, the Lepage test is an exactly distribution-free test (nonparametric test) for jointly monitoring the location (central tendency) and scale (variability) in two-sample treatment-versus-control comparisons.
Likelihood-ratio test
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint.
Surrogate data testing
Surrogate data testing (or the method of surrogate data) is a statistical proof by contradiction technique similar to permutation tests, and as a resampling technique it is related (but different) to parametric bootstrapping.
Information matrix test
In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White, who observed that in a correctly specified model the information matrix can be estimated equivalently from the Hessian of the log-likelihood or from the outer product of the score; the test checks whether these two estimates agree.
Tukey's test of additivity
In statistics, Tukey's test of additivity, named for John Tukey, is an approach used in two-way ANOVA (regression analysis involving two qualitative factors) to assess whether the factor variables (categorical variables) are additively related to the expected value of the response variable.
Duncan's new multiple range test
In statistics, Duncan's new multiple range test (MRT) is a multiple comparison procedure developed by David B. Duncan in 1955. Duncan's MRT belongs to the general class of multiple comparison procedures that use the studentized range statistic to compare sets of means.
ABX test
An ABX test is a method of comparing two choices of sensory stimuli to identify detectable differences between them. A subject is presented with two known samples (sample A, the first reference, and sample B, the second reference), followed by one unknown sample X that is randomly selected from either A or B; the task is to identify X as A or B.
Bartlett's test
In statistics, Bartlett's test, named after Maurice Stevenson Bartlett, is used to test homoscedasticity, that is, if multiple samples are from populations with equal variances. Some statistical tests, such as the analysis of variance, assume that variances are equal across groups or samples; Bartlett's test can be used to verify that assumption.
One-way analysis of variance
In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a technique used to compare the means of two or more samples for significant differences (using the F distribution).
Separation test
A separation test is a statistical procedure for early-phase research, used to decide whether to pursue further research.
Hartley's test
In statistics, Hartley's test, also known as the Fmax test or Hartley's Fmax, is used in the analysis of variance to verify that different groups have a similar variance, an assumption needed for other statistical tests.
Tajima's D
Tajima's D is a population genetic test statistic created by and named after the Japanese researcher Fumio Tajima. Tajima's D is computed as the difference between two measures of genetic diversity: the mean number of pairwise differences and the number of segregating sites, each scaled so that they are expected to be equal in a neutrally evolving population of constant size.
Nemenyi test
In statistics, the Nemenyi test is a post-hoc test intended to find the groups of data that differ after a global statistical test (such as the Friedman test) has rejected the null hypothesis that the groups do not differ. It is named after Peter Nemenyi.
Checking whether a coin is fair
In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference.
Test statistic
A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data set that reduces the data to one value which can be used to perform the hypothesis test.
GRIM test
The granularity-related inconsistency of means (GRIM) test is a simple statistical test used to identify inconsistencies in the analysis of data sets. The test relies on the fact that, given a sample of N integer-valued observations, the mean must (before rounding) be a multiple of 1/N; reported means that cannot be expressed this way are arithmetically impossible.
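
The check itself is elementary enough to sketch directly; the function below is an illustrative implementation of the idea, with the function name and rounding convention as assumptions:

    import math

    def grim_consistent(reported_mean, n, decimals=2):
        """Can a mean reported to `decimals` places arise from n integers?"""
        target = round(reported_mean, decimals)
        # Any achievable mean is k/n for an integer total k; check the two
        # candidates nearest the reported mean.
        for k in (math.floor(reported_mean * n), math.ceil(reported_mean * n)):
            if round(k / n, decimals) == target:
                return True
        return False

    print(grim_consistent(5.19, 28))  # False: no 28 integers average to 5.19
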
Breusch–Godfrey test
In statistics, the Breusch–Godfrey test is used to assess the validity of some of the modelling assumptions inherent in applying regression-like models to observed data series. In particular, it tests for the presence of serial correlation that has not been included in a proposed model structure.
Logrank test
The logrank test, or log-rank test, is a hypothesis test to compare the survival distributions of two samples. It is a nonparametric test and appropriate to use when the data are right skewed and censored.
Hosmer–Lemeshow test
The Hosmer–Lemeshow test is a statistical test for goodness of fit for logistic regression models. It is used frequently in risk prediction models. The test assesses whether or not the observed event rates match expected event rates in subgroups of the model population.
Goodman and Kruskal's gamma
In statistics, Goodman and Kruskal's gamma is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities. It measures the strength of association of the cross-tabulated data when both variables are measured at the ordinal level.
Normality test
In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
Lexis ratio
The Lexis ratio is used in statistics as a measure which seeks to evaluate differences between the statistical properties of random mechanisms where the outcome is two-valued, for example "success" or "failure".
Kolmogorov–Smirnov test
In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous), one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test).
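
SciPy (an assumption here) provides both forms: scipy.stats.kstest for the one-sample test against a reference distribution and scipy.stats.ks_2samp for two samples; a sketch on simulated data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sample = rng.normal(size=100)

    print(stats.kstest(sample, "norm"))                    # one-sample, vs. standard normal
    print(stats.ks_2samp(sample, rng.uniform(size=100)))   # two-sample
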
Score test
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis.
Levene's test
In statistics, Levene's test is an inferential statistic used to assess the equality of variances for a variable calculated for two or more groups. Some common statistical procedures assume that variances of the populations from which different samples are drawn are equal; Levene's test assesses this assumption.
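
A sketch with scipy.stats.levene (SciPy is an assumption); its center argument selects the mean (classic Levene) or the median (the Brown–Forsythe variant):

    from scipy import stats

    g1 = [8.9, 9.1, 10.2, 9.8]
    g2 = [7.2, 11.5, 8.8, 12.1]
    g3 = [9.0, 9.4, 9.2, 9.1]

    stat, p_value = stats.levene(g1, g2, g3, center="mean")
    print(stat, p_value)
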
Jonckheere's trend test
In statistics, the Jonckheere trend test (sometimes called the Jonckheere–Terpstra test) is a test for an ordered alternative hypothesis within an independent samples (between-participants) design. It is similar to the Kruskal–Wallis test in comparing several independent samples, but it additionally specifies an a priori ordering under the alternative hypothesis.
Durbin test
In the analysis of designed experiments, the Friedman test is the most common non-parametric test for complete block designs. The Durbin test is a nonparametric test for balanced incomplete block designs that reduces to the Friedman test in the case of a complete block design.
One- and two-tailed tests
In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic.
Analysis of similarities
Analysis of similarities (ANOSIM) is a non-parametric statistical test widely used in the field of ecology. The test was first suggested by K. R. Clarke as an ANOVA-like test, where instead of operating on raw data, it operates on a ranked dissimilarity matrix.
Neyman–Pearson lemma
In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind, the power function, and inductive behavior.
Goldfeld–Quandt test
In statistics, the Goldfeld–Quandt test checks for homoscedasticity in regression analyses. It does this by dividing a dataset into two parts or groups, and hence the test is sometimes called a two-group test.
Glejser test
In statistics, the Glejser test for heteroscedasticity, developed in 1969 by Herbert Glejser, regresses the absolute values of the residuals on the explanatory variable that is thought to be related to the heteroscedastic variance; a significant coefficient in this auxiliary regression indicates heteroscedasticity.
Multinomial test
In statistics, the multinomial test is the test of the null hypothesis that the parameters of a multinomial distribution equal specified values; it is used for categorical data. Beginning with a sample of N items, each observed to fall into one of k categories, the exact probability of the observed counts can be computed under the null hypothesis.
White test
In statistics, the White test is a statistical test that establishes whether the variance of the errors in a regression model is constant: that is, for homoskedasticity. This test, and an estimator for heteroscedasticity-consistent standard errors, were proposed by Halbert White in 1980.
Closed testing procedure
In statistics, the closed testing procedure is a general method for performing more than one hypothesis test simultaneously.
Kuiper's test
Kuiper's test is used in statistics to test whether a given distribution, or family of distributions, is contradicted by evidence from a sample of data. It is named after Dutch mathematician Nicolaas Kuiper.
Sobel test
In statistics, the Sobel test is a method of testing the significance of a mediation effect. The test is based on the work of Michael E. Sobel, a statistics professor at Columbia University in New York City.
Kruskal–Wallis one-way analysis of variance
The Kruskal–Wallis test by ranks, Kruskal–Wallis H test (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks is a non-parametric method for testing whether samples originate from the same distribution.
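
A minimal sketch with scipy.stats.kruskal on made-up samples (SciPy is an assumption here):

    from scipy import stats

    s1 = [2.9, 3.0, 2.5, 2.6, 3.2]
    s2 = [3.8, 2.7, 4.0, 2.4]
    s3 = [2.8, 3.4, 3.7, 2.2, 2.0]

    h_stat, p_value = stats.kruskal(s1, s2, s3)
    print(h_stat, p_value)
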
Median test
In statistics, Mood's median test is a special case of Pearson's chi-squared test. It is a nonparametric test that tests the null hypothesis that the medians of the populations from which two or more samples are drawn are identical.
Omnibus test
Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance.
Wilcoxon signed-rank test
The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used either to test the location of a population based on a sample of data, or to compare the locations of two populations using two matched samples.
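
A sketch with scipy.stats.wilcoxon on made-up paired measurements (SciPy is an assumption):

    from scipy import stats

    before = [125, 115, 130, 140, 140, 115, 140, 125]
    after  = [110, 122, 125, 120, 140, 124, 123, 137]

    # Paired test on the differences before - after (zero differences are dropped)
    w_stat, p_value = stats.wilcoxon(before, after)
    print(w_stat, p_value)
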
Structural break test
In econometrics and statistics, a structural break test checks whether the parameters of a regression or time-series model remain stable over the sample or shift at some break point; the Chow test is a classic example.
Hoeffding's independence test
In statistics, Hoeffding's test of independence, named after Wassily Hoeffding, is a test based on the population measure of deviation from independence H = ∫ (F12 − F1·F2)² dF12, where F12 is the joint distribution function of two random variables and F1 and F2 are their marginal distribution functions.
Item-total correlation
The item-total correlation test arises in psychometrics in contexts where a number of tests or questions are given to an individual and where the problem is to construct a useful single quantity for each individual that can be used to compare that individual with others in a given population.
Fisher's method
In statistics, Fisher's method, also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses). It was developed by and named for Ronald Fisher, and in its basic form it combines the p-values from several independent tests bearing on the same overall hypothesis.
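
SciPy (an assumption here) implements this as scipy.stats.combine_pvalues; a sketch combining three made-up p-values:

    from scipy import stats

    pvals = [0.04, 0.20, 0.01]
    chi2_stat, combined_p = stats.combine_pvalues(pvals, method="fisher")
    print(chi2_stat, combined_p)
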
Anderson–Darling test
The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, in which case the test and its set of critical values are distribution-free.
Vuong's closeness test
In statistics, the Vuong closeness test is a likelihood-ratio-based test for model selection using the Kullback–Leibler information criterion. This statistic makes probabilistic statements about two models: it tests the null hypothesis that the two models are equally close to the true data-generating process, against the alternative that one model is closer.
Grubbs's test
In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population.
Page's trend test
In statistics, the Page test for multiple comparisons between ordered correlated variables is the counterpart of Spearman's rank correlation coefficient.
Q-statistic
The Q-statistic is a test statistic output by either the Box–Pierce test or, in a modified version with better small-sample properties, the Ljung–Box test. It asymptotically follows the chi-squared distribution under the null hypothesis of no autocorrelation.
F-test
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled.
P-rep
In statistical hypothesis testing, p-rep or prep has been proposed as a statistical alternative to the classic p-value. Whereas a p-value is the probability of obtaining a result under the null hypothesis, p-rep purports to estimate the probability of replicating an effect.
Brown–Forsythe test
The Brown–Forsythe test is a statistical test for the equality of group variances based on performing an Analysis of Variance (ANOVA) on a transformation of the response variable. When a one-way ANOVA is performed, samples are assumed to have been drawn from distributions with equal variance; the Brown–Forsythe test uses the absolute deviations from group medians to check this assumption.
Friedman test
The Friedman test is a non-parametric statistical test developed by Milton Friedman. Similar to the parametric repeated measures ANOVA, it is used to detect differences in treatments across multiple test attempts.
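
A sketch with scipy.stats.friedmanchisquare (SciPy is an assumption), where each list holds one treatment's measurements across the same subjects:

    from scipy import stats

    treat_1 = [7.0, 9.9, 8.5, 5.1, 10.3]
    treat_2 = [5.3, 5.7, 4.7, 3.5, 7.7]
    treat_3 = [4.9, 7.6, 5.5, 2.8, 8.4]

    chi2_stat, p_value = stats.friedmanchisquare(treat_1, treat_2, treat_3)
    print(chi2_stat, p_value)
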
Phillips–Perron test
In statistics, the Phillips–Perron test (named after Peter C. B. Phillips and Pierre Perron) is a unit root test. That is, it is used in time series analysis to test the null hypothesis that a time series is integrated of order 1.
QST (genetics)
In quantitative genetics, QST is a statistic intended to measure the degree of genetic differentiation among populations with regard to a quantitative trait. It was developed by Ken Spitze in 1993. It is analogous to FST, the fixation index, which measures differentiation at a single genetic locus.
Squared ranks test
In statistics, the Conover squared ranks test is a non-parametric version of the parametric Levene's test for equality of variance.
Spearman's rank correlation coefficient
In statistics, Spearman's rank correlation coefficient or Spearman's ρ, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables).
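
A minimal sketch with scipy.stats.spearmanr on made-up paired data (SciPy is an assumption here):

    from scipy import stats

    x = [35, 23, 47, 17, 10, 43, 9, 6, 28]
    y = [30, 33, 45, 23, 8, 49, 12, 4, 31]

    rho, p_value = stats.spearmanr(x, y)
    print(rho, p_value)
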
Z-test
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. Z-tests test the mean of a distribution. For each significance level, the Z-test has a single critical value (for example, 1.96 for a 5% two-tailed test), which makes it more convenient than the Student's t-test, whose critical values depend on the sample size.
Durbin–Wu–Hausman test
The Durbin–Wu–Hausman test (also called Hausman specification test) is a statistical hypothesis test in econometrics named after James Durbin, De-Min Wu, and Jerry A. Hausman. The test evaluates the consistency of an estimator when compared to an alternative, less efficient estimator which is already known to be consistent.
McNemar's test
In statistics, McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is "marginal homogeneity").
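
A sketch using statsmodels' mcnemar (statsmodels is an assumption) on a made-up 2 × 2 table of matched pairs (rows: first condition, columns: second condition):

    from statsmodels.stats.contingency_tables import mcnemar

    table = [[59, 6],
             [16, 80]]

    result = mcnemar(table, exact=True)  # exact binomial version
    print(result.statistic, result.pvalue)
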
Dixon's Q test
In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. It assumes a normal distribution and, per Robert Dean and Wilfrid Dixon and others, should be used sparingly and never more than once in a data set.
Ramsey RESET test
In statistics, the Ramsey Regression Equation Specification Error Test (RESET) is a general specification test for the linear regression model. More specifically, it tests whether non-linear combinations of the fitted values help explain the response variable.
Holm–Bonferroni method
In statistics, the Holm–Bonferroni method, also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test uniformly more powerful than the Bonferroni correction.
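
The step-down procedure is short enough to sketch directly; the following is an illustrative implementation, not a library API:

    import numpy as np

    def holm_bonferroni(pvals, alpha=0.05):
        """Return a boolean rejection decision for each hypothesis."""
        p = np.asarray(pvals, float)
        m = p.size
        reject = np.zeros(m, dtype=bool)
        for rank, idx in enumerate(np.argsort(p)):
            if p[idx] <= alpha / (m - rank):  # threshold shrinks step by step
                reject[idx] = True
            else:
                break  # first failure: retain this and all larger p-values
        return reject

    print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [ True False False  True]
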
Wald test
In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate.
Location testing for Gaussian scale mixture distributions
In statistics, the topic of location testing for Gaussian scale mixture distributions arises in some particular types of situations where the more standard Student's t-test is inapplicable.
Wald–Wolfowitz runs test
The Wald–Wolfowitz runs test (or simply runs test), named after statisticians Abraham Wald and Jacob Wolfowitz, is a non-parametric statistical test that checks a randomness hypothesis for a two-valued data sequence.
Mauchly's sphericity test
Mauchly's sphericity test or Mauchly's W is a statistical test used to validate a repeated measures analysis of variance (ANOVA). It was developed in 1940 by John Mauchly.