Category: Statistical hypothesis testing

False coverage rate
In statistics, a false coverage rate (FCR) is the average rate of false coverage, i.e. not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a (1 − α)×100% level for all of the parameters considered in the selection.
Further research is needed
The phrases "further research is needed" (FRIN), "more research is needed" and other variants are commonly used in research papers. The cliché is so common that it has attracted research, regulation and satire.
Omnibus test
Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance.
Round-robin test
In experimental methodology, a round-robin test is an interlaboratory test (measurement, analysis, or experiment) performed independently several times. This can involve multiple independent scientists performing the test with the use of the same method in different equipment, or a variety of methods and equipment.
W-test
In statistics, the W-test is designed to test the distributional differences between cases and controls for a categorical variable set, which can be a single SNP, a SNP–SNP pair, or a SNP–environment pair. Its test statistic follows a chi-squared distribution with data-estimated degrees of freedom.
Family-wise error rate
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries (type I errors) when performing multiple hypothesis tests.
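
As a quick illustration (a Python sketch assuming independent tests, the simplest case), the FWER grows rapidly with the number of tests carried out at a fixed per-test level α:

```python
# Sketch: family-wise error rate for m independent tests, each at level alpha.
# Under independence, FWER = 1 - (1 - alpha)**m.
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** m
    print(f"m = {m:3d} tests -> FWER = {fwer:.3f}")
```
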
P-value
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.
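
A minimal Monte Carlo sketch of what "at least as extreme under the null" means, using an illustrative coin-flip example (the numbers are made up):

```python
import random

# Sketch: two-sided Monte Carlo p-value for H0: fair coin,
# having observed 60 heads in 100 flips (illustrative numbers).
random.seed(0)
observed_heads, n, reps = 60, 100, 20_000
null_draws = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(reps)]
# "At least as extreme": as far or farther from the null expectation n/2.
extreme = sum(abs(k - n / 2) >= abs(observed_heads - n / 2) for k in null_draws)
print(f"p-value ~ {extreme / reps:.4f}")  # the exact binomial answer is ~0.057
```
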
Testing hypotheses suggested by the data
In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning (double dipping) would be involved: the same data that suggested the hypothesis are used to confirm it.
Permutational analysis of variance
Permutational multivariate analysis of variance (PERMANOVA) is a non-parametric multivariate statistical permutation test. PERMANOVA is used to compare groups of objects and test the null hypothesis that the centroids and dispersion of the groups, as defined by the measure space, are equivalent for all groups.
Paired difference test
In statistics, a paired difference test is a type of location test that is used when comparing two sets of measurements to assess whether their population means differ. A paired difference test uses additional information about the sample that is not present in an ordinary unpaired testing situation, either to increase the statistical power or to reduce the effects of confounders.
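
A sketch of the computation with made-up before/after measurements (SciPy is assumed available for the t distribution): the paired test reduces to a one-sample t-test on the within-pair differences.

```python
import math
from scipy import stats  # assumed available, used only for the t distribution

# Sketch: paired t-test on illustrative before/after measurements.
before = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9, 13.2, 12.0]
after  = [11.6, 11.4, 12.5, 12.3, 11.0, 12.4, 12.9, 11.8]
diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))    # one-sample t statistic on the differences
p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-sided p-value
print(f"t = {t:.3f}, p = {p:.4f}")
# scipy.stats.ttest_rel(before, after) computes the same test directly.
```
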
Monotone likelihood ratio
In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions f(x) and g(x) bear the property if f(x2)/g(x2) ≥ f(x1)/g(x1) for every x2 > x1, that is, if the ratio f(x)/g(x) is nondecreasing in the argument x.
Probability of error
In statistics, the term "error" arises in two ways. Firstly, it arises in the context of decision making, where the probability of error may be considered as being the probability of making a wrong decision. Secondly, it arises in the context of statistical modelling, where the model's predicted value may be in error regarding the observed outcome.
Asymmetric cointegration
In economics, testing for an asymmetric cointegration relationship among variables implies distinguishing the positive and the negative effects of the error obtained from the cointegration regression.
Box's M test
Box's M test is a multivariate statistical test used to check the equality of multiple variance-covariance matrices. The test is commonly used to test the assumption of homogeneity of variances and covariances in multivariate analysis of variance (MANOVA) and linear discriminant analysis.
Behrens–Fisher problem
In statistics, the Behrens–Fisher problem, named after Walter Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.
Genome-wide significance
In genome-wide association studies, genome-wide significance (abbreviated GWS) is a specific threshold for determining the statistical significance of a reported association between a given single-nucleotide polymorphism (SNP) and a given trait. The most commonly used threshold is p < 5×10⁻⁸.
Rare disease assumption
The rare disease assumption is a mathematical assumption in epidemiologic case-control studies where the hypothesis tests the association between an exposure and a disease. It is assumed that, if the prevalence of the disease is low, then the odds ratio approaches the relative risk.
Size (statistics)
In statistics, the size of a test is the probability of falsely rejecting the null hypothesis. That is, it is the probability of making a type I error. It is denoted by the Greek letter α (alpha). For a simple null hypothesis, the size is the probability of rejecting it when it is true; for a composite null hypothesis, the size is the supremum of this rejection probability over all cases covered by the null hypothesis.
Null hypothesis
In inferential statistics, the null hypothesis (often denoted H0) is that two possibilities are the same. The null hypothesis is that the observed difference is due to chance alone. Using statistical tests, it is possible to estimate how likely the observed data would be if the null hypothesis were true, and to reject the null hypothesis when that likelihood is sufficiently small.
Counternull
In statistics, and especially in the statistical analysis of psychological data, the counternull is a statistic used to aid the understanding and presentation of research results. It revolves around the counternull value of an effect size: the nonnull magnitude of effect size that is supported by exactly the same amount of evidence as the null value.
False discovery rate
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null).
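
The best-known FDR-controlling procedure is the Benjamini–Hochberg step-up rule; a minimal sketch (the p-values are illustrative):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Sketch of the Benjamini-Hochberg step-up procedure, which controls
    the FDR at level q for independent tests. Returns rejected indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])  # reject the k_max smallest p-values

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # -> [0, 1]
```
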
Sequential analysis
In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead, data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed.
Alternative hypothesis
In statistical hypothesis testing, the alternative hypothesis is one of the proposed propositions in the hypothesis test. In general, the goal of a hypothesis test is to demonstrate that, under the given conditions, there is sufficient evidence supporting the credibility of the alternative hypothesis rather than the default proposition of the test, the null hypothesis.
Type III error
In statistical hypothesis testing, there are various notions of so-called type III errors (or errors of the third kind), and sometimes type IV errors or higher, by analogy with the type I and type II errors of Jerzy Neyman and Egon Pearson.
Uniformly most powerful test
In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power among all possible tests of a given size α. For example, according to the Neyman–Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.
Zero degrees of freedom
In statistics, the non-central chi-squared distribution with zero degrees of freedom can be used in testing the null hypothesis that a sample is from a uniform distribution on the interval (0, 1). This distribution was introduced by Andrew F. Siegel in 1979.
Null distribution
In statistical hypothesis testing, the null distribution is the probability distribution of the test statistic when the null hypothesis is true. For example, in an F-test, the null distribution is an F-distribution.
Deviance (statistics)
In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing. It is a generalization of the idea of using the sum of squares of residuals (SSR) in ordinary least squares to cases where model-fitting is achieved by maximum likelihood.
Lindley's paradox
Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution.
Bonferroni correction
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem. It is the simplest method for doing so; however, it is conservative, and it can substantially reduce statistical power when many comparisons are made.
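
A sketch of the correction itself (the p-values are illustrative): each of the m hypotheses is simply tested at the stricter level α/m.

```python
# Sketch: Bonferroni correction -- test each of m hypotheses at alpha / m,
# which bounds the family-wise error rate by alpha.
alpha, m = 0.05, 12
pvals = [0.001, 0.020, 0.004, 0.310, 0.0003, 0.048] + [0.5] * 6  # illustrative
threshold = alpha / m
rejected = [i for i, p in enumerate(pvals) if p <= threshold]
print(f"per-test threshold = {threshold:.5f}, rejected: {rejected}")  # [0, 2, 4]
```
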
P-rep
In statistical hypothesis testing, p-rep or prep has been proposed as a statistical alternative to the classic p-value. Whereas a p-value is the probability of obtaining a result under the null hypothesis, p-rep was intended to compute the probability of replicating an effect.
Kelly's ZnS
Kelly's ZnS is a test statistic that can be used to test a genetic region for deviations from the neutral model, based on the squared correlation of allelic identity between loci.
Lack-of-fit sum of squares
In statistics, a sum of squares due to lack of fit, or more tersely a lack-of-fit sum of squares, is one of the components of a partition of the sum of squares of residuals in an analysis of variance, used in the numerator in an F-test of the null hypothesis that a proposed model fits well.
Lady tasting tea
In the design of experiments in statistics, the lady tasting tea is a randomized experiment devised by Ronald Fisher and reported in his book The Design of Experiments (1935). The experiment is the original exposition of Fisher's notion of a null hypothesis.
Power of a test
In statistics, the power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis when a specific alternative hypothesis is true. It is commonly denoted by 1 − β, where β is the probability of making a type II error (failing to reject the null hypothesis when the alternative is true).
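
Power is often estimated by simulation; a Monte Carlo sketch with illustrative parameters (a z-test with known variance keeps the code short):

```python
import math
import random

# Sketch: Monte Carlo power of a two-sided one-sample z-test of H0: mu = 0
# (known sigma = 1) against the alternative mu = 0.5, with n = 30.
random.seed(1)
n, mu_alt, reps = 30, 0.5, 20_000
z_crit = 1.959964  # two-sided 5% critical value of the standard normal
rejections = 0
for _ in range(reps):
    xbar = sum(random.gauss(mu_alt, 1.0) for _ in range(n)) / n
    z = xbar * math.sqrt(n)  # (xbar - 0) / (sigma / sqrt(n)) with sigma = 1
    rejections += abs(z) > z_crit
print(f"estimated power ~ {rejections / reps:.3f}")  # theory gives ~0.782
```
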
Q-value (statistics)
In statistical hypothesis testing, specifically multiple hypothesis testing, the q-value provides a means to control the positive false discovery rate (pFDR). Just as the p-value gives the expected false positive rate obtained by rejecting the null hypothesis for any result with an equal or smaller p-value, the q-value gives the expected pFDR obtained by rejecting the null hypothesis for any result with an equal or smaller q-value.
Simple hypothesis
A simple hypothesis is a hypothesis that completely specifies the probability distribution of the data, in contrast to a composite hypothesis, which leaves at least one parameter unspecified.
Data dredging
Data dredging (also known as data snooping or p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives.
Generalized p-value
In statistics, a generalized p-value is an extended version of the classical p-value, which, except in a limited number of applications, provides only approximate solutions. Conventional statistical methods do not provide exact solutions to many statistical problems, such as those arising in mixed models and MANOVA, especially when the problem involves a number of nuisance parameters.
Glejser test
In statistics, the Glejser test for heteroscedasticity, developed in 1969 by Herbert Glejser, regresses the absolute values of the residuals on the explanatory variable that is thought to be related to the heteroscedastic variance. After it was found not to be asymptotically valid under asymmetric disturbances, improved versions of the test have been proposed.
Multiple comparisons problem
In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values.
Error exponents in hypothesis testing
In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of type I and type II error decay exponentially with the size of the sample used in the test.
Misuse of p-values
Misuse of p-values is common in scientific research and scientific education. p-values are often used or interpreted incorrectly; the American Statistical Association states that p-values can indicate how incompatible the data are with a specified statistical model.
Equivalence test
Equivalence tests are a variety of hypothesis tests used to draw statistical inferences from observed data. In these tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound; the alternative hypothesis is any effect that is less extreme than that bound.
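
One common equivalence test is the two one-sided tests (TOST) procedure; below is a sketch for a single mean under assumed normality, with illustrative data and equivalence bounds (SciPy is assumed available for the t distribution):

```python
import math
from scipy import stats  # assumed available, used only for the t distribution

# Sketch: TOST for equivalence of a mean to 0 within bounds (-0.5, +0.5).
x = [0.12, -0.30, 0.05, 0.22, -0.11, 0.08, -0.02, 0.19, -0.25, 0.10]
n = len(x)
mean = sum(x) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
se = sd / math.sqrt(n)
low, high = -0.5, 0.5
t_low = (mean - low) / se    # H0: mean <= low   vs  H1: mean > low
t_high = (mean - high) / se  # H0: mean >= high  vs  H1: mean < high
p_low = stats.t.sf(t_low, df=n - 1)
p_high = stats.t.cdf(t_high, df=n - 1)
p_tost = max(p_low, p_high)  # equivalence is declared if this is <= alpha
print(f"TOST p = {p_tost:.4g}")
```
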
Anna Karenina principle
The Anna Karenina principle states that a deficiency in any one of a number of factors dooms an endeavor to failure. Consequently, a successful endeavor (subject to this principle) is one for which every possible deficiency has been avoided.
Effect size
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value.
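
One widely used standardized effect size is Cohen's d for two independent samples; a sketch using the pooled-standard-deviation convention (one of several; the data are illustrative):

```python
import math

def cohens_d(x, y):
    """Sketch: Cohen's d for two independent samples,
    (mean(x) - mean(y)) / pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

print(f"d = {cohens_d([5.1, 4.8, 5.6, 5.2, 4.9], [4.2, 4.5, 4.1, 4.6, 4.0]):.2f}")
```
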
Two-sample hypothesis testing
In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant.
Statistical hypothesis testing
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.
Per-comparison error rate
In statistics, per-comparison error rate (PCER) is the probability of a Type I error in the absence of any multiple hypothesis testing correction. This is a liberal error rate relative to the false discovery rate and family-wise error rate, in that it is always less than or equal to those rates.
Closed testing procedure
In statistics, the closed testing procedure is a general method for performing more than one hypothesis test simultaneously.
Statisticians' and engineers' cross-reference of statistical terms
The following terms are used by electrical engineers in statistical signal processing studies instead of typical statistician's terms. In other engineering fields, particularly mechanical engineering, similar differences in terminology occur.
Almost sure hypothesis testing
In statistics, almost sure hypothesis testing or a.s. hypothesis testing utilizes almost sure convergence in order to determine the validity of a statistical hypothesis with probability one. This is in contrast to classical hypothesis testing, which controls the probability of a type I error only at a fixed significance level.
Type I and type II errors
In statistical hypothesis testing, a type I error is the mistaken rejection of an actually true null hypothesis (also known as a "false positive" finding or conclusion; example: "an innocent person is convicted"), while a type II error is the failure to reject a null hypothesis that is actually false (a "false negative"; example: "a guilty person is not convicted").
Dichotomous thinking
In statistics, dichotomous thinking or binary thinking is the process of seeing a discontinuity in the possible values that a p-value can take during null hypothesis significance testing: it is either significant (at or below the chosen threshold, typically 0.05) or not significant.
Uncomfortable science
Uncomfortable science, as identified by statistician John Tukey, comprises situations in which there is a need to draw an inference from a limited sample of data, where further samples influenced by the same cause system will not be available.
Holm–Bonferroni method
In statistics, the Holm–Bonferroni method, also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test that is uniformly more powerful than the Bonferroni correction.
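
A minimal sketch of the step-down rule (the p-values are illustrative); it rejects everything the Bonferroni correction rejects, and possibly more:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Sketch of the Holm step-down procedure: compare the k-th smallest
    p-value with alpha / (m - k + 1) and stop at the first failure.
    Controls the FWER at level alpha; returns rejected indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = []
    for k, i in enumerate(order, start=1):
        if pvals[i] <= alpha / (m - k + 1):
            rejected.append(i)
        else:
            break  # all remaining (larger) p-values are retained
    return sorted(rejected)

# Holm rejects all four here; plain Bonferroni (threshold 0.0125) would not.
print(holm_bonferroni([0.010, 0.013, 0.044, 0.002], alpha=0.05))  # [0, 1, 2, 3]
```
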
Test statistic
A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data set that reduces the data to one value that can be used to perform the hypothesis test.
Optimality criterion
In statistics, an optimality criterion provides a measure of the fit of the data to a given hypothesis, to aid in model selection. A model is designated as the "best" of the candidate models if it gives the best value of the objective function defined by the criterion.
Energy distance
Energy distance is a statistical distance between probability distributions. If X and Y are independent random vectors in Rd with cumulative distribution functions (cdf) F and G respectively, then the energy distance between the distributions F and G is the square root of D²(F, G) = 2E‖X − Y‖ − E‖X − X′‖ − E‖Y − Y′‖, where X′ and Y′ are independent copies of X and Y.
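
A sketch of the plug-in sample estimate (NumPy is assumed available; the V-statistic form, which averages over all pairs including the zero diagonal, keeps the quantity under the square root nonnegative):

```python
import numpy as np  # assumed available

def energy_distance(x, y):
    """Sketch: sample energy distance between two sets of vectors in R^d,
    sqrt(2*E||X-Y|| - E||X-X'|| - E||Y-Y'||) with plug-in sample means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).mean()
    b = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1).mean()
    c = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1).mean()
    return np.sqrt(2 * a - b - c)

rng = np.random.default_rng(0)
same = energy_distance(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
shifted = energy_distance(rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (200, 2)))
print(f"same distribution: {same:.3f}, shifted: {shifted:.3f}")
```
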
Compact letter display
Compact Letter Display (CLD) is a statistical method to clarify the output of multiple hypothesis testing when using the ANOVA and Tukey's range tests. CLD can also be applied following Duncan's new multiple range test.
Minimum chi-square estimation
In statistics, minimum chi-square estimation is a method of estimation of unobserved quantities based on observed data. In certain chi-square tests, one rejects a null hypothesis about a population distribution if a specified test statistic is too large, when that statistic would have approximately a chi-square distribution if the null hypothesis were true.
Cohen's h
In statistics, Cohen's h, popularized by Jacob Cohen, is a measure of distance between two proportions or probabilities. Cohen's h has several related uses: it can be used to describe the difference between two proportions as small, medium, or large; to determine whether a difference between two proportions is meaningful; and in calculating the sample size or power for a future study.
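
A sketch of the computation (the proportions are illustrative): h is the difference of arcsine-transformed proportions.

```python
import math

def cohens_h(p1, p2):
    """Sketch: Cohen's h = 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Rule of thumb suggested by Cohen: ~0.2 small, ~0.5 medium, ~0.8 large.
print(f"h = {cohens_h(0.70, 0.50):.3f}")  # ~0.41, roughly a medium effect
```
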
Statistical significance
In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis (simply by chance alone). More precisely, a study's defined significance level, denoted by α, is the probability of the study rejecting the null hypothesis given that the null hypothesis is true; a result is statistically significant, by the standards of the study, when its p-value is at most α.
Harmonic mean p-value
The harmonic mean p-value (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate (this claim has been disputed). It improves on the power of the Bonferroni correction by performing combined tests on groups of p-values.
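
A sketch of the statistic itself (the p-values are illustrative). Note that interpreting the HMP as a p-value requires adjusted significance thresholds from the original HMP work; this sketch computes only the weighted harmonic mean.

```python
def harmonic_mean_p(pvals, weights=None):
    """Sketch: weighted harmonic mean of p-values,
    HMP = (sum of w_i) / (sum of w_i / p_i), equal weights by default."""
    if weights is None:
        weights = [1.0 / len(pvals)] * len(pvals)
    return sum(weights) / sum(w / p for w, p in zip(weights, pvals))

print(f"HMP = {harmonic_mean_p([0.03, 0.20, 0.45, 0.60]):.4f}")  # ~0.095
```
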