Category: Statistical ratios

False positive rate
In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test.
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation to the mean.
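The defining ratio can be sketched in a few lines of Python (a minimal illustration using the sample standard deviation; the function name is ours, not a standard API):

```python
import statistics

def coefficient_of_variation(data):
    """CV = sample standard deviation / mean (meaningful for positive, ratio-scale data)."""
    return statistics.stdev(data) / statistics.mean(data)

# Same absolute spread (stdev = 2), but relative to a larger mean the CV shrinks.
low_mean_cv = coefficient_of_variation([10, 12, 14])     # 2 / 12
high_mean_cv = coefficient_of_variation([100, 102, 104])  # 2 / 102
```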
P4-metric
The P4 metric enables performance evaluation of a binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in a similar way to the F1 metric, being defined as the harmonic mean of all four of these quantities.
Positive and negative predictive values
The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively.
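From the four counts of a 2×2 confusion matrix, both values reduce to simple proportions; a minimal sketch (the function name is illustrative):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV = TP / (TP + FP); NPV = TN / (TN + FN)."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# 90 true positives against 10 false positives gives a PPV of 0.9;
# 80 true negatives against 20 false negatives gives an NPV of 0.8.
ppv, npv = predictive_values(tp=90, fp=10, tn=80, fn=20)
```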
Normalization (statistics)
In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging.
Experimental event rate
In epidemiology and biostatistics, the experimental event rate (EER) is a measure of how often a particular statistical event (such as response to a drug, adverse event or death) occurs within the experimental (treatment) group of an experiment.
Bayes factor
The Bayes factor is a ratio of two competing statistical models represented by their marginal likelihoods, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative.
Cramér's V
In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φc) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.
Pseudo-R-squared
Pseudo-R-squared values are used when the outcome variable is nominal or ordinal, such that the coefficient of determination R2 cannot be applied as a measure of goodness of fit. In linear regression, R2 represents the proportion of variance in the outcome explained by the predictors; pseudo-R-squared measures serve an analogous role for such models.
T-statistic
In statistics, the t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. It is used in hypothesis testing via Student's t-test.
Ratio distribution
A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two (usually independent) random variables X and Y, the distribution of the random variable Z = X/Y is a ratio distribution.
Standard score
In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured.
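The computation is a one-line ratio; a minimal sketch (the function name is illustrative):

```python
def z_score(x, mean, std_dev):
    """Number of standard deviations x lies above (+) or below (-) the mean."""
    return (x - mean) / std_dev

# A raw score of 85 in a population with mean 70 and standard deviation 10
# is 1.5 standard deviations above the mean.
z = z_score(85, 70, 10)  # 1.5
```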
Standardized mortality ratio
In epidemiology, the standardized mortality ratio or SMR is a quantity, expressed as either a ratio or percentage, quantifying the increase or decrease in mortality of a study cohort with respect to the general population.
Attack rate
In epidemiology, the attack rate is the proportion of an at-risk population that contracts the disease during a specified time interval. It is used in hypothetical predictions and during actual outbreaks of disease.
Sortino ratio
The Sortino ratio measures the risk-adjusted return of an investment asset, portfolio, or strategy. It is a modification of the Sharpe ratio but penalizes only those returns falling below a user-specified target or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally.
Lexis ratio
The Lexis ratio is used in statistics as a measure which seeks to evaluate differences between the statistical properties of random mechanisms where the outcome is two-valued — for example "success" or "failure".
Beta (finance)
In finance, the beta (β or market beta or beta coefficient) is a measure of how an individual asset moves (on average) when the overall stock market increases or decreases. Thus, beta is a useful measure of the contribution of an individual asset to the risk of the market portfolio.
Index of dispersion
In probability theory and statistics, the index of dispersion, dispersion index, coefficient of dispersion, relative variance, or variance-to-mean ratio (VMR), like the coefficient of variation, is a normalized measure of the dispersion of a probability distribution, defined as the ratio of the variance to the mean.
Hansen–Jagannathan bound
Hansen–Jagannathan bound is a theorem in financial economics that says that the ratio of the standard deviation of a stochastic discount factor to its mean exceeds the Sharpe ratio attained by any portfolio.
Variation ratio
The variation ratio is a simple measure of statistical dispersion in nominal distributions; it is the simplest measure of qualitative variation. It is defined as the proportion of cases which are not in the modal category.
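The definition translates directly into code; a minimal sketch for a list of nominal labels (the function name is illustrative):

```python
from collections import Counter

def variation_ratio(observations):
    """1 minus the proportion of cases falling in the modal (most frequent) category."""
    counts = Counter(observations)
    return 1 - max(counts.values()) / len(observations)

# Mode "a" covers 3 of 5 cases, so the variation ratio is 1 - 3/5 = 0.4.
v = variation_ratio(["a", "a", "a", "b", "c"])
```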
Wilks' theorem
In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test.
Strikeout-to-walk ratio
In baseball statistics, strikeout-to-walk ratio (K/BB) is a measure of a pitcher's ability to control pitches, calculated as strikeouts divided by bases on balls. A hit by pitch is not counted statistically as a walk and therefore does not count toward this ratio.
Variance inflation factor
In statistics, the variance inflation factor (VIF) is the ratio (quotient) of the variance of estimating some parameter in a model that includes multiple other terms (parameters) to the variance of a model constructed using only one term.
Fano factor
In statistics, the Fano factor, like the coefficient of variation, is a measure of the dispersion of a probability distribution of a Fano noise. It is named after Ugo Fano, an Italian-American physicist.
F-test
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled.
Standardized moment
In probability theory and statistics, a standardized moment of a probability distribution is a moment (often a higher degree central moment) that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant.
Failure rate
Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. It is usually denoted by the Greek letter λ (lambda) and is often used in reliability engineering.
Relative change and difference
In any quantitative science, the terms relative change and relative difference are used to compare two quantities while taking into account the "sizes" of the things being compared, i.e. dividing by a standard, reference, or starting value.
Information gain ratio
In decision tree learning, information gain ratio is the ratio of information gain to the intrinsic information. It was proposed by Ross Quinlan to reduce the bias towards multi-valued attributes by taking the number and size of branches into account when choosing an attribute.
Likelihood-ratio test
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint.
Goodman and Kruskal's lambda
In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis. For any sample with a nominal independent variable and a nominal dependent variable, lambda measures the proportional reduction in error obtained by using the independent variable to predict the dependent variable.
Treynor ratio
The Treynor reward to volatility model (sometimes called the reward-to-volatility ratio or Treynor measure), named after Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no diversifiable risk.
Correlation ratio
In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample.
Hazard ratio
In survival analysis, the hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions characterised by two distinct levels of a treatment variable of interest. For example, in a drug study, the treated population may die at twice the rate per unit time of the control population.
Ka/Ks ratio
In genetics, the Ka/Ks ratio, also known as ω or dN/dS ratio, is used to estimate the balance between neutral mutations, purifying selection and beneficial mutations acting on a set of homologous protein-coding genes.
Diagnostic odds ratio
In medical testing with binary classification, the diagnostic odds ratio (DOR) is a measure of the effectiveness of a diagnostic test. It is defined as the ratio of the odds of the test being positive if the subject has the disease to the odds of the test being positive if the subject does not have the disease.
Pearson correlation coefficient
In statistics, the Pearson correlation coefficient (PCC, pronounced /ˈpɪərsən/) ― also known as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), the bivariate correlation, or colloquially simply the correlation coefficient ― is a measure of linear correlation between two sets of data.
Conditional probability
In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred.
Odds
Odds provide a measure of the likelihood of a particular outcome. They are calculated as the ratio of the number of events that produce that outcome to the number that do not. Odds are commonly used in gambling and statistics.
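For an outcome with probability p, the odds in favour are p/(1 − p); a minimal sketch (the function name is illustrative):

```python
def odds_in_favour(p):
    """Convert a probability into odds in favour: p / (1 - p)."""
    return p / (1 - p)

# A probability of 0.2 corresponds to odds of 0.25, i.e. 1 to 4.
o = odds_in_favour(0.2)
```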
Polynomial and rational function modeling
In statistical modeling (especially process modeling), polynomial functions and rational functions are sometimes used as an empirical technique for curve fitting.
Quadrant count ratio
The quadrant count ratio (QCR) is a measure of the association between two quantitative variables. The QCR is not commonly used in the practice of statistics; rather, it is a useful tool in statistics education.
Relative index of inequality
The relative index of inequality (RII) is a regression-based index which summarizes the magnitude of socio-economic status (SES) as a source of inequalities in health. RII is useful because it takes into account the size of the population and the relative disadvantage experienced by different groups.
Sensitivity and specificity
Sensitivity and specificity mathematically describe the accuracy of a test which reports the presence or absence of a condition. Individuals for which the condition is satisfied are considered "positive", and those for which it is not are considered "negative".
Sharpe ratio
In finance, the Sharpe ratio (also known as the Sharpe index, the Sharpe measure, and the reward-to-variability ratio) measures the performance of an investment such as a security or portfolio compared to a risk-free asset, after adjusting for its risk.
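The ratio divides mean excess return by the volatility of those excess returns; a minimal sketch using a sample standard deviation over per-period returns (the function name and sample figures are illustrative):

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    """Mean return in excess of the risk-free rate, divided by the
    sample standard deviation of the excess returns."""
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Mean excess return 0.03 with standard deviation 0.02 gives a ratio of 1.5.
s = sharpe_ratio([0.05, 0.01, 0.03])
```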
Ground ball/fly ball ratio
In baseball statistics, ground ball-fly ball ratio (denoted by G/F or GB/FB) is the frequency of batted ground balls in play versus fly balls in play, used to denote what kind of contact a batter makes most often or what kind of contact a pitcher tends to induce.
Studentized range
In statistics, the studentized range, denoted q, is the difference between the largest and smallest data in a sample, normalized by the sample standard deviation. It is named after William Sealy Gosset, who wrote under the pseudonym Student.
Signal-to-noise statistic
In mathematics, the signal-to-noise statistic distance between two vectors a and b, with mean values μa and μb and standard deviations σa and σb respectively, is D = (μa − μb) / (σa + σb). In the case of Gaussian-distributed data and unbiased estimators of the means and standard deviations, this distance is related to the statistical significance of the difference between the two means.
Ratio estimator
The ratio estimator is a statistical parameter and is defined to be the ratio of means of two random variables. Ratio estimates are biased and corrections must be made when they are used in experimental or survey work.
F-score
In statistical analysis of binary classification, the F-score or F-measure is a measure of a test's accuracy. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all positive results, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive.
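The F1 score (the most common F-score) is the harmonic mean of precision and recall; a minimal sketch from confusion-matrix counts (the function name is illustrative):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, computed from
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# With precision = recall = 0.8, the harmonic mean is also 0.8.
f1 = f1_score(tp=8, fp=2, fn=2)
```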
Information ratio
The information ratio measures and compares the active return of an investment (e.g., a security or portfolio) compared to a benchmark index relative to the volatility of the active return (also known as active risk or benchmark tracking error).
Survival rate
Survival rate is a part of survival analysis. It is the proportion of people in a study or treatment group still alive at a given period of time after diagnosis. It is a method of describing prognosis in certain disease conditions.
Coefficient of determination
In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
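In its standard form, R2 is one minus the ratio of the residual sum of squares to the total sum of squares; a minimal sketch (the function name is illustrative):

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, the proportion of variation explained."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - ss_res / ss_tot

# Perfect predictions explain all the variation, so R^2 = 1.
r2 = r_squared([1, 2, 3], [1, 2, 3])
```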
Relative risk reduction
In epidemiology, the relative risk reduction (RRR) or efficacy is the relative decrease in the risk of an adverse event in the exposed group compared to an unexposed group. It is computed as (CER − EER) / CER, where CER is the control event rate and EER is the experimental event rate.
Fraction of variance unexplained
In statistics, the fraction of variance unexplained (FVU) in the context of a regression task is the fraction of variance of the regressand (dependent variable) Y which cannot be explained, i.e., which is not correctly predicted, by the explanatory variables X.
Phi coefficient
In statistics, the phi coefficient (or mean square contingency coefficient, denoted by φ or rφ) is a measure of association for two binary variables. In machine learning, it is known as the Matthews correlation coefficient (MCC).
Prevalence
In epidemiology, prevalence is the proportion of a particular population found to be affected by a medical condition (typically a disease or a risk factor such as smoking or seatbelt use) at a specific time.
Relative risk
The relative risk (RR) or risk ratio is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group. Together with risk difference and odds ratio, relative risk measures the association between the exposure and the outcome.
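The computation from group counts is a simple ratio of two proportions; a minimal sketch (the function name and counts are illustrative):

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# 20/100 in the exposed group vs 10/100 in the unexposed group: RR = 2.0,
# i.e. the outcome is twice as likely under exposure.
rr = relative_risk(20, 100, 10, 100)
```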
Rescaled range
The rescaled range is a statistical measure of the variability of a time series introduced by the British hydrologist Harold Edwin Hurst (1880–1978). Its purpose is to provide an assessment of how the apparent variability of a series changes with the length of the time-period being considered.
Sampling fraction
In sampling theory, the sampling fraction is the ratio of sample size to population size or, in the context of stratified sampling, the ratio of the sample size to the size of the stratum. The formula for the sampling fraction is f = n/N, where n is the sample size and N is the population size.
Signal-to-noise ratio
Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels.
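Expressed in decibels, the power ratio becomes 10·log10(P_signal / P_noise); a minimal sketch (the function name is illustrative):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A signal 100x more powerful than the noise is 20 dB above it;
# equal powers give 0 dB.
db = snr_db(100, 1)
```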
Studentized residual
In statistics, a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation. It is a form of a Student's t-statistic, with the estimate of error varying between points.
Upside potential ratio
The upside-potential ratio is a measure of the return of an investment asset relative to the minimal acceptable return. The measurement allows a firm or individual to choose investments which have had relatively good upside performance per unit of downside risk.
Mills ratio
In probability theory, the Mills ratio (or Mills's ratio) of a continuous random variable X is the function m(x) = F̄(x) / f(x), where f(x) is the probability density function and F̄(x) = 1 − F(x) is the complementary cumulative distribution function.
Response-rate ratio
Response-rate ratio is a measure of efficacy of therapy in clinical trials. It is defined as the proportion of improved patients in the treatment group divided by the proportion of improved patients in the control group.
Prevalence effect
In psychology, the prevalence effect is the phenomenon that one is more likely to miss (or fail to detect) a target with a low prevalence (or frequency) than a target with a high prevalence or frequency.
Uncertainty coefficient
In statistics, the uncertainty coefficient, also called proficiency, entropy coefficient or Theil's U, is a measure of nominal association. It was first introduced by Henri Theil and is based on the concept of information entropy.
Quartile coefficient of dispersion
In statistics, the quartile coefficient of dispersion is a descriptive statistic which measures dispersion and which is used to make comparisons within and between data sets. Since it is based on quantiles, it is less sensitive to outliers than measures based on the mean and standard deviation.
Studentization
In statistics, Studentization, named after William Sealy Gosset, who wrote under the pseudonym Student, is the adjustment consisting of division of a first-degree statistic derived from a sample by a sample-based estimate of a population standard deviation.
Outliers ratio
In objective video quality assessment, the outliers ratio (OR) is a measure of the performance of an objective video quality metric. It is the ratio of "false" scores given by the objective metric to the total number of scores.
F-test of equality of variances
In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Notionally, any F-test can be regarded as a comparison of two variances.