Category: Theory of probability distributions

Law of total variance
In probability theory, the law of total variance, variance decomposition formula, or conditional variance formula, also known as the law of iterated variances or Eve's law, states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).
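A minimal Monte Carlo sketch of the decomposition in Python (illustrative only; the toy model and variable names are assumptions, not from the source):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.integers(0, 3, size=1_000_000)   # conditioning variable X in {0, 1, 2}
    y = rng.normal(loc=x, scale=1.0 + x)     # Y | X = k  ~  Normal(k, (1 + k)^2)

    # Estimate E[Var(Y|X)] and Var(E[Y|X]) group-wise, weighted by P(X = k).
    p = np.array([(x == k).mean() for k in range(3)])
    cond_var = np.array([y[x == k].var() for k in range(3)])
    cond_mean = np.array([y[x == k].mean() for k in range(3)])
    lhs = y.var()
    rhs = (p * cond_var).sum() + (p * (cond_mean - p @ cond_mean) ** 2).sum()
    print(lhs, rhs)  # the two estimates agree up to sampling error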
List of convolutions of probability distributions
In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions.
Expected return
The expected return (or expected gain) on a financial investment is the expected value of its return (of the profit on the investment). It is a measure of the center of the distribution of the random variable that is the return.
Neutral vector
In statistics, and specifically in the study of the Dirichlet distribution, a neutral vector of random variables is one that exhibits a particular type of statistical independence amongst its elements.
Bhatia–Davis inequality
In mathematics, the Bhatia–Davis inequality, named after Rajendra Bhatia and Chandler Davis, is an upper bound on the variance σ² of any bounded probability distribution on the real line.
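The bound states σ² ≤ (M − μ)(μ − m) for a distribution supported on [m, M] with mean μ. A quick numerical sanity check (a sketch; the Beta example is an assumption):

    import numpy as np

    rng = np.random.default_rng(1)
    samples = rng.beta(2.0, 5.0, size=100_000)   # supported on [m, M] = [0, 1]
    m, M, mu = 0.0, 1.0, samples.mean()
    assert samples.var() <= (M - mu) * (mu - m)  # the Bhatia-Davis bound holds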
Besov measure
In mathematics — specifically, in the fields of probability theory and inverse problems — Besov measures and associated Besov-distributed random variables are generalisations of the notions of Gaussian measures and Gaussian-distributed random variables.
Posterior predictive distribution
In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values. Given a set of N i.i.d. observations, a new value will be drawn from a distribution that depends on a parameter θ; the posterior predictive distribution is obtained by averaging the distribution of the new value over the posterior distribution of θ.
Schrödinger method
In combinatorial mathematics and probability theory, the Schrödinger method, named after the Austrian physicist Erwin Schrödinger, is used to solve some problems of distribution and occupancy. Suppose X1, ..., Xn are independent random variables that are uniformly distributed on the interval [0, 1].
Univariate
In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is important.
Monotone likelihood ratio
In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions ƒ(x) and g(x) bear the property if the ratio ƒ(x)/g(x) is a non-decreasing function of x, that is, if ƒ(x2)/g(x2) ≥ ƒ(x1)/g(x1) whenever x2 > x1.
Popoviciu's inequality on variances
In probability theory, Popoviciu's inequality, named after Tiberiu Popoviciu, is an upper bound on the variance σ² of any bounded probability distribution. Let M and m be upper and lower bounds on the values of the random variable; then the inequality states that σ² ≤ (M − m)²/4.
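A short check of the bound in the same spirit as above (illustrative; the uniform example is an assumption):

    import numpy as np

    rng = np.random.default_rng(2)
    m, M = -3.0, 5.0
    samples = rng.uniform(m, M, size=100_000)  # a bounded distribution on [m, M]
    assert samples.var() <= (M - m) ** 2 / 4   # Popoviciu's bound holds
    # For Uniform(m, M) the variance is (M - m)^2 / 12, well below the bound.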
Group actions in computational anatomy
Group actions are central to Riemannian geometry and defining orbits (control theory). The orbits of computational anatomy consist of anatomical shapes and medical images; the anatomical shapes are submanifolds of differential geometry, consisting of points, curves, surfaces and subvolumes.
Normally distributed and uncorrelated does not imply independent
In probability theory, although simple examples illustrate that linear uncorrelatedness of two random variables does not in general imply their independence, it is sometimes mistakenly thought that it does when the two random variables are normally distributed. In fact, normality of the marginal distributions together with zero correlation does not imply independence; only joint normality does.
Hellinger distance
In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions.
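For discrete distributions p and q the squared Hellinger distance is H²(p, q) = ½ Σ (√pi − √qi)². A minimal sketch under that definition (the helper name is an assumption):

    import numpy as np

    def hellinger(p, q):
        # Hellinger distance between two discrete probability vectors.
        p, q = np.asarray(p, float), np.asarray(q, float)
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    print(hellinger([0.5, 0.5], [0.5, 0.5]))  # 0.0 for identical distributions
    print(hellinger([1.0, 0.0], [0.0, 1.0]))  # 1.0, the maximum, for disjoint supports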
Stein's method
Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972.
Infinite divisibility (probability)
In probability theory, a probability distribution is infinitely divisible if it can be expressed as the probability distribution of the sum of an arbitrary number of independent and identically distributed (i.i.d.) random variables.
Continuity correction
In probability theory, a continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution.
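For example, when approximating Binomial(n, p) by Normal(np, np(1 − p)), one evaluates the normal CDF at x + 0.5 rather than x. A sketch using SciPy (the specific numbers are illustrative):

    from scipy.stats import binom, norm

    n, p = 100, 0.5
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

    exact = binom.cdf(45, n, p)                    # P(X <= 45), exact
    naive = norm.cdf((45 - mu) / sigma)            # normal approximation, no correction
    corrected = norm.cdf((45 + 0.5 - mu) / sigma)  # with continuity correction
    print(exact, naive, corrected)                 # the corrected value is closer to exact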
Kernel embedding of distributions
In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS).
Limiting density of discrete points
In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Shannon for differential entropy. It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy.
Probability integral transform
In probability theory, the probability integral transform (also known as universality of the uniform) relates to the result that data values that are modeled as being random variables from any given continuous distribution can be converted to random variables having a standard uniform distribution.
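A short demonstration that applying a continuous CDF to its own variable yields a standard uniform (illustrative; the exponential choice is an assumption):

    from scipy.stats import expon, kstest

    x = expon.rvs(size=10_000, random_state=0)  # X drawn from an exponential law
    u = expon.cdf(x)                            # U = F(X), the integral transform
    print(kstest(u, "uniform"))                 # large p-value: consistent with U(0, 1)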
Probable error
In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside it.
Cokurtosis
In probability theory and statistics, cokurtosis is a measure of how much two random variables change together. Cokurtosis is the fourth standardized cross central moment. If two random variables exhibit a high level of cokurtosis, they will tend to undergo extreme deviations at the same time.
Khmaladze transformation
In statistics, the Khmaladze transformation is a mathematical tool used in constructing convenient goodness of fit tests for hypothetical distribution functions. More precisely, suppose X1, ..., Xn are i.i.d., possibly multi-dimensional, random observations with hypothesized distribution function F.
Marginal distribution
In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables.
Pareto interpolation
Pareto interpolation is a method of estimating the median and other properties of a population that follows a Pareto distribution. It is used in economics when analysing the distribution of incomes in a population, when only grouped income data are available.
Law of total cumulance
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance.
Convolution of probability distributions
The convolution/sum of probability distributions arises in probability theory and statistics as the operation in terms of probability distributions that corresponds to the addition of independent random variables.
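For independent discrete variables, the pmf of the sum is the discrete convolution of the individual pmfs; for instance, the distribution of the total of two fair dice (a sketch):

    import numpy as np

    die = np.full(6, 1 / 6)            # pmf of one fair die on {1, ..., 6}
    two_dice = np.convolve(die, die)   # pmf of the sum, supported on {2, ..., 12}
    print(two_dice)                    # peaks at 6/36 for a total of 7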
Conditional probability distribution
In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value.
Stability (probability)
In probability theory, the stability of a random variable is the property that a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters.
Null distribution
In statistical hypothesis testing, the null distribution is the probability distribution of the test statistic when the null hypothesis is true. For example, in an F-test, the null distribution is an F-distribution.
Tail dependence
In probability theory, the tail dependence of a pair of random variables is a measure of their comovements in the tails of the distributions. The concept is used in extreme value theory. Random variables whose association appears weak overall can nevertheless be strongly dependent in the tails.
Concomitant (statistics)
In statistics, the concept of a concomitant, also called the induced order statistic, arises when one sorts the members of a random sample according to corresponding values of another random sample.
Law of total expectation
The proposition in probability theory known as the law of total expectation, the law of iterated expectations (LIE), Adam's law, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value E(X) is defined, and Y is any random variable on the same probability space, then E(X) = E(E(X | Y)).
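A one-line worked example of E(X) = E(E(X | Y)) with a binary conditioning variable (the numbers are illustrative assumptions):

    import numpy as np

    p = np.array([0.7, 0.3])          # P(Y = 0), P(Y = 1)
    cond_mean = np.array([2.0, 5.0])  # E[X | Y = 0], E[X | Y = 1]
    print(p @ cond_mean)              # E(X) = E(E(X | Y)) = 0.7*2 + 0.3*5 = 2.9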
Panjer recursion
The Panjer recursion is an algorithm to compute the probability distribution of a compound random variable S = X1 + ... + XN, where both N and the Xi are random variables of special types. In more general cases the distribution of S is a compound distribution.
Combinant
In the mathematical theory of probability, the combinants cn of a random variable X are defined via the combinant-generating function G(t), which is defined from the moment generating function M(z) as G(t) = M(ln(1 + t)), and which can be expressed directly in terms of X as G(t) = E[(1 + t)^X].
Lévy metric
In mathematics, the Lévy metric is a metric on the space of cumulative distribution functions of one-dimensional random variables. It is a special case of the Lévy–Prokhorov metric, and is named after the French mathematician Paul Lévy.
Khinchin's theorem on the factorization of distributions
Khinchin's theorem on the factorization of distributions says that every probability distribution P admits (in the convolution semi-group of probability distributions) a factorization P = P1 * P2, where P1 is a probability distribution without indecomposable components and P2 is a convolution of at most countably many indecomposable distributions.
Rayleigh test
Rayleigh test can refer to:
* a test for periodicity in irregularly sampled data;
* a derivation of the above to test for non-uniformity (such as unimodal clustering) of a set of points on a circle (e.g. compass directions).
Mean-preserving spread
In probability and statistics, a mean-preserving spread (MPS) is a change from one probability distribution A to another probability distribution B, where B is formed by spreading out one or more portions of A's probability density function or probability mass function while leaving the mean (the expected value) unchanged.
Coskewness
In probability theory and statistics, coskewness is a measure of how much three random variables change together. Coskewness is the third standardized cross central moment, related to skewness as covariance is related to variance.
Glivenko's theorem (probability theory)
In probability theory, Glivenko's theorem states that if φn, φ are the characteristic functions of probability distributions μn, μ respectively, and φn converges to φ almost everywhere, then μn converges to μ in the sense of weak convergence of probability distributions.
Joint probability distribution
Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables.
Power law
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another.
Law of total covariance
In probability theory, the law of total covariance, covariance decomposition formula, or conditional covariance formula states that if X, Y, and Z are random variables on the same probability space, and the covariance of X and Y is finite, then cov(X, Y) = E[cov(X, Y | Z)] + cov(E[X | Z], E[Y | Z]).
Shape of a probability distribution
In statistics, the concept of the shape of a probability distribution arises in questions of finding an appropriate distribution to use to model the statistical properties of a population, given a sample from that population.
Truncated distribution
In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range.
German tank problem
In the statistical theory of estimation, the German tank problem consists of estimating the maximum of a discrete uniform distribution from sampling without replacement. In simple terms, suppose there exists an unknown number N of items which are sequentially numbered from 1 to N. A random sample of these items is taken and their sequence numbers observed; the problem is to estimate N from these observed numbers.
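The classical minimum-variance unbiased estimator is m(1 + 1/k) − 1, where m is the sample maximum and k the sample size. A simulation sketch (the sample values are assumptions):

    import numpy as np

    rng = np.random.default_rng(4)
    N, k = 250, 5                      # true maximum and sample size
    estimates = []
    for _ in range(10_000):
        sample = rng.choice(np.arange(1, N + 1), size=k, replace=False)
        m = sample.max()
        estimates.append(m * (1 + 1 / k) - 1)  # the UMVU estimator
    print(np.mean(estimates))                  # close to the true N = 250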
Unimodality
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object.
Pairwise independence
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent.
Wasserstein metric
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space. It is named after Leonid Vaseršteĭn.
Mean absolute difference
The mean absolute difference (univariate) is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean absolute difference, which is the mean absolute difference divided by the arithmetic mean, and equal to twice the Gini coefficient.
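The quantity is E|X − Y| for independent X and Y from the distribution; for Uniform(0, 1) it equals 1/3, which a quick simulation confirms (illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    x, y = rng.uniform(size=(2, 1_000_000))  # two independent Uniform(0, 1) samples
    print(np.mean(np.abs(x - y)))            # ~ 1/3, the known value for U(0, 1)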
Memorylessness
In probability and statistics, memorylessness is a property of certain probability distributions. It usually refers to the cases when the distribution of a "waiting time" until a certain event does not depend on how much time has elapsed already.
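The exponential distribution is the canonical continuous example: P(T > s + t | T > s) = P(T > t). A two-line check with SciPy (the values of s and t are assumptions):

    from scipy.stats import expon

    s, t = 1.0, 2.0
    lhs = expon.sf(s + t) / expon.sf(s)  # P(T > s + t | T > s)
    rhs = expon.sf(t)                    # P(T > t)
    print(lhs, rhs)                      # both equal e**-2: the elapsed time is forgotten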
Ewens's sampling formula
In population genetics, Ewens's sampling formula describes the probabilities associated with counts of how many different alleles are observed a given number of times in the sample.
Distance correlation
In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent.
Stein discrepancy
A Stein discrepancy is a statistical divergence between two probability measures that is rooted in Stein's method. It was first formulated as a tool to assess the quality of Markov chain Monte Carlo samplers.
Expected value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes.
Normalizing constant
The concept of a normalizing constant arises in probability theory and a variety of other areas of mathematics. The normalizing constant is used to reduce any probability function to a probability density function with total probability of one.
Rencontres numbers
In combinatorial mathematics, the rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements.
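The numbers satisfy D(n, k) = C(n, k) · !(n − k), where !m counts derangements of m elements. A small sketch (the function names are assumptions):

    from math import comb

    def subfactorial(m):
        # Derangement numbers: !0 = 1, !1 = 0, !m = (m - 1)(!(m - 1) + !(m - 2)).
        d = [1, 0]
        for i in range(2, m + 1):
            d.append((i - 1) * (d[i - 1] + d[i - 2]))
        return d[m]

    def rencontres(n, k):
        # Permutations of {1, ..., n} with exactly k fixed points.
        return comb(n, k) * subfactorial(n - k)

    print([rencontres(4, k) for k in range(5)])  # [9, 8, 6, 0, 1], summing to 4! = 24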
Comonotonicity
In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable.
Ursell function
In statistical mechanics, an Ursell function or connected correlation function, is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions).
Zero bias transform
The zero-bias transform is a transform from one probability distribution to another. The transform arises in applications of Stein's method in probability and statistics.
Evidence lower bound
In variational Bayesian methods, the evidence lower bound (often abbreviated ELBO, also sometimes called the variational lower bound or negative variational free energy) is a useful lower bound on the log-evidence, the logarithm of the marginal likelihood of the observed data.
Parametric family
In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters.
Benford's law
Benford's law, also known as the Newcomb–Benford law, the law of anomalous numbers, or the first-digit law, is an observation that in many real-life sets of numerical data, the leading digit is likely to be small. In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time.
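Benford's prediction is P(d) = log10(1 + 1/d) for leading digit d; the leading digits of powers of 2 follow it closely (a sketch):

    import math
    from collections import Counter

    counts = Counter(int(str(2 ** n)[0]) for n in range(1, 10_001))
    for d in range(1, 10):
        observed = counts[d] / 10_000
        predicted = math.log10(1 + 1 / d)
        print(d, round(observed, 4), round(predicted, 4))  # close agreement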
Truncation (statistics)
In statistics, truncation results in values that are limited above or below, resulting in a truncated sample. A random variable y is said to be truncated from below if, for some threshold value c, the exact value of y is known for all y ≥ c but unknown for all y < c.
Mills ratio
In probability theory, the Mills ratio (or Mills's ratio) of a continuous random variable is the function m(x) = (1 − F(x))/ƒ(x), where ƒ(x) is the probability density function and 1 − F(x) is the complementary cumulative distribution function (the survival function).
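For the standard normal the ratio is the survival function over the density, and it behaves like 1/x for large x. A brief sketch (the helper name is an assumption):

    from scipy.stats import norm

    def mills_ratio(x):
        # Mills ratio of the standard normal: survival function over density.
        return norm.sf(x) / norm.pdf(x)

    for x in (1.0, 2.0, 5.0):
        print(x, mills_ratio(x), 1 / x)  # for large x the ratio approaches 1/x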
Relationships among probability distributions
In probability theory and statistics, there are several relationships among probability distributions. These relations can be categorized in the following groups: * One distribution is a special case of another with a broader parameter space.
Law of the unconscious statistician
In probability theory and statistics, the law of the unconscious statistician, or LOTUS, is a theorem used to calculate the expected value of a function g(X) of a random variable X when one knows the probability distribution of X but does not know the distribution of g(X).
Energy distance
Energy distance is a statistical distance between probability distributions. If X and Y are independent random vectors in Rd with cumulative distribution functions (cdf) F and G respectively, then the energy distance between the distributions is defined by D²(F, G) = 2E‖X − Y‖ − E‖X − X′‖ − E‖Y − Y′‖, where X′ and Y′ are independent copies of X and Y respectively.
Smoothness (probability theory)
In probability theory and statistics, smoothness of a density function is a measure which determines how many times the density function can be differentiated, or equivalently the limiting behavior of the distribution's characteristic function at infinity.
Conditional variance
In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables. Particularly in econometrics, the conditional variance is also known as the scedastic function or skedastic function.
Nearest neighbour distribution
In probability and statistics, a nearest neighbor function, nearest neighbor distance distribution, nearest-neighbor distribution function or nearest neighbor distribution is a mathematical function that is defined in relation to mathematical objects known as point processes, which are often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both.