Category: Statistical approximations

Welch–Satterthwaite equation
In statistics and uncertainty analysis, the Welch–Satterthwaite equation is used to calculate an approximation to the effective degrees of freedom of a linear combination of independent sample variances.
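As a concrete illustration, a minimal Python sketch (numpy assumed; the two-sample case) of the effective degrees of freedom from the sample standard deviations and sample sizes:

```python
import numpy as np

def welch_satterthwaite(s1, n1, s2, n2):
    """Effective degrees of freedom for the sum of two sample-variance terms."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # variance contributions
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Example: two samples with unequal variances and sizes
print(welch_satterthwaite(s1=2.0, n1=10, s2=5.0, n2=25))
```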
68–95–99.7 rule
In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.
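A quick numerical check of the three percentages using the standard normal CDF (scipy assumed available):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)   # probability mass within k standard deviations
    print(f"within {k} sd: {coverage:.4%}")
# Prints approximately 68.27%, 95.45%, 99.73%
```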
Morris method
In applied statistics, the Morris method for global sensitivity analysis is a so-called one-step-at-a-time (OAT) method, meaning that in each run only one input parameter is given a new value. It facilitates a global sensitivity analysis by making a number of local changes at different points of the possible range of input values.
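A rough sketch of Morris-style elementary effects, under simplifying assumptions (inputs on [0, 1], independent random base points rather than the full trajectory design of the method):

```python
import numpy as np

def elementary_effects(f, k, r=20, delta=0.1, seed=0):
    """Crude OAT screening: mean absolute elementary effect per input."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0, 1 - delta, size=k)     # base point
        fx = f(x)
        for i in range(k):                        # change one input at a time
            x_step = x.copy()
            x_step[i] += delta
            effects[j, i] = (f(x_step) - fx) / delta
    return np.abs(effects).mean(axis=0)           # mu*-style summary per input

f = lambda x: x[0] + 2 * x[1]**2 + 0.0 * x[2]     # x[2] is inert
print(elementary_effects(f, k=3))                 # third effect is ~0
```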
Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle.
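The textbook illustration, estimating π by sampling points in the unit square (a sketch, not tied to any particular application):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
points = rng.uniform(-1, 1, size=(n, 2))          # random points in the square [-1, 1]^2
inside = (points**2).sum(axis=1) <= 1.0           # fraction falling inside the unit disc
print("pi ~", 4 * inside.mean())
```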
Approximate Bayesian computation
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters. In all model-based statistical inference, the likelihood function is of central importance; ABC methods bypass its explicit evaluation, relying instead on comparing simulated and observed data.
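A minimal rejection-ABC sketch under toy assumptions (normal data with known variance, uniform prior on the mean, sample mean as the summary statistic, fixed tolerance):

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=3.0, scale=1.0, size=50)     # stand-in for real data

def simulate(mu):
    return rng.normal(loc=mu, scale=1.0, size=50)

accepted = []
for _ in range(20_000):
    mu = rng.uniform(-10, 10)                          # draw a candidate from the prior
    distance = abs(simulate(mu).mean() - observed.mean())
    if distance < 0.1:                                 # keep candidates whose summaries match
        accepted.append(mu)

print("approximate posterior mean:", np.mean(accepted))
```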
Stochastic approximation
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
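A Robbins–Monro style sketch for root finding from noisy evaluations, with step sizes 1/n and a toy target g(x) = x − 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_g(x):
    return (x - 2.0) + rng.normal(scale=0.5)   # noisy observation of g(x) = x - 2

x = 0.0
for n in range(1, 10_001):
    x -= (1.0 / n) * noisy_g(x)                # Robbins–Monro update x_{n+1} = x_n - a_n * Y_n
print("root estimate:", x)                     # approaches 2
```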
Alpha beta filter
An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filter) is a simplified form of observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to linear state observers used in control theory.
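A minimal tracking sketch assuming a constant-velocity model and hand-picked gains (the gain values here are illustrative, not prescriptive):

```python
def alpha_beta_filter(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Track position and velocity from noisy position measurements."""
    x, v = measurements[0], 0.0                 # initial state
    estimates = []
    for z in measurements:
        x_pred = x + dt * v                     # predict forward one step
        residual = z - x_pred                   # innovation
        x = x_pred + alpha * residual           # correct position
        v = v + (beta / dt) * residual          # correct velocity
        estimates.append((x, v))
    return estimates

print(alpha_beta_filter([1.0, 2.1, 2.9, 4.2, 5.0])[-1])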
Fieller's theorem
In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means.
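A sketch of the interval via the underlying quadratic in the ratio θ = μ1/μ2 (hypothetical numbers; z used in place of a t quantile, and independent means so the covariance term defaults to zero):

```python
import numpy as np

def fieller_interval(a, var_a, b, var_b, cov_ab=0.0, z=1.96):
    """Roots of (b^2 - z^2 v_bb) t^2 - 2 (a b - z^2 v_ab) t + (a^2 - z^2 v_aa) = 0."""
    A = b**2 - z**2 * var_b
    B = -2 * (a * b - z**2 * cov_ab)
    C = a**2 - z**2 * var_a
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        return None                      # unbounded or exclusive interval case
    lo = (-B - np.sqrt(disc)) / (2 * A)
    hi = (-B + np.sqrt(disc)) / (2 * A)
    return lo, hi

print(fieller_interval(a=10.0, var_a=1.0, b=5.0, var_b=0.25))   # interval around 2.0
```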
Rare disease assumption
The rare disease assumption is a mathematical assumption in epidemiologic case-control studies in which the hypothesis under test is an association between an exposure and a disease. It is assumed that, if the prevalence of the disease is low, then the odds ratio is a good approximation to the relative risk.
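A small numeric illustration (hypothetical 2×2 counts) of the odds ratio approaching the risk ratio when the disease is rare:

```python
def odds_ratio(a, b, c, d):
    """a, b: exposed cases/non-cases; c, d: unexposed cases/non-cases."""
    return (a / b) / (c / d)

def risk_ratio(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

a, b, c, d = 30, 10_000, 10, 10_000          # few cases relative to non-cases
print(odds_ratio(a, b, c, d), risk_ratio(a, b, c, d))   # ~3.00 vs ~2.99
```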
Three-point estimation
The three-point estimation technique is used in management and information systems applications for the construction of an approximate probability distribution representing the outcome of future events, based on very limited information.
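A sketch of the common PERT-style combination of optimistic (a), most likely (m), and pessimistic (b) estimates:

```python
def three_point_pert(a, m, b):
    """Beta/PERT weighting: mean (a + 4m + b) / 6, standard deviation (b - a) / 6."""
    mean = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return mean, sd

print(three_point_pert(a=4, m=6, b=14))   # (7.0, 1.67): estimate with its spread
```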
Function approximation
In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way. The need for function approximations arises in many branches of applied mathematics, and computer science in particular.
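One common concrete instance is least-squares polynomial approximation of a target function from samples; a small numpy sketch:

```python
import numpy as np

x = np.linspace(0, np.pi, 50)
y = np.sin(x)                                  # target function sampled on a grid
coeffs = np.polyfit(x, y, deg=3)               # cubic least-squares approximant
approx = np.polyval(coeffs, x)
print("max abs error:", np.max(np.abs(approx - y)))
```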
Target function
No description available.
Edgeworth series
The Gram–Charlier A series (named in honor of Jørgen Pedersen Gram and Carl Charlier), and the Edgeworth series (named in honor of Francis Ysidro Edgeworth) are series that approximate a probability distribution in terms of its cumulants.
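A sketch of the first correction terms of the Gram–Charlier A series for a standardized variable (skewness and excess kurtosis only, using probabilists' Hermite polynomials):

```python
import numpy as np

def gram_charlier_pdf(x, skew=0.0, ex_kurt=0.0):
    """Standard normal density corrected by the first two Gram–Charlier terms."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    he3 = x**3 - 3 * x                       # He_3(x)
    he4 = x**4 - 6 * x**2 + 3                # He_4(x)
    return phi * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

print(gram_charlier_pdf(np.array([-1.0, 0.0, 1.0]), skew=0.5))
```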
Imprecise probability
Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify.
Pearson's chi-squared test
Pearson's chi-squared test is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is the most widely used of many chi-squared tests.
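A sketch of the goodness-of-fit form, chi-squared = sum of (O − E)²/E, checked against scipy (assumed available) on made-up counts:

```python
import numpy as np
from scipy.stats import chisquare

observed = np.array([44, 56, 50, 50])          # counts in four categories
expected = np.full(4, observed.sum() / 4)      # uniform null hypothesis
stat = ((observed - expected)**2 / expected).sum()
print(stat, chisquare(observed, expected))     # same statistic (1.44) both ways
```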
Delta method
In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
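A numeric illustration of the first-order univariate approximation, Var[g(X)] ≈ g′(μ)² Var[X], compared with a simulation (g = log here, as an example):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.05
g = np.log                                      # function applied to the estimator
g_prime = lambda x: 1.0 / x

approx_var = g_prime(mu)**2 * sigma**2          # delta-method variance
samples = g(rng.normal(mu, sigma, size=200_000))
print(approx_var, samples.var())                # close when sigma is small
```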
Kirkwood approximation
The Kirkwood superposition approximation was introduced in 1935 by John G. Kirkwood as a means of representing a discrete probability distribution. The Kirkwood approximation for a discrete probability density function P(x1, x2, x3) is given by P′(x1, x2, x3) = p(x1, x2) p(x1, x3) p(x2, x3) / [p(x1) p(x2) p(x3)], i.e. the joint distribution is built from its pairwise and single-variable marginals.
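A sketch, for an arbitrary joint distribution of three binary variables, showing that the approximation is built from the marginals and is in general not normalized:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()                                     # arbitrary joint distribution P(x1, x2, x3)

p12, p13, p23 = p.sum(2), p.sum(1), p.sum(0)     # pairwise marginals
p1, p2, p3 = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))

approx = (p12[:, :, None] * p13[:, None, :] * p23[None, :, :]
          / (p1[:, None, None] * p2[None, :, None] * p3[None, None, :]))
print("total mass of approximation:", approx.sum())   # close to, but not exactly, 1
```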
Welch's t-test
In statistics, Welch's t-test, or unequal variances t-test, is a two-sample location test which is used to test the hypothesis that two populations have equal means. It is named for its creator, Bernard Lewis Welch, and is an adaptation of Student's t-test that is more reliable when the two samples have unequal variances and possibly unequal sample sizes.
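A usage sketch with scipy (assumed available); passing equal_var=False selects Welch's test rather than Student's:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 3.0, size=80)               # different variance and sample size
print(ttest_ind(x, y, equal_var=False))         # Welch's t statistic and p-value
```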
Interval propagation
In numerical mathematics, interval propagation or interval constraint propagation is the problem of contracting interval domains associated to real-valued variables without removing any value that is consistent with a set of constraints (i.e., equations or inequalities).
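A toy contraction sketch for the single constraint z = x + y: each domain is intersected with what the constraint and the other two domains allow:

```python
def intersect(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

def contract_sum(x, y, z):
    """Contract the interval domains of x, y, z under the constraint z = x + y."""
    z = intersect(z, (x[0] + y[0], x[1] + y[1]))   # forward propagation
    x = intersect(x, (z[0] - y[1], z[1] - y[0]))   # backward to x
    y = intersect(y, (z[0] - x[1], z[1] - x[0]))   # backward to y
    return x, y, z

print(contract_sum(x=(0, 10), y=(0, 10), z=(4, 5)))   # x and y shrink to (0, 5)
```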
Propagation of uncertainty
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them.
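A sketch of first-order propagation for independent inputs, with the variance of f approximated by the sum of (partial derivative)² × (input variance), applied to the example f = x·y:

```python
import numpy as np

x, sx = 10.0, 0.2            # value and standard uncertainty of x
y, sy = 5.0, 0.1             # value and standard uncertainty of y

# f = x * y, with df/dx = y and df/dy = x
sf = np.sqrt((y * sx)**2 + (x * sy)**2)
print(x * y, "+/-", sf)      # 50.0 +/- ~1.41
```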
Binomial proportion confidence interval
In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials).
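A sketch of two common approximations, the normal (Wald) interval and the Wilson score interval, on made-up counts:

```python
import numpy as np

def wald_interval(successes, n, z=1.96):
    p = successes / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print(wald_interval(8, 10), wilson_interval(8, 10))   # Wilson behaves better for small n
```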
Cunningham function
In statistics, the Cunningham function or Pearson–Cunningham function ω_{m,n}(x) is a generalisation of a special function introduced by E. T. Whittaker and studied in this form by Cunningham. It can be defined in terms of the confluent hypergeometric function.
Rule of three (statistics)
In statistical analysis, the rule of three states that if a certain event did not occur in a sample with n subjects, the interval from 0 to 3/n is a 95% confidence interval for the rate of occurrences in the population.
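A one-line illustration with hypothetical numbers: zero adverse events observed in 1500 subjects gives an approximate 95% upper bound of 3/1500 on the event rate:

```python
n = 1500                         # subjects observed, zero events seen
upper_bound = 3 / n              # rule-of-three 95% upper limit on the event rate
print(f"approximate 95% CI: (0, {upper_bound:.4f})")
```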
Taylor expansions for the moments of functions of random variables
In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite.
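A sketch of the second-order mean approximation E[f(X)] ≈ f(μ) + f″(μ)σ²/2 and the first-order variance approximation, checked by simulation for f = log:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 0.3
f, f1, f2 = np.log, lambda x: 1 / x, lambda x: -1 / x**2

mean_approx = f(mu) + 0.5 * f2(mu) * sigma**2   # E[f(X)] to second order
var_approx = f1(mu)**2 * sigma**2               # Var[f(X)] to first order

x = rng.normal(mu, sigma, size=500_000)
print(mean_approx, f(x).mean())
print(var_approx, f(x).var())
```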
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data).
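A minimal SGD sketch for least-squares linear regression on synthetic data (single-example updates, fixed learning rate; both are simplifying choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = X @ np.array([2.0, -3.0]) + rng.normal(scale=0.1, size=1000)   # true weights [2, -3]

w, lr = np.zeros(2), 0.01
for epoch in range(20):
    for i in rng.permutation(len(X)):           # one randomly chosen example per step
        grad = 2 * (X[i] @ w - y[i]) * X[i]     # gradient of the squared error on that example
        w -= lr * grad
print(w)                                        # approaches [2, -3]
```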