Category: Optimization algorithms and methods

Space mapping
The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system.
Golden-section search
The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum.
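A minimal Python sketch of the idea, assuming a strictly unimodal objective on the bracketed interval; the test function is a made-up example:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Locate a minimizer of a unimodal function f on [a, b].

    The interval shrinks by the inverse golden ratio each step,
    reusing one interior function value per iteration.
    """
    invphi = (math.sqrt(5) - 1) / 2        # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:                        # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                              # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2.
print(golden_section_minimize(lambda x: (x - 2) ** 2, 0.0, 5.0))
```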
Karmarkar's algorithm
Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time.
Fireworks algorithm
The Fireworks Algorithm (FWA) is a swarm intelligence algorithm that explores a very large solution space by choosing a set of random points confined by some distance metric, in the hope that one or more of them will lie close to a good solution.
CMA-ES
Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems.
Very large-scale neighborhood search
In mathematical optimization, neighborhood search is a technique that tries to find good or near-optimal solutions to a combinatorial optimisation problem by repeatedly transforming a current solution
Minimax
Minimax (sometimes MinMax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst-case (maximum loss) scenario.
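A minimal Python sketch of the decision rule on a generic game tree; the `children` and `evaluate` callbacks and the nested-list toy tree are illustrative placeholders, not a specific game:

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Plain minimax: the maximizer picks the best of the minimizer's worst cases."""
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        return max(minimax(s, depth - 1, False, children, evaluate) for s in succ)
    return min(minimax(s, depth - 1, True, children, evaluate) for s in succ)

# Toy example: a two-level game tree given as nested lists of leaf scores.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s                          # leaves are their own scores
print(minimax(tree, 2, True, children, evaluate))   # -> 3 (max of the two minima)
```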
Nonlinear programming
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is one of c
Stochastic programming
In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which s
Network simplex algorithm
In mathematical optimization, the network simplex algorithm is a graph theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem. T
Multiple subset sum
The multiple subset sum problem is an optimization problem in computer science and operations research. It is a generalization of the subset sum problem. The input to the problem is a multiset of n in
Automatic label placement
Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic desig
Adaptive simulated annealing
Adaptive simulated annealing (ASA) is a variant of simulated annealing (SA) algorithm in which the algorithm parameters that control temperature schedule and random step selection are automatically ad
Symmetric rank-one
The Symmetric Rank 1 (SR1) method is a quasi-Newton method to update the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization of the secant method to multidimensional problems.
Genetic improvement (computer science)
In computer software development, genetic improvement is the use of optimisation and machine learning techniques, particularly search-based software engineering techniques such as genetic programming, to improve existing software.
Communication-avoiding algorithm
Communication-avoiding algorithms minimize movement of data within a memory hierarchy for improving its running-time and energy consumption. These minimize the total of two costs (in terms of time and
Rosenbrock methods
Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock.
Simulated annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem.
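A bare-bones Python sketch of the accept/reject loop with a geometric cooling schedule; the neighbor proposal and the test function are illustrative assumptions, and real uses need problem-specific tuning:

```python
import math, random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, steps=20000):
    """Minimize f: accept worse moves with probability exp(-delta / T), cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)                      # random nearby candidate
        fy = f(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                         # temperature schedule
    return best, fbest

# Example: rough minimum of a wiggly 1-D function.
f = lambda x: x * x + 10 * math.sin(x)
print(simulated_annealing(f, 5.0, lambda x: x + random.gauss(0, 0.5)))
```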
Sequential quadratic programming
Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization. SQP methods are used on mathematical problems for which the objective function and the constraints
Luus–Jaakola
In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. In engineering use, LJ is not an algorithm that terminates with an optimal soluti
Sequential minimal optimization
Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt
Least squares
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals made in the results of every single equation.
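A small Python sketch, assuming NumPy is available, showing an overdetermined linear fit solved both with numpy.linalg.lstsq and via the normal equations; the data points are made up:

```python
import numpy as np

# Fit y ~ c0 + c1 * x to noisy points by least squares (5 equations, 2 unknowns).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.column_stack([np.ones_like(x), x])       # design matrix [1, x]
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print("intercept, slope:", coef)

# Equivalent closed form via the normal equations A^T A c = A^T y.
print(np.linalg.solve(A.T @ A, A.T @ y))
```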
Sequential linear-quadratic programming
Sequential linear-quadratic programming (SLQP) is an iterative method for nonlinear optimization problems where objective function and constraints are twice continuously differentiable. Similarly to s
Natural evolution strategy
Natural evolution strategies (NES) are a family of numerical optimization algorithms for black box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) paramet
Pattern search (optimization)
Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that does not require a gradient. As a result, it can be used on
Branch and cut
Branch and cut is a method of combinatorial optimization for solving integer linear programs (ILPs), that is, linear programming (LP) problems where some or all the unknowns are restricted to integer values.
Critical line method
No description available.
Parallel metaheuristic
Parallel metaheuristic is a class of techniques that are capable of reducing both the numerical effort and the run time of a metaheuristic. To this end, concepts and technologies from the field of par
Linear-fractional programming
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective
Broyden–Fletcher–Goldfarb–Shanno algorithm
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information.
MM algorithm
The MM algorithm is an iterative optimization method which exploits the convexity of a function in order to find its maxima or minima. The MM stands for “Majorize-Minimization” or “Minorize-Maximization”, depending on whether the desired optimization is a minimization or a maximization.
Simultaneous perturbation stochastic approximation
Simultaneous perturbation stochastic approximation (SPSA) is an algorithmic method for optimizing systems with multiple unknown parameters. It is a type of stochastic approximation algorithm. As an op
Criss-cross algorithm
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear ineq
Simplex algorithm
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was
Successive linear programming
Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. Starting at some estimate of t
Evolutionary programming
Evolutionary programming is one of the four major evolutionary algorithm paradigms. It is similar to genetic programming, but the structure of the program to be optimized is fixed, while its numerical
Ordered subset expectation maximization
In mathematical optimization, the ordered subset expectation maximization (OSEM) method is an iterative method that is used in computed tomography. In applications in medical imaging, the OSEM method
Powell's method
Powell's method, strictly Powell's conjugate direction method, is an algorithm proposed by Michael J. D. Powell for finding a local minimum of a function. The function need not be differentiable, and
Quasi-Newton method
Quasi-Newton methods are methods used to either find zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration.
Exact algorithm
In computer science and operations research, exact algorithms are algorithms that always solve an optimization problem to optimality. Unless P = NP, an exact algorithm for an NP-hard optimization prob
Genetic algorithms in economics
Genetic algorithms have increasingly been applied to economics since the pioneering work by John H. Miller in 1986. It has been used to characterize a variety of models including the cobweb model, the
OR-Tools
Google OR-Tools is a free and open-source software suite developed by Google for solving linear programming (LP), mixed integer programming (MIP), constraint programming (CP), vehicle routing (VRP), a
Stochastic hill climbing
Stochastic hill climbing is a variant of the basic hill climbing method. While basic hill climbing always chooses the steepest uphill move, "stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move."
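A minimal Python sketch of the rule, assuming a maximization problem with a user-supplied neighborhood function; the integer example is illustrative:

```python
import random

def stochastic_hill_climb(f, x0, neighbors, iters=10000):
    """Maximize f by repeatedly jumping to a randomly chosen uphill neighbor."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        uphill = [y for y in neighbors(x) if f(y) > fx]
        if not uphill:
            break                      # local maximum: no improving neighbor
        x = random.choice(uphill)      # random choice, not the steepest move
        fx = f(x)
    return x, fx

# Example: maximize -(x - 3)^2 over integers by unit steps.
print(stochastic_hill_climb(lambda x: -(x - 3) ** 2, 0, lambda x: [x - 1, x + 1]))
```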
Space allocation problem
The space allocation problem (SAP) is the process in architecture, or in any kind of space planning (SP) technique, of determining the position and size of several elements according to the input-spec
IPOPT
IPOPT, short for "Interior Point OPTimizer" (pronounced "I-P-Opt"), is a software library for large-scale nonlinear optimization of continuous systems. It is written in Fortran and C and is released under the Eclipse Public License.
Negamax
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game. This algorithm relies on the fact that max(a, b) = -min(-a, -b) to simplify the implementation of the minimax algorithm.
Branch and bound
Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of
Augmented Lagrangian method
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization
Subgradient method
Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when appl
Crew scheduling
Crew scheduling is the process of assigning crews to operate transportation systems, such as rail lines or airlines.
Stochastic gradient Langevin dynamics
Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique composed of characteristics from stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models.
Bregman method
The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967. The algori
Quantum annealing
Quantum annealing (QA) is an optimization process for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations.
Trust region
In mathematical optimization, a trust region is the subset of the region of the objective function that is approximated using a model function (often a quadratic). If an adequate model of the objective function is found within the trust region, then the region is expanded; conversely, if the approximation is poor, then the region is contracted.
Penalty method
Penalty methods are a certain class of algorithms for solving constrained optimization problems. A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem.
Nonlinear conjugate gradient method
In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function f(x) = ||Ax - b||^2, the minimum of f is obtained when the gradient is zero: grad f(x) = 2 A^T (Ax - b) = 0.
Iterated local search
Iterated Local Search (ILS) is a term in applied mathematics and computer science defining a modification of local search or hill climbing methods for solving discrete optimization problems. Local search methods can get stuck in a local minimum, where no improving neighbors are available.
Learning rate
In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function.
Odds algorithm
The odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows
Divide-and-conquer algorithm
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.
Cunningham's rule
In mathematical optimization, Cunningham's rule (also known as least recently considered rule or round-robin rule) is an algorithmic refinement of the simplex method for linear optimization. The rule
Second-order cone programming
A second-order cone program (SOCP) is a convex optimization problem of the form: minimize f^T x subject to ||A_i x + b_i||_2 <= c_i^T x + d_i for i = 1, ..., m, and Fx = g, where x in R^n is the optimization variable, the problem parameters are f in R^n, A_i in R^(n_i x n), b_i in R^(n_i), c_i in R^n, d_i in R, F in R^(p x n) and g in R^p, and ||.||_2 is the Euclidean norm.
Guillotine cutting
Guillotine cutting is the process of producing small rectangular items of fixed dimensions from a given large rectangular sheet, using only guillotine-cuts. A guillotine-cut (also called an edge-to-ed
Interior-point method
Interior-point methods (also referred to as barrier methods or IPMs) are a certain class of algorithms that solve linear and nonlinear convex optimization problems. An interior point method was discov
Bacterial colony optimization
The bacterial colony optimization algorithm is an optimization algorithm which is based on a lifecycle model that simulates some typical behaviors of E. coli bacteria during their whole lifecycle, inc
Gradient descent
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent.
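A minimal Python sketch with a fixed step size; the quadratic example and its hand-coded gradient are illustrative assumptions, and practical uses typically add a line search or an adaptive step:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, iters=1000, tol=1e-10):
    """Basic gradient descent: step against the gradient with a fixed rate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = x - lr * grad(x)           # move in the steepest-descent direction
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2; gradient coded by hand.
grad = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 2)])
print(gradient_descent(grad, [0.0, 0.0]))   # converges to approximately [1, -2]
```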
Spiral optimization algorithm
In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature. The first SPO algorithm was proposed for two-dimensional unconstrained optimization based on two-dimensional spiral models.
Stochastic dynamic programming
Originally introduced by Richard E. Bellman in the 1950s, stochastic dynamic programming is a technique for modelling and solving problems of decision making under uncertainty. It is closely related to stochastic programming and dynamic programming.
Branch and price
In applied mathematics, branch and price is a method of combinatorial optimization for solving integer linear programming (ILP) and mixed integer linear programming (MILP) problems with many variables
List of algorithms
The following is a list of well-known algorithms along with one-line descriptions for each.
Mehrotra predictor–corrector method
Mehrotra's predictor–corrector method in optimization is a specific interior point method for linear programming. It was proposed in 1989 by Sanjay Mehrotra. The method is based on the fact that at ea
Semidefinite embedding
Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of h
Frank–Wolfe algorithm
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the conv
Gradient method
In optimization, a gradient method is an algorithm to solve problems of the form min_x f(x), with the search directions defined by the gradient of the function at the current point. Examples of gradient methods are gradient descent and the conjugate gradient method.
Bregman Lagrangian
The Bregman-Lagrangian framework permits a systematic understanding of the matching rates associated with higher-order gradient methods in discrete and continuous time.
Line search
In optimization, the line search strategy is one of two basic iterative approaches to find a local minimum of an objective function. The other approach is trust region. The line search approach first finds a descent direction along which the objective function will be reduced, and then computes a step size that determines how far to move along that direction.
Local search (optimization)
In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximi
Successive parabolic interpolation
Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function by successively fitting parabolas (polynomials of degree two) to a fun
Fractional programming
In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in genera
Lemke's algorithm
In mathematical optimization, Lemke's algorithm is a procedure for solving linear complementarity problems, and more generally mixed linear complementarity problems. It is named after Carlton E. Lemke
Derivation of the conjugate gradient method
In numerical linear algebra, the conjugate gradient method is an iterative method for numerically solving the linear system Ax = b, where A is symmetric positive-definite. The conjugate gradient method can be derived from several different perspectives.
Envy minimization
In computer science and operations research, the envy minimization problem is the problem of allocating discrete items among agents with different valuations over the items, such that the amount of en
Auction algorithm
The term "auction algorithm" applies to several variations of a combinatorial optimization algorithm which solves assignment problems, and network optimization problems with linear and convex/nonlinea
Alpha–beta pruning
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games.
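A Python sketch of minimax with alpha–beta cutoffs on a generic game tree; the `children`/`evaluate` callbacks and the nested-list toy tree are illustrative placeholders:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cutoffs: skip branches that cannot affect the result."""
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for s in succ:
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cutoff: the opponent will avoid this branch
        return value
    value = float("inf")
    for s in succ:
        value = min(value, alphabeta(s, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                          # alpha cutoff
    return value

# Toy game tree given as nested lists of leaf scores.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
print(alphabeta(tree, 2, float("-inf"), float("inf"), True, children, lambda s: s))  # -> 3
```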
Gauss–Newton algorithm
The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a m
Interval contractor
In mathematics, an interval contractor (or contractor for short) associated to a set X is an operator C which associates to a box [x] in Rn another box C([x]) of Rn such that the two following propert
Active-set method
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality co
Expectation–maximization algorithm
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where t
Berndt–Hall–Hall–Hausman algorithm
The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product
HiGHS optimization solver
HiGHS is open-source software to solve linear programming (LP), mixed-integer programming (MIP), and convex quadratic programming (QP) models. Written in C++ and published under an MIT license, HiGHS
Guided local search
Guided local search is a metaheuristic search method. A meta-heuristic method is a method that sits on top of a local search algorithm to change its behavior. Guided local search builds up penalties d
In-crowd algorithm
The in-crowd algorithm is a numerical method for solving basis pursuit denoising quickly; faster than any other algorithm for large, sparse problems. This algorithm is an active set method, which mini
Maximum subarray problem
In computer science, the maximum sum subarray problem, also known as the maximum segment sum problem, is the task of finding a contiguous subarray with the largest sum, within a given one-dimensional array of numbers.
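A Python sketch of the standard linear-time solution (Kadane's algorithm), which scans once while keeping the best sum ending at each position:

```python
def max_subarray_sum(a):
    """Kadane's algorithm: O(n) scan over the array."""
    best = current = a[0]
    for x in a[1:]:
        current = max(x, current + x)   # either extend the current run or restart at x
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # -> 6, from subarray [4, -1, 2, 1]
```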
Robust fuzzy programming
Robust fuzzy programming (ROFP) is a powerful mathematical optimization approach to deal with optimization problems under uncertainty. This approach was first introduced in 2012 by Pishvaee, Razmi and Torabi.
Special ordered set
In discrete optimization, a special ordered set (SOS) is an ordered set of variables used as an additional way to specify integrality conditions in an optimization model. Special order sets are basica
Bin packing problem
The bin packing problem is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used.
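Since bin packing is NP-hard, simple heuristics are common in practice; below is a Python sketch of the first-fit-decreasing heuristic (a well-known approximation, not an exact solver), with made-up item sizes:

```python
def first_fit_decreasing(sizes, capacity):
    """Sort items in decreasing size and place each into the first bin where it fits."""
    bins = []                                   # each bin is a list of item sizes
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])                    # no existing bin fits: open a new one
    return bins

print(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1, 6], capacity=10))
```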
Extremal optimization
Extremal optimization (EO) is an optimization heuristic inspired by the Bak–Sneppen model of self-organized criticality from the field of statistical physics. This heuristic was designed initially to
Zadeh's rule
In mathematical optimization, Zadeh's rule (also known as the least-entered rule) is an algorithmic refinement of the simplex method for linear optimization. The rule was proposed around 1980 by Norma
Matheuristics
Matheuristics are problem agnostic optimization algorithms that make use of mathematical programming (MP) techniques in order to obtain heuristic solutions. Problem-dependent elements are included onl
Newton's method in optimization
In calculus, Newton's method is an iterative method for finding the roots of a differentiable function F, which are solutions to the equation F(x) = 0. As such, Newton's method can be applied to the derivative f' of a twice-differentiable function f to find the roots of the derivative (solutions to f'(x) = 0), also known as the critical points of f.
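A minimal Python sketch of the one-dimensional iteration x_{k+1} = x_k - f'(x_k)/f''(x_k), applied to an illustrative polynomial with hand-coded derivatives:

```python
def newton_minimize_1d(df, d2f, x0, iters=50, tol=1e-12):
    """Newton's method applied to f'(x) = 0 to locate a critical point of f."""
    x = x0
    for _ in range(iters):
        step = df(x) / d2f(x)          # Newton step on the derivative
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^4 - 3x^3 + 2, so f'(x) = 4x^3 - 9x^2 and f''(x) = 12x^2 - 18x.
print(newton_minimize_1d(lambda x: 4 * x**3 - 9 * x**2,
                         lambda x: 12 * x**2 - 18 * x,
                         x0=3.0))       # converges to the minimizer x = 2.25
```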
Random optimization
Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the problem to be optimized and RO can hence be used on functions that are not continuous or
Adaptive coordinate descent
Adaptive coordinate descent is an improvement of the coordinate descent algorithm to non-separable optimization by the use of adaptive encoding. The adaptive coordinate descent approach gradually builds a transformation of the coordinate system such that the new coordinates are as decorrelated as possible with respect to the objective function.
Greedy algorithm
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.
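An illustrative Python sketch of a greedy strategy, using change-making with a canonical coin system (where the greedy choice happens to be optimal); the denominations are an example, and for arbitrary denominations the same greedy rule can fail:

```python
def greedy_coin_change(amount, denominations):
    """Greedy change-making: always take the largest coin that still fits."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)            # locally optimal choice at each stage
            amount -= d
    return coins

print(greedy_coin_change(68, [25, 10, 5, 1]))   # -> [25, 25, 10, 5, 1, 1, 1]
```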
Hyper-heuristic
A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several si
Truncated Newton method
Truncated Newton methods, also known as Hessian-free optimization, are a family of optimization algorithms designed for optimizing non-linear functions with large numbers of independent variables. A t
Least-squares spectral analysis
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the mos
Great deluge algorithm
The Great deluge algorithm (GD) is a generic algorithm applied to optimization problems. It is similar in many ways to the hill-climbing and simulated annealing algorithms. The name comes from the ana
Guillotine partition
Guillotine partition is the process of partitioning a rectilinear polygon, possibly containing some holes, into rectangles, using only guillotine-cuts. A guillotine-cut (also called an edge-to-edge cu
Levenberg–Marquardt algorithm
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimiz
Constructive heuristic
A constructive heuristic is a type of heuristic method which starts with an empty solution and repeatedly extends the current solution until a complete solution is obtained. It differs from local sear
Kantorovich theorem
The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948. It is similar t
PSeven
pSeven is a DSE (Design Space Exploration) software platform developed by DATADVANCE, extending design, simulation and analysis capabilities and assisting in faster design decisions. It provides integ
Derivative-free optimization
Derivative-free optimization is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: Sometimes information about the der
Limited-memory BFGS
Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory.
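A short usage sketch, assuming SciPy is installed: scipy.optimize.minimize exposes an L-BFGS implementation (the bound-constrained variant "L-BFGS-B"), shown here on the Rosenbrock test function:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Minimize the Rosenbrock function with SciPy's L-BFGS-B routine,
# supplying the analytic gradient so no finite differencing is needed.
x0 = np.array([-1.2, 1.0, 0.8, 1.1, 0.9])
result = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
print(result.x)                # close to the all-ones minimizer
print(result.nit, "iterations")
```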
Fourier–Motzkin elimination
Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can output real solutions. The algorithm is na
Affine scaling
In mathematical optimization, affine scaling is an algorithm for solving linear programming problems. Specifically, it is an interior point method, discovered by Soviet mathematician I. I. Dikin in 1967.
Iterated conditional modes
In statistics, iterated conditional modes is a deterministic algorithm for obtaining a configuration of a local maximum of the joint probability of a Markov random field. It does this by iteratively m
Destination dispatch
Destination dispatch is an optimization technique used for multi-elevator installations, in which passengers heading to the same destinations are grouped into the same elevators, thereby reducing waiting and travel times.
Graduated optimization
Graduated optimization is a global optimization technique that attempts to solve a difficult optimization problem by initially solving a greatly simplified problem, and progressively transforming that
Social cognitive optimization
Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm which was developed in 2002. This algorithm is based on the social cognitive theory, and the key point of
Level-set method
Level-set methods (LSM) are a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. The advantage of the level-set model is that one can perform numerical
Davidon–Fletcher–Powell formula
The Davidon–Fletcher–Powell formula (or DFP; named after William C. Davidon, Roger Fletcher, and Michael J. D. Powell) finds the solution to the secant equation that is closest to the current estimate
Ant colony optimization algorithms
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths thro
Parametric programming
Parametric programming is a type of mathematical optimization, where the optimization problem is solved as a function of one or multiple parameters. Developed in parallel to sensitivity analysis, its
Tree rearrangement
Tree rearrangements are deterministic algorithms devoted to searching for an optimal tree structure. They can be applied to any set of data that are naturally arranged into a tree, but have most appli
Benson's algorithm
Benson's algorithm, named after Harold P. Benson, is a method for solving multi-objective linear programming problems and vector linear programs. It works by finding the "efficient extreme points in the outcome set".
Quadratic programming
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate
Fernandez's method
Fernandez's method (FB) in computer science and operations research, is a method which is used in the multiprocessor scheduling algorithm. It is actually used to improve the quality of the lower bound
Greedy triangulation
The Greedy Triangulation is a method to compute a polygon triangulation or a Point set triangulation using a greedy schema, which adds edges one by one to the solution in strict increasing order by le
Basin-hopping
In applied mathematics, Basin-hopping is a global optimization technique that iterates by performing random perturbation of coordinates, performing local optimization, and accepting or rejecting new c
Column generation
Column generation or delayed column generation is an efficient algorithm for solving large linear programs. The overarching idea is that many linear programs are too large to consider all the variable
IOSO
IOSO (Indirect Optimization on the basis of Self-Organization) is a multiobjective, multidimensional nonlinear optimization technology.
Dykstra's projection algorithm
Dykstra's algorithm is a method that computes a point in the intersection of convex sets, and is a variant of the alternating projection method (also called the projections onto convex sets method). I
Lloyd's algorithm
In electrical engineering and computer science, Lloyd's algorithm, also known as Voronoi iteration or relaxation, is an algorithm named after Stuart P. Lloyd for finding evenly spaced sets of points i
Dynamic programming
Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields
Powell's dog leg method
Powell's dog leg method is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell. Similarly to the Levenberg–Marquardt a
Mirror descent
In mathematics, mirror descent is an iterative optimization algorithm for finding a local minimum of a differentiable function. It generalizes algorithms such as gradient descent and multiplicative we
Generalized iterative scaling
In statistics, generalized iterative scaling (GIS) and improved iterative scaling (IIS) are two early algorithms used to fit log-linear models, notably multinomial logistic regression (MaxEnt) classif
Killer heuristic
In competitive two-player games, the killer heuristic is a move-ordering method based on the observation that a strong move or small set of such moves in a particular position may be equally strong in
MCS algorithm
For mathematical optimization, Multilevel Coordinate Search (MCS) is an efficient algorithm for bound constrained global optimization using function values only. To do so, the n-dimensional search spa
Matrix chain multiplication
Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved.
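A Python sketch of the classic dynamic-programming solution over chain lengths; the dimension list is an illustrative example:

```python
def matrix_chain_order(dims):
    """Minimal scalar-multiplication count for multiplying a chain of matrices.

    dims has length n + 1 for n matrices: matrix i has shape dims[i] x dims[i+1].
    """
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]          # cost[i][j]: best cost for A_i .. A_j
    for length in range(2, n + 1):              # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)            # split point between A_k and A_{k+1}
            )
    return cost[0][n - 1]

# Example: A(10x30) * B(30x5) * C(5x60); the best order (A*B)*C costs 4500.
print(matrix_chain_order([10, 30, 5, 60]))      # -> 4500
```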
Cutting-plane method
In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed
Local convergence
In numerical analysis, an iterative method is called locally convergent if the successive approximations produced by the method are guaranteed to converge to a solution when the initial approximation
Backtracking line search
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
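A minimal Python sketch, assuming NumPy, that shrinks the step until the Armijo sufficient-decrease condition f(x + a*d) <= f(x) + c * a * grad(x)^T d holds; the quadratic test problem is made up:

```python
import numpy as np

def backtracking_line_search(f, grad_fx, x, direction, alpha=1.0, rho=0.5, c=1e-4):
    """Return a step size satisfying the Armijo condition along a descent direction."""
    fx = f(x)
    slope = np.dot(grad_fx, direction)          # should be negative for a descent direction
    while f(x + alpha * direction) > fx + c * alpha * slope:
        alpha *= rho                            # backtrack: shrink the step geometrically
    return alpha

# Example: steepest-descent direction for a simple quadratic at x = (2, 2).
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: np.array([2 * v[0], 20 * v[1]])
x = np.array([2.0, 2.0])
d = -grad(x)
print(backtracking_line_search(f, grad(x), x, d))
```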
Bin covering problem
In the bin covering problem, items of different sizes must be packed into a finite number of bins or containers, each of which must contain at least a certain given total size, in a way that maximizes
DATADVANCE
DATADVANCE is a software development company that evolved out of a collaborative research program between Airbus and the Institute for Information Transmission Problems of the Russian Academy of Sciences.
Random search
Random search (RS) is a family of numerical optimization methods that do not require the gradient of the problem to be optimized, and RS can hence be used on functions that are not continuous or diffe
Bland's rule
In mathematical optimization, Bland's rule (also known as Bland's algorithm, Bland's anti-cycling rule or Bland's pivot rule) is an algorithmic refinement of the simplex method for linear optimization
Zionts–Wallenius method
Within computer science, the Zionts–Wallenius method is an interactive method used to find a best solution to a multi-criteria optimization problem.
Evolutionary algorithm
In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspi
Ternary search
A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function. A ternary search determines either that the minimum or maximum cannot be in the first third of the domain, or that it cannot be in the last third of the domain, then repeats on the remaining two thirds.
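A minimal Python sketch for a unimodal minimization problem; the test function is an illustrative example:

```python
def ternary_search_min(f, lo, hi, tol=1e-9):
    """Shrink [lo, hi] by discarding one third of the interval per iteration."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2                 # the minimum cannot lie in the last third
        else:
            lo = m1                 # the minimum cannot lie in the first third
    return (lo + hi) / 2

print(ternary_search_min(lambda x: (x - 1.5) ** 2 + 4, -10, 10))   # -> approximately 1.5
```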
Cross-entropy method
The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.
Adaptive dimensional search
Adaptive dimensional search algorithms differ from nature-inspired metaheuristic techniques in the sense that they do not use any metaphor as an underlying principle for implementation. Rather, they u
Ruzzo–Tompa algorithm
The Ruzzo–Tompa algorithm is a linear-time algorithm for finding all non-overlapping, contiguous, maximal scoring subsequences in a sequence of real numbers. This algorithm is an improvement over prev
Search-based software engineering
Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in so
Fly algorithm
The Fly Algorithm is a type of cooperative coevolution based on the Parisian approach. The Fly Algorithm was first developed in 1999 in the scope of the application of evolutionary algorithms to computer stereo vision.
Newton's method
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximati
Nelder–Mead method
The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space.
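A short usage sketch, assuming SciPy is installed: scipy.optimize.minimize provides a Nelder–Mead simplex implementation, shown here on a made-up smooth 2-D function:

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free minimization with the Nelder-Mead simplex method.
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2 + 1
result = minimize(f, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x)        # close to [3, -1], the minimizer of f
```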