
Quadratic Programming Algorithms

Quadratic Programming Definition

Quadratic programming is the problem of finding a vector x that minimizes a quadratic function, possibly subject to linear constraints:

$$\min_x \tfrac{1}{2}x^T H x + c^T x \tag{1}$$

such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u.
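
For example, the following call solves a small instance of Equation 1 with quadprog. The data here is illustrative, not taken from the text:

    % Minimize 1/2*x'*H*x + c'*x subject to A*x <= b and lower bounds
    H = [2 0; 0 2];            % positive definite quadratic term
    c = [-2; -5];              % linear term
    A = [1 1];  b = 2;         % one linear inequality
    lb = [0; 0];  ub = [];     % simple lower bounds, no upper bounds
    x = quadprog(H, c, A, b, [], [], lb, ub);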

interior-point-convex quadprog Algorithm

The interior-point-convex algorithm performs the following steps:

  • Presolve/Postsolve

  • Generate Initial Point

  • Predictor-Corrector

  • Stopping Conditions

  • Infeasibility Detection

Note

The algorithm has two code paths. It takes one when the Hessian matrix H is an ordinary (full) matrix of doubles, and it takes the other when H is a sparse matrix. For details of the sparse data type, see Sparse Matrices. Generally, the algorithm is faster for large problems that have relatively few nonzero terms when you specify H as sparse. Similarly, the algorithm is faster for small or relatively dense problems when you specify H as full.
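
For example, a large Hessian with few nonzero entries can be stored as a sparse matrix before calling the solver. This sketch uses illustrative data:

    % Sparse storage lets the solver take its sparse code path
    n = 5000;
    H = speye(n);                          % sparse identity Hessian
    c = -ones(n,1);
    lb = zeros(n,1);  ub = ones(n,1);
    opts = optimoptions('quadprog','Algorithm','interior-point-convex');
    x = quadprog(H, c, [], [], [], [], lb, ub, [], opts);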

Presolve/Postsolve

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

  • Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

  • Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

  • Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

  • Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

  • Determine if the bounds and linear constraints are consistent.

  • Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

  • Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For details, see Gould and Toint [63].

Generate Initial Point

The initial point x0 for the algorithm is:

  1. Initialize x0 to ones(n,1), where n is the number of rows in H.

  2. For components that have both an upper bound ub and a lower bound lb, if a component of x0 is not strictly inside the bounds, the component is set to (ub + lb)/2.

  3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

  4. Take a predictor step (see Predictor-Corrector), with minor corrections for feasibility, not a full predictor-corrector step. This places the initial point closer to the central path without entailing the overhead of a full predictor-corrector step. For details of the central path, see Nocedal and Wright [8], page 397.
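
The first three steps can be sketched as follows. This is only an illustration of the rules above (lb and ub are the bound vectors), not the solver's internal code:

    % Sketch of steps 1-3 for generating the initial point x0
    n  = size(H,1);
    x0 = ones(n,1);
    for i = 1:n
        if isfinite(lb(i)) && isfinite(ub(i))        % two-sided bounds
            if x0(i) <= lb(i) || x0(i) >= ub(i)
                x0(i) = (ub(i) + lb(i))/2;           % move strictly inside
            end
        elseif isfinite(lb(i)) && x0(i) <= lb(i)     % lower bound only
            x0(i) = lb(i) + 1;
        elseif isfinite(ub(i)) && x0(i) >= ub(i)     % upper bound only
            x0(i) = ub(i) - 1;
        end
    end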

Predictor-Corrector

The sparse and full interior-point-convex algorithms differ mainly in the predictor-corrector phase. The algorithms are similar, but differ in some details. For the basic algorithm description, see Mehrotra [47].

The algorithms begin by turning the linear inequalities A·x ≤ b into inequalities of the form −A·x ≥ −b by multiplying A and b by −1. This has no bearing on the solution, but puts the problem into the same form found in some literature.

  • Sparse Predictor-Corrector

  • Full Predictor-Corrector

Sparse Predictor-Corrector. Similar to the fmincon interior-point algorithm, the sparse interior-point-convex algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. For the quadratic programming problem described in Quadratic Programming Definition, these conditions are:

$$\begin{aligned}
Hx + c - A_{eq}^T y - \bar{A}^T z &= 0\\
\bar{A}x - \bar{b} - s &= 0\\
A_{eq}x - b_{eq} &= 0\\
s_i z_i &= 0,\quad i = 1,2,\ldots,m\\
s &\ge 0\\
z &\ge 0.
\end{aligned}$$

Here

  • Ā is the extended linear inequality matrix that includes bounds written as linear inequalities. b̄ is the corresponding linear inequality vector, including bounds.

  • s is the vector of slacks that convert inequality constraints to equalities. s has length m, the number of linear inequalities and bounds.

  • z is the vector of Lagrange multipliers corresponding to s.

  • y is the vector of Lagrange multipliers associated with the equality constraints.

The algorithm first predicts a step from the Newton-Raphson formula, then computes a corrector step. The corrector attempts to better enforce the nonlinear constraint $s_i z_i = 0$.

Definitions for the predictor step: the residuals $r_d$, $r_{eq}$, $r_{ineq}$, and $r_{sz}$ are the amounts by which the current point violates, respectively, the dual, equality constraint, inequality constraint, and complementarity conditions listed above.

In a Newton step, the changes in x, s, y, and z are given by:

$$\begin{pmatrix}
H & 0 & -A_{eq}^T & -\bar{A}^T\\
A_{eq} & 0 & 0 & 0\\
\bar{A} & -I & 0 & 0\\
0 & Z & 0 & S
\end{pmatrix}
\begin{pmatrix}\Delta x\\ \Delta s\\ \Delta y\\ \Delta z\end{pmatrix}
= -\begin{pmatrix} r_d\\ r_{eq}\\ r_{ineq}\\ r_{sz}\end{pmatrix}.\tag{2}$$

However, a full Newton step might be infeasible, because of the positivity constraints on s and z. Therefore, quadprog shortens the step, if necessary, to maintain positivity.

Additionally, to maintain a “centered” position in the interior, instead of trying to solve $s_i z_i = 0$, the algorithm takes a positive parameter σ, and tries to solve

$$s_i z_i = \sigma r_c.$$

quadprog replaces $r_{sz}$ in the Newton step equation with $r_{sz} + \Delta s\,\Delta z - \sigma r_c \mathbf{1}$, where $\mathbf{1}$ is the vector of ones. Also, quadprog reorders the Newton equations to obtain a symmetric, more numerically stable system for the predictor step calculation.

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [5].

Full Predictor-Corrector. The full predictor-corrector algorithm does not combine bounds into linear constraints, so it has another set of slack variables corresponding to the bounds. The algorithm shifts lower bounds to zero. And, if there is only one bound on a variable, the algorithm turns it into a lower bound of zero, by negating the inequality of an upper bound.

Ā is the extended linear matrix that includes both linear inequalities and linear equalities. b̄ is the corresponding linear equality vector. Ā also includes terms for extending the vector x with slack variables s that turn inequality constraints to equality constraints:

$$\bar{A}x = \begin{pmatrix} A_{eq} & 0\\ A & I\end{pmatrix}\begin{pmatrix} x_0\\ s\end{pmatrix},$$

where x0 means the original x vector.

The KKT conditions are

$$\begin{aligned}
Hx + c - \bar{A}^T y - v + w &= 0\\
\bar{A}x &= \bar{b}\\
x + t &= u\\
v_i x_i &= 0,\quad i = 1,2,\ldots,m\\
w_i t_i &= 0,\quad i = 1,2,\ldots,n\\
x,\ v,\ w,\ t &\ge 0.
\end{aligned}\tag{3}$$

To find the solution x, slack variables, and dual variables for Equation 3, the algorithm basically considers a Newton-Raphson step:

$$\begin{pmatrix}
H & -\bar{A}^T & 0 & -I & I\\
\bar{A} & 0 & 0 & 0 & 0\\
-I & 0 & -I & 0 & 0\\
V & 0 & 0 & X & 0\\
0 & 0 & W & 0 & T
\end{pmatrix}
\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta t\\ \Delta v\\ \Delta w\end{pmatrix}
= -\begin{pmatrix}
Hx + c - \bar{A}^T y - v + w\\
\bar{A}x - \bar{b}\\
u - x - t\\
VX\\
WT
\end{pmatrix}
= -\begin{pmatrix} r_d\\ r_p\\ r_{ub}\\ r_{vx}\\ r_{wt}\end{pmatrix},\tag{4}$$

where X, V, W, and T are diagonal matrices corresponding to the vectors x, v, w, and t respectively. The residual vectors on the far right side of the equation are:

  • rd, the dual residual

  • rp, the primal residual

  • rub, the upper bound residual

  • rvx, the lower bound complementarity residual

  • rwt, the upper bound complementarity residual

The algorithm solves Equation 4 by first converting it to the symmetric matrix form

$$\begin{pmatrix} -D & \bar{A}^T\\ \bar{A} & 0\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\end{pmatrix} = \begin{pmatrix} -R\\ -r_p\end{pmatrix},\tag{5}$$

where

$$\begin{aligned}
D &= H + X^{-1}V + T^{-1}W\\
R &= -r_d - X^{-1}r_{vx} + T^{-1}r_{wt} + T^{-1}W r_{ub}.
\end{aligned}$$

All the matrix inverses in the definitions of D and R are simple to compute because the matrices are diagonal.
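
Because X, V, T, and W are diagonal, forming D and R reduces to elementwise operations. A minimal sketch, assuming the vectors x, v, t, w hold the current iterate and rd, rvx, rwt, rub hold the residuals defined above:

    % Diagonal "inverses" are just elementwise divisions
    D = H + diag(v./x) + diag(w./t);
    R = -rd - rvx./x + rwt./t + (w./t).*rub;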

To derive Equation 5 from Equation 4, notice that the second row of Equation 5 is the same as the second matrix row of Equation 4. The first row of Equation 5 comes from solving the last two rows of Equation 4 for Δv and Δw, and then solving for Δt.

To solve Equation 5, the algorithm follows the essential elements of Altman and Gondzio [1]. The algorithm solves the symmetric system by an LDL decomposition. As pointed out by authors such as Vanderbei and Carpenter [2], this decomposition is numerically stable without any pivoting, so can be fast.

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [5].

The full quadprog predictor-corrector algorithm is largely the same as that in the linprog 'interior-point' algorithm, but includes quadratic terms as well. See Predictor-Corrector.

References

[1] Altman, Anna and J. Gondzio. Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. Optimization Methods and Software, 1999.

[2] Vanderbei, R. J. and T. J. Carpenter. Symmetric indefinite systems for interior point methods. Mathematical Programming 58, 1993, pp. 1–32.

Stopping Conditions

The predictor-corrector algorithm iterates until it reaches a point that is feasible (satisfies the constraints to within tolerances) and where the relative step sizes are small. Specifically, define

$$\rho = \max\bigl(1,\ \|H\|,\ \|\bar{A}\|,\ \|A_{eq}\|,\ \|c\|,\ \|\bar{b}\|,\ \|b_{eq}\|\bigr)$$

The algorithm stops when all of these conditions are satisfied:

$$\begin{aligned}
\|r_p\|_1 + \|r_{ub}\|_1 &\le \rho\,\mathrm{TolCon}\\
\|r_d\| &\le \rho\,\mathrm{TolFun}\\
r_c &\le \mathrm{TolFun},
\end{aligned}$$

where

$$r_c = \max_i\Bigl(\min\bigl(|x_i v_i|,\ |x_i|,\ |v_i|\bigr),\ \min\bigl(|t_i w_i|,\ |t_i|,\ |w_i|\bigr)\Bigr).$$

rc essentially measures the size of the complementarity residuals xv and tw, which are each vectors of zeros at a solution.
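
In MATLAB terms, rc can be computed elementwise. A sketch, assuming x, v, t, w hold the current iterate vectors:

    % Complementarity measure r_c from the stopping conditions
    rc = max(max(min([abs(x.*v), abs(x), abs(v)], [], 2)), ...
             max(min([abs(t.*w), abs(t), abs(w)], [], 2)));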

Infeasibility Detection

quadprog calculates a merit function φ at every iteration. The merit function is a measure of feasibility. quadprog stops if the merit function grows too large. In this case, quadprog declares the problem to be infeasible.

The merit function is related to the KKT conditions for the problem (see Predictor-Corrector). Use the following definitions:

$$\begin{aligned}
\rho &= \max\bigl(1,\ \|H\|,\ \|\bar{A}\|,\ \|A_{eq}\|,\ \|c\|,\ \|\bar{b}\|,\ \|b_{eq}\|\bigr)\\
r_{eq} &= A_{eq}x - b_{eq}\\
r_{ineq} &= \bar{A}x - \bar{b} - s\\
r_d &= Hx + c + A_{eq}^T\lambda_{eq} + \bar{A}^T\bar{\lambda}_{ineq}\\
g &= \tfrac{1}{2}x^T H x + c^T x - \bar{b}^T\bar{\lambda}_{ineq} - b_{eq}^T\lambda_{eq}.
\end{aligned}$$

The notation Ā and b̄ means the linear inequality coefficients, augmented with terms to represent bounds for the sparse algorithm. The notation $\bar{\lambda}_{ineq}$ similarly represents Lagrange multipliers for the linear inequality constraints, including bound constraints. This was called z in Predictor-Corrector, and λeq was called y.

The merit function φ is

$$\varphi = \frac{1}{\rho}\Bigl(\max\bigl(\|r_{eq}\|,\ \|r_{ineq}\|,\ \|r_d\|\bigr) + g\Bigr).$$

If this merit function becomes too large, quadprog declares the problem to be infeasible and halts with exit flag -2.
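
A sketch of the merit function computation using the definitions above. The variable names (Abar, bbar, lam_eq, lam_ineq) and the choice of 1-norms are assumptions for illustration:

    % Merit function phi (sketch)
    rho   = max([1, norm(H,1), norm(Abar,1), norm(Aeq,1), ...
                 norm(c,1), norm(bbar,1), norm(beq,1)]);
    req   = Aeq*x - beq;
    rineq = Abar*x - bbar - s;
    rd    = H*x + c + Aeq'*lam_eq + Abar'*lam_ineq;
    g     = 0.5*(x'*H*x) + c'*x - bbar'*lam_ineq - beq'*lam_eq;
    phi   = (max([norm(req,1), norm(rineq,1), norm(rd,1)]) + g)/rho;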

trust-region-reflective quadprog Algorithm

Many of the methods used in Optimization Toolbox™ solvers are based on trust regions, a simple yet powerful concept in optimization.

To understand the trust-region approach to optimization, consider the unconstrained minimization problem, minimize f(x), where the function takes vector arguments and returns scalars. Suppose you are at a point x in n-space and you want to improve, i.e., move to a point with a lower function value. The basic idea is to approximate f with a simpler function q, which reasonably reflects the behavior of function f in a neighborhood N around the point x. This neighborhood is the trust region. A trial step s is computed by minimizing (or approximately minimizing) over N. This is the trust-region subproblem,

$$\min_s\{\,q(s),\ s\in N\,\}.\tag{6}$$

The current point is updated to be x + s if f(x + s) < f(x); otherwise, the current point remains unchanged and N, the region of trust, is shrunk and the trial step computation is repeated.

The key questions in defining a specific trust-region approach to minimizing f(x) are how to choose and compute the approximation q (defined at the current point x), how to choose and modify the trust region N, and how accurately to solve the trust-region subproblem. This section focuses on the unconstrained problem. Later sections discuss additional complications due to the presence of constraints on the variables.

In the standard trust-region method ([48]), the quadratic approximation q is defined by the first two terms of the Taylor approximation to f at x; the neighborhood N is usually spherical or ellipsoidal in shape. Mathematically the trust-region subproblem is typically stated

$$\min\Bigl\{\tfrac{1}{2}s^T H s + s^T g\ \text{ such that }\ \|Ds\|\le\Delta\Bigr\},\tag{7}$$

where g is the gradient of f at the current point x, H is the Hessian matrix (the symmetric matrix of second derivatives), D is a diagonal scaling matrix, Δ is a positive scalar, and ‖·‖ is the 2-norm. Good algorithms exist for solving Equation 7 (see [48]); such algorithms typically involve the computation of all eigenvalues of H and a Newton process applied to the secular equation

$$\frac{1}{\Delta} - \frac{1}{\|s\|} = 0.$$

Such algorithms provide an accurate solution to Equation 7. However, they require time proportional to several factorizations of H. Therefore, for large-scale problems a different approach is needed. Several approximation and heuristic strategies, based on Equation 7, have been proposed in the literature ([42] and [50]). The approximation approach followed in Optimization Toolbox solvers is to restrict the trust-region subproblem to a two-dimensional subspace S ([39] and [42]). Once the subspace S has been computed, the work to solve Equation 7 is trivial even if full eigenvalue/eigenvector information is needed (since in the subspace, the problem is only two-dimensional). The dominant work has now shifted to the determination of the subspace.

The two-dimensional subspace S is determined with the aid of a preconditioned conjugate gradient process described below. The solver defines S as the linear space spanned by s1 and s2, where s1 is in the direction of the gradient g, and s2 is either an approximate Newton direction, i.e., a solution to

$$H s_2 = -g,\tag{8}$$

or a direction of negative curvature,

$$s_2^T H s_2 < 0.\tag{9}$$

The philosophy behind this choice of S is to force global convergence (via the steepest descent direction or negative curvature direction) and achieve fast local convergence (via the Newton step, when it exists).

A sketch of unconstrained minimization using trust-region ideas is now easy to give:

  1. Formulate the two-dimensional trust-region subproblem.

  2. Solve Equation 7 to determine the trial step s.

  3. If f(x + s) < f(x), then x = x + s.

  4. Adjust Δ.

These four steps are repeated until convergence. The trust-region dimension Δ is adjusted according to standard rules. In particular, it is decreased if the trial step is not accepted, i.e., f(x + s) ≥ f(x). See [46] and [49] for a discussion of this aspect.

Optimization Toolbox solvers treat a few important special cases of f with specialized functions: nonlinear least-squares, quadratic functions, and linear least-squares. However, the underlying algorithmic ideas are the same as for the general case. These special cases are discussed in later sections.

The subspace trust-region method is used to determine a search direction. However, instead of restricting the step to (possibly) one reflection step, as in the nonlinear minimization case, a piecewise reflective line search is conducted at each iteration. See [45] for details of the line search.

Preconditioned Conjugate Gradient Method

A popular way to solve large, symmetric, positive definite systems of linear equations Hp = –g is the method of Preconditioned Conjugate Gradients (PCG). This iterative approach requires the ability to calculate matrix-vector products of the form H·v where v is an arbitrary vector. The symmetric positive definite matrix M is a preconditioner for H. That is, M = C², where C⁻¹HC⁻¹ is a well-conditioned matrix or a matrix with clustered eigenvalues.

In a minimization context, you can assume that the Hessian matrix H is symmetric. However, H is guaranteed to be positive definite only in the neighborhood of a strong minimizer. Algorithm PCG exits when it encounters a direction of negative (or zero) curvature, that is, dᵀHd ≤ 0. The PCG output direction p is either a direction of negative curvature or an approximate solution to the Newton system Hp = –g. In either case, p helps to define the two-dimensional subspace used in the trust-region approach discussed in Trust-Region Methods for Nonlinear Minimization.
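
When H is positive definite, MATLAB's pcg function can approximate the Newton direction. A minimal sketch; the preconditioner, tolerance, and iteration cap are illustrative choices, not the solver's internal settings:

    % Approximate Newton direction s2 from H*s2 = -g using PCG
    M  = diag(diag(H));              % simple Jacobi preconditioner
    s2 = pcg(H, -g, 1e-8, 200, M);
    % If a direction d with d'*H*d <= 0 is encountered instead, that direction
    % of negative curvature is used to build the two-dimensional subspace S.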

Linear Equality Constraints

Linear constraints complicate the situation described for unconstrained minimization. However, the underlying ideas described previously can be carried through in a clean and efficient way. The trust-region methods in Optimization Toolbox solvers generate strictly feasible iterates.

The general linear equality constrained minimization problem can be written

$$\min\{\,f(x)\ \text{ such that }\ Ax = b\,\},\tag{10}$$

where A is an m-by-n matrix (m ≤ n). Some Optimization Toolbox solvers preprocess A to remove strict linear dependencies using a technique based on the LU factorization of Aᵀ [46]. Here A is assumed to be of rank m.

The method used to solve Equation 10 differs from the unconstrained approach in two significant ways. First, an initial feasible point x0 is computed, using a sparse least-squares step, so that Ax0 = b. Second, Algorithm PCG is replaced with Reduced Preconditioned Conjugate Gradients (RPCG), see [46], in order to compute an approximate reduced Newton step (or a direction of negative curvature in the null space of A). The key linear algebra step involves solving systems of the form

$$\begin{bmatrix} C & \tilde{A}^T\\ \tilde{A} & 0\end{bmatrix}\begin{bmatrix} s\\ t\end{bmatrix} = \begin{bmatrix} r\\ 0\end{bmatrix},\tag{11}$$

where Ã approximates A (small nonzeros of A are set to zero provided rank is not lost) and C is a sparse symmetric positive-definite approximation to H, i.e., C = H. See [46] for more details.

Box Constraints

The box constrained problem is of the form

$$\min\{\,f(x)\ \text{ such that }\ l \le x \le u\,\},\tag{12}$$

where l is a vector of lower bounds, and u is a vector of upper bounds. Some (or all) of the components of l can be equal to –∞ and some (or all) of the components of u can be equal to ∞. The method generates a sequence of strictly feasible points. Two techniques are used to maintain feasibility while achieving robust convergence behavior. First, a scaled modified Newton step replaces the unconstrained Newton step (to define the two-dimensional subspace S). Second, reflections are used to increase the step size.

The scaled modified Newton step arises from examining the Kuhn-Tucker necessary conditions for Equation 12,

$$\bigl(D(x)\bigr)^{-2} g = 0,\tag{13}$$

where

$$D(x) = \operatorname{diag}\bigl(|v_k|^{-1/2}\bigr),$$

and the vector v(x) is defined below, for each 1 ≤ i ≤ n:

  • If $g_i < 0$ and $u_i < \infty$ then $v_i = x_i - u_i$

  • If $g_i \ge 0$ and $l_i > -\infty$ then $v_i = x_i - l_i$

  • If $g_i < 0$ and $u_i = \infty$ then $v_i = -1$

  • If $g_i \ge 0$ and $l_i = -\infty$ then $v_i = 1$

The nonlinear system Equation 13 is not differentiable everywhere. Nondifferentiability occurs when $v_i = 0$. You can avoid such points by maintaining strict feasibility, i.e., restricting l < x < u.

The scaled modified Newton step $s_k$ for the nonlinear system of equations given by Equation 13 is defined as the solution to the linear system

$$\hat{M} D s^N = -\hat{g}\tag{14}$$

at the kth iteration, where

$$\hat{g} = D^{-1}g = \operatorname{diag}\bigl(|v|^{1/2}\bigr)\,g,\tag{15}$$

and

$$\hat{M} = D^{-1} H D^{-1} + \operatorname{diag}(g)\,J^v.\tag{16}$$

Here $J^v$ plays the role of the Jacobian of |v|. Each diagonal component of the diagonal matrix $J^v$ equals 0, –1, or 1. If all the components of l and u are finite, $J^v = \operatorname{diag}(\operatorname{sign}(g))$. At a point where $g_i = 0$, $v_i$ might not be differentiable. $J^v_{ii} = 0$ is defined at such a point. Nondifferentiability of this type is not a cause for concern because, for such a component, it is not significant which value $v_i$ takes. Further, $|v_i|$ will still be discontinuous at this point, but the function $|v_i|\,g_i$ is continuous.
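
The following sketch assembles v, the scaled gradient, and $J^v$ directly from the definitions above (x, g, l, u come from the problem; this is illustrative, not the solver's code):

    % Scaling vector v, scaled gradient ghat, and Jacobian surrogate Jv
    n = numel(x);  v = ones(n,1);  Jv = zeros(n,1);
    for i = 1:n
        if g(i) < 0 && isfinite(u(i))
            v(i) = x(i) - u(i);   Jv(i) = -1;   % |v_i| = u_i - x_i
        elseif g(i) >= 0 && isfinite(l(i))
            v(i) = x(i) - l(i);   Jv(i) =  1;   % |v_i| = x_i - l_i
        elseif g(i) < 0
            v(i) = -1;                          % u_i = Inf, so Jv(i) stays 0
        end                                     % otherwise v(i) = 1, Jv(i) = 0
    end
    ghat = sqrt(abs(v)).*g;                     % ghat = D^(-1)*g, Equation 15
    Mhat = diag(sqrt(abs(v)))*H*diag(sqrt(abs(v))) + diag(g.*Jv);  % Equation 16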

Second, reflections are used to increase the step size. A (single) reflection step is defined as follows. Given a step p that intersects a bound constraint, consider the first bound constraint crossed by p; assume it is the ith bound constraint (either the ith upper or ith lower bound). Then the reflection step $p^R = p$ except in the ith component, where $p^R_i = -p_i$.

active-set quadprog Algorithm

After completing a presolve step, the active-set algorithm proceeds in two phases.

  • Phase 1 — Obtain a feasible point with respect to all constraints.

  • Phase 2 — Iteratively lower the objective function while maintaining a list of the active constraints and maintaining feasibility in each iteration.

The active-set strategy (also known as a projection method) is similar to the strategy of Gill et al., described in [18] and [17].

Presolve Step

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

  • Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

  • Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

  • Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

  • Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

  • Determine if the bounds and linear constraints are consistent.

  • Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

  • Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For details, see Gould and Toint [63].

Phase 1 Algorithm

In Phase 1, the algorithm attempts to find a point x that satisfies all constraints, with no consideration of the objective function. quadprog runs the Phase 1 algorithm only if the supplied initial point x0 is infeasible.

To begin, the algorithm tries to find a point that is feasible with respect to all equality constraints, such as X = Aeq\beq. If there is no solution x to the equations Aeq*x = beq, then the algorithm halts. If there is a solution X, the next step is to satisfy the bounds and linear inequalities. If there are no equality constraints, set X = x0, the initial point.

Starting from X, the algorithm calculates M = max(A*X - b, X - ub, lb - X). If M <= options.ConstraintTolerance, then the point X is feasible and the Phase 1 algorithm halts.

If M > options.ConstraintTolerance, the algorithm introduces a nonnegative slack variable γ for the auxiliary linear programming problem

$$\min_{x,\gamma}\ \gamma$$

such that

$$\begin{aligned}
Ax - \gamma &\le b\\
A_{eq}x &= b_{eq}\\
lb - \gamma &\le x \le ub + \gamma\\
\gamma &\ge -\rho.
\end{aligned}$$

Here, ρ is the ConstraintTolerance option multiplied by the absolute value of the largest element in A and Aeq. If the algorithm finds γ = 0 and a point x that satisfies the equations and inequalities, then x is a feasible Phase 1 point. If the auxiliary linear programming problem has no solution x with γ = 0, then the Phase 1 problem is infeasible.

To solve the auxiliary linear programming problem, the algorithm sets γ0 = M + 1, sets x0 = X, and initializes the active set as the fixed variables (if any) and all the equality constraints. The algorithm reformulates the linear programming variables p to be the offset of x from the current point x0, namely x = x0 + p. The algorithm solves the linear programming problem by the same iterations as it takes in Phase 2 to solve the quadratic programming problem, with an appropriately modified Hessian.
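
The initial feasibility check described above can be sketched as follows (X, A, b, lb, ub, and the tolerance come from the problem data; this mirrors the description, not the solver's internal code):

    % Phase 1 feasibility check
    X = Aeq\beq;                              % a point satisfying Aeq*X = beq
    M = max([A*X - b; X - ub; lb - X]);       % largest constraint violation
    if M <= options.ConstraintTolerance
        % X is feasible; Phase 1 ends here
    else
        % set up and solve the auxiliary linear program in (x, gamma)
    end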

Phase 2 Algorithm

In terms of a variable d, the problem is

$$\begin{aligned}
\min_{d\in\mathbb{R}^n}\ q(d) &= \tfrac{1}{2}d^T H d + c^T d,\\
A_i d &= b_i,\quad i = 1,\ldots,m_e\\
A_i d &\le b_i,\quad i = m_e+1,\ldots,m.
\end{aligned}\tag{17}$$

Here, Ai refers to the ith row of the m-by-n matrix A.

During Phase 2, the algorithm maintains an active set $\bar{A}_k$, which is an estimate of the active constraints (those on the constraint boundaries) at the solution point.

The algorithm updates $\bar{A}_k$ at each iteration k, forming the basis for a search direction $d_k$. Equality constraints always remain in the active set $\bar{A}_k$. The search direction $d_k$ is calculated so that it minimizes the objective function while remaining on any active constraint boundaries. The algorithm forms the feasible subspace for $d_k$ from a basis $Z_k$ whose columns are orthogonal to the estimate of the active set $\bar{A}_k$ (that is, $\bar{A}_k Z_k = 0$). Therefore, a search direction, which is formed from a linear summation of any combination of the columns of $Z_k$, is guaranteed to remain on the boundaries of the active constraints.

The algorithm forms the matrix $Z_k$ from the last n – l columns of the QR decomposition of the matrix $\bar{A}_k^T$, where l is the number of active constraints and l < n. That is, $Z_k$ is given by

$$Z_k = Q[\,:,\ l+1:n\,],\tag{18}$$

where

$$Q^T\bar{A}_k^T = \begin{bmatrix} R\\ 0\end{bmatrix}.$$

After finding $Z_k$, the algorithm looks for a new search direction $d_k$ that minimizes q(d), where $d_k$ is in the null space of the active constraints. That is, $d_k$ is a linear combination of the columns of $Z_k$: $d_k = Z_k p$ for some vector p.

Viewing the quadratic as a function of p, by substituting for $d_k$, gives

$$q(p) = \tfrac{1}{2}p^T Z_k^T H Z_k p + c^T Z_k p.\tag{19}$$

Differentiating this equation with respect to p yields

$$\nabla q(p) = Z_k^T H Z_k p + Z_k^T c.\tag{20}$$

$\nabla q(p)$ is referred to as the projected gradient of the quadratic function because it is the gradient projected in the subspace defined by $Z_k$. The term $Z_k^T H Z_k$ is called the projected Hessian. Assuming the projected Hessian matrix $Z_k^T H Z_k$ is positive semidefinite, the minimum of the function q(p) in the subspace defined by $Z_k$ occurs when $\nabla q(p) = 0$, which is the solution of the system of linear equations

$$Z_k^T H Z_k p = -Z_k^T c.\tag{21}$$

The algorithm then takes a step of the form

$$x_{k+1} = x_k + \alpha d_k,$$

where

$$d_k = Z_k p.$$

Due to the quadratic nature of the objective function, only two choices of step length α exist at each iteration. A step of unity along $d_k$ is the exact step to the minimum of the function restricted to the null space of $\bar{A}_k$. If the algorithm can take such a step without violating the constraints, then this step is the solution to the quadratic program (Equation 17). Otherwise, the step along $d_k$ to the nearest constraint is less than unity, and the algorithm includes a new constraint in the active set at the next iteration. The distance to the constraint boundaries in any direction $d_k$ is given by

$$\alpha = \min_{i\in\{1,\ldots,m\}}\left\{\frac{-(A_i x_k - b_i)}{A_i d_k}\right\},$$

which is defined for constraints not in the active set, and where the direction $d_k$ is towards the constraint boundary, that is, $A_i d_k > 0$, $i = 1,\ldots,m$.
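
A sketch of one Phase 2 step built from the pieces above. The variable names (Abark for the l-by-n matrix of active constraint rows, xk for the current point) are assumptions for illustration:

    % Null-space step and step length for one Phase 2 iteration
    [Q, ~] = qr(Abark');                    % Abark is l-by-n, so Abark' is n-by-l
    Zk = Q(:, l+1:n);                       % Equation 18: null-space basis
    p  = (Zk'*H*Zk) \ (-Zk'*c);             % Equation 21
    dk = Zk*p;                              % search direction
    idx   = find(A*dk > 0);                 % constraints approached along dk
    alpha = min([1; -(A(idx,:)*xk - b(idx)) ./ (A(idx,:)*dk)]);
    xk = xk + alpha*dk;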

When the active set includes n independent constraints but the minimum has not been located, the algorithm calculates the Lagrange multipliers $\lambda_k$, which satisfy the nonsingular set of linear equations

$$\bar{A}_k^T\lambda_k = c + Hx_k.\tag{22}$$

If all elements of $\lambda_k$ are nonnegative, $x_k$ is the optimal solution of the quadratic programming problem Equation 1. However, if any component of $\lambda_k$ is negative, and the component does not correspond to an equality constraint, then the minimization is not complete. The algorithm deletes the element corresponding to the most negative multiplier and starts a new iteration.

Sometimes, when the solver finishes with all nonnegative Lagrange multipliers, the first-order optimality measure is above the tolerance, or the constraint tolerance is not met. In these cases, the solver attempts to reach a better solution by following the restart procedure described in [1]. In this procedure, the solver discards the current set of active constraints, relaxes the constraints a bit, and constructs a new set of active constraints while attempting to solve the problem in a manner that avoids cycling (repeatedly returning to the same state). If necessary, the solver can perform the restart procedure several times.

Note

Do not try to stop the algorithm early by setting large values of the ConstraintTolerance and OptimalityTolerance options. Generally, the solver iterates without checking these values until it reaches a potential stopping point, and only then checks to see whether the tolerances are satisfied.

Occasionally, the active-set algorithm can have difficulty detecting when a problem is unbounded. This issue can occur if the direction of unboundedness v is a direction where the quadratic term v'Hv = 0. For numerical stability and to enable a Cholesky factorization, the active-set algorithm adds a small, strictly convex term to the quadratic objective. This small term causes the objective function to be bounded away from –Inf. In this case, the active-set algorithm reaches an iteration limit instead of reporting that the solution is unbounded. In other words, the algorithm halts with exit flag 0 instead of –3.

References

[1] Gill, P. E., W. Murray, M. A. Saunders, and M. H. Wright. A practical anti-cycling procedure for linearly constrained optimization. Math. Programming 45 (1), August 1989, pp. 437–474.

Warm Start

When you run the quadprog or lsqlin 'active-set' algorithm with a warm start object as the start point, the solver attempts to skip many of the Phase 1 and Phase 2 steps. The warm start object contains the active set of constraints, and this set can be correct or close to correct for the new problem. Therefore, the solver can avoid iterations to add constraints to the active set. Also, the initial point might be close to the solution for the new problem. For more information, see optimwarmstart.
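
A sketch of typical warm start usage, assuming the documented optimwarmstart and quadprog warm start syntax (the perturbation of f is illustrative):

    % Solve once, then re-solve a slightly modified problem from a warm start
    opts = optimoptions('quadprog','Algorithm','active-set');
    ws   = optimwarmstart(x0, opts);                            % warm start object
    [ws, fval1] = quadprog(H, f, A, b, [], [], lb, ub, ws);     % ws now stores solution state
    f2 = f + 0.01;                                              % small change to the objective
    [ws, fval2] = quadprog(H, f2, A, b, [], [], lb, ub, ws);    % reuses the active set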


