Optimization - Maple Programming Help


Optimization

 LSSolve
 solve a least-squares problem

 Calling Sequence
 LSSolve(obj, constr, bd, opts)
 LSSolve(opfobj, ineqcon, eqcon, opfbd, opts)

Parameters

 obj - list(algebraic); least-squares residuals
 constr - (optional) set(relation) or list(relation); constraints
 bd - (optional) sequence of name = range; bounds for one or more variables
 opfobj - list(procedure); least-squares residuals
 ineqcon - (optional) set(procedure) or list(procedure); inequality constraints
 eqcon - (optional) set(procedure) or list(procedure); equality constraints
 opfbd - (optional) sequence of ranges; bounds for all variables
 opts - (optional) equation(s) of the form option = value where option is one of assume, feasibilitytolerance, infinitebound, initialpoint, iterationlimit, method, optimalitytolerance, output, or variables; specify options for the LSSolve command

Description

 • The LSSolve command solves a least-squares (LS) problem, which involves computing the minimum of a real-valued objective function having the form $\left(\frac{1}{2}\right)\left({\mathrm{f1}\left(x\right)}^{2}+{\mathrm{f2}\left(x\right)}^{2}+\mathrm{...}+{\mathrm{fq}\left(x\right)}^{2}\right)$ where $x$ is a vector of problem variables. The objective function may be subject to constraints. In general a local minimum is returned; it is guaranteed to be a global minimum only when the problem is convex. The algorithms used by LSSolve assume the residuals $\mathrm{fi}\left(x\right)$ and the constraints are twice continuously differentiable, though they will sometimes succeed even if this condition is not met. The number of residuals, $q$, must be greater than or equal to the number of problem variables.
 • This help page describes the use of the LSSolve command when the LS problem is specified in algebraic or operator form.  Summaries of these forms are given in the Optimization/AlgebraicForm and Optimization/OperatorForm help pages. LSSolve also recognizes problems in Matrix form (see the LSSolve (Matrix Form) help page). Matrix form leads to more efficient computation, but is more complex.
 • The first calling sequence uses the algebraic form of input.  The first parameter obj is the objective function, which must be a list of algebraic expressions $[\mathrm{f1},\mathrm{f2},...,\mathrm{fq}]$.
 The second parameter constr is optional and is a set or list of relations (of type <= or =) involving the problem variables. The problem variables are the indeterminates of type name found in obj and constr. They can also be specified using the variables option.
 Bounds, bd, on one or more of the variables are given as additional arguments, each of the form varname = varrange where varname is a variable name and varrange is its range.
 • The second calling sequence uses the operator form of input.  The objective function opfobj must be a list of procedures each taking $n$ floating-point parameters representing the problem variables $\mathrm{x1}$, $\mathrm{x2}$, ..., $\mathrm{xn}$, and returning a float.  These procedures compute the least-squares residuals: $\mathrm{f1}$, $\mathrm{f2}$, ..., $\mathrm{fq}$.
 Inequality and equality constraints are provided using the optional ineqcon and eqcon parameters. An inequality constraint $v\left(\mathrm{x1},\mathrm{x2},\mathrm{...},\mathrm{xn}\right)\le 0$ is specified by a procedure v in ineqcon that has the same form as opfobj and returns the left-hand side value of the constraint. Similarly, an equality constraint $w\left(\mathrm{x1},\mathrm{x2},\mathrm{...},\mathrm{xn}\right)=0$ is specified by a procedure w in eqcon. Either ineqcon or eqcon may be empty.
 Bounds, opfbd, on the variables are optional, but if given, must be a sequence of exactly $n$ ranges corresponding in order to $\mathrm{x1}$, $\mathrm{x2}$, ..., $\mathrm{xn}$.
 • For either form of input, non-negativity of the variables is not assumed by default, but can be specified using the assume = nonnegative option.  Bounds can include values having type infinity.
 • Maple returns the solution as a list containing the final minimum value and the point at which it is attained (the extremum).  If the output = solutionmodule option is provided, then a module is returned.  See the Optimization/Solution help page for more information.
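As an illustration of the operator form described above, the following sketch (written in Python purely as a language-neutral illustration, not as part of the Optimization package) mimics the structure LSSolve expects: each residual is a procedure taking $n$ floating-point arguments and returning a float, and the objective is half the sum of squared residuals. The name ls_objective and the sample residuals are hypothetical.

```python
# Illustration only: the objective LSSolve minimizes is
#   (1/2) * (f1(x)^2 + f2(x)^2 + ... + fq(x)^2),
# with each residual given, in operator form, as a procedure of the
# problem variables that returns a float.

def ls_objective(residuals, *x):
    """Half the sum of squared residuals, as in LSSolve's objective."""
    return 0.5 * sum(f(*x) ** 2 for f in residuals)

# Two variables, two residuals: f1 = x1 - 1, f2 = x2 - 2.
# The unconstrained minimum is at (x1, x2) = (1, 2) with value 0.
residuals = [lambda x1, x2: x1 - 1.0, lambda x1, x2: x2 - 2.0]

print(ls_objective(residuals, 1.0, 2.0))  # 0.0 at the minimum
print(ls_objective(residuals, 0.0, 0.0))  # 0.5 * (1 + 4) = 2.5
```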

Options

 The opts argument can contain one or more of the following options. These options are described in more detail in the Optimization/Options help page.
 • assume = nonnegative -- Assume that all variables are non-negative.
 • feasibilitytolerance = realcons(positive) -- Set the maximum absolute allowable constraint violation.
 • infinitebound = realcons(positive) -- Set any value of a variable greater than the infinitebound value to be equivalent to infinity during the computation.
 • initialpoint = set(equation), list(equation), or list(numeric) --  Use the provided initial point, which is a set or list of equations $\mathrm{varname}=\mathrm{value}$ (for algebraic form input) or a list of exactly $n$ values (for operator form input).
 • iterationlimit = posint --  Set the maximum number of iterations performed by the algorithm.
 • method = modifiednewton or sqp -- Specify the method.  See the Optimization/Methods help page for more information.
 • optimalitytolerance = realcons(positive) -- Set the tolerance that determines whether an optimal point has been found. This option is not available when the problem is linear.
 • output = solutionmodule -- Return a module as described in the Optimization/Solution help page.
 • variables = list(name) or set(name) -- Specify the problem variables when the objective function is in algebraic form.

Notes

 • The LSSolve command uses various methods implemented in a built-in library provided by the Numerical Algorithms Group (NAG). See the Optimization/Methods help page for more details. The solvers are iterative and require an initial point.  The quality of the solution can depend greatly on the point chosen, particularly for nonlinear problems, so it is recommended that you provide one using the initialpoint option. Otherwise, a point is generated automatically.
 • The computation is performed in floating-point. Therefore, all data provided must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values. Because the solver fails when a complex value is encountered, it is sometimes necessary to add additional constraints to ensure that the objective function and constraints always evaluate to real values. For more information about numeric computation in the Optimization package, see the Optimization/Computation help page.
 • For some methods of solving nonlinear problems, the computation is more efficient when derivatives of the objective function and constraints are available. LSSolve attempts to compute these derivatives automatically, but better performance can be achieved by directly providing derivatives using the Matrix form calling sequence described in the LSSolve (Matrix Form) help page. For information on the methods that use derivatives, see the Optimization/Methods help page.
 • Although the assume = nonnegative option is accepted, general assumptions are not supported by commands in the Optimization package.
 • An answer is returned when necessary first-order conditions for optimality have been met and the iterates have converged.  If the initial point already satisfies the conditions, then a warning is issued.  Generally, the result is a local extremum but it is possible for the solver to return a saddle point.  It is recommended that you try different initial points with each problem to verify that the solution is indeed an extremum.
 Occasionally the solver will return a solution even if the iterates have not converged but the point satisfies the first-order conditions.  Setting infolevel[Optimization] to 1 or higher will produce a message indicating this situation if it occurs.
 • If LSSolve returns an error saying that no solution could be found, it is recommended that you try a different initial point or use tolerance parameters that are less restrictive.
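The notes above can be made concrete with a plain Gauss-Newton iteration, a standard textbook method for nonlinear least squares (not necessarily the NAG algorithm LSSolve actually applies). The Python sketch below, using the residuals from the second example in the Examples section, shows the iterative character of such solvers, the role of the initial point, and the first-order condition (gradient near zero) that an answer must satisfy.

```python
# Illustration only: a plain Gauss-Newton iteration for a one-variable
# nonlinear least-squares problem.  This is a generic textbook scheme,
# not the NAG implementation used by LSSolve.
# Residuals: x^3 - 2, x^2 - 6, x^2 - 9.

def residuals(x):
    return [x**3 - 2.0, x**2 - 6.0, x**2 - 9.0]

def jacobian(x):                       # derivative of each residual
    return [3.0 * x**2, 2.0 * x, 2.0 * x]

def gauss_newton(x, steps=50):
    for _ in range(steps):
        f, J = residuals(x), jacobian(x)
        g = sum(Ji * fi for Ji, fi in zip(J, f))  # gradient of (1/2)*sum fi^2
        h = sum(Ji * Ji for Ji in J)              # Gauss-Newton Hessian model
        x -= g / h                                # one Gauss-Newton step
    return x

x = gauss_newton(1.0)     # initial point x = 1, as in the example below
grad = sum(Ji * fi for Ji, fi in zip(jacobian(x), residuals(x)))
print(x)                  # close to the 1.75156... reported by LSSolve
print(abs(grad))          # first-order condition: gradient (near) zero
```

Starting from a different initial point can send such an iteration toward a different local minimum, which is why trying several initial points is recommended.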

Examples

 > $\mathrm{with}\left(\mathrm{Optimization}\right):$

Solve an LS problem with three linear residuals.  The objective function in this case is $\left(\frac{1}{2}\right)\left({\left(x-2\right)}^{2}+{\left(x-6\right)}^{2}+{\left(x-9\right)}^{2}\right)$.

 > $\mathrm{LSSolve}\left(\left[x-2,x-6,x-9\right]\right)$
 $\left[{12.3333333333333321}{,}\left[{x}{=}{5.66666666666667}\right]\right]$ (1)
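For this linear problem the result can be checked by hand: the objective $\left(\frac{1}{2}\right)\left({\left(x-2\right)}^{2}+{\left(x-6\right)}^{2}+{\left(x-9\right)}^{2}\right)$ is minimized at the mean of 2, 6, and 9. A quick cross-check (in Python, outside Maple, for illustration):

```python
# Cross-check of result (1): for residuals x - a_i, the objective
# (1/2) * sum (x - a_i)^2 is minimized at the mean of the a_i.
a = [2.0, 6.0, 9.0]
x = sum(a) / len(a)                        # 17/3 = 5.6666...
value = 0.5 * sum((x - ai) ** 2 for ai in a)
print(x, value)                            # 5.666..., 12.333... as above
```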

Solve an LS problem with nonlinear residuals.  Provide an initial point using the initialpoint option.

 > $\mathrm{LSSolve}\left(\left[{x}^{3}-2,{x}^{2}-6,{x}^{2}-9\right],\mathrm{initialpoint}=\left\{x=1\right\}\right)$
 $\left[{27.5839512531713}{,}\left[{x}{=}{1.75156454919679}\right]\right]$ (2)

Solve a linearly constrained linear LS problem.

 > $\mathrm{LSSolve}\left(\left[x-1,y-1,z-1\right],\left\{6x+3y\le 1,x\le 0\right\},\mathrm{initialpoint}=\left\{x=-1,y=1\right\}\right)$
 $\left[{0.711111111111111138}{,}\left[{x}{=}{-}{0.0666666666666667}{,}{y}{=}{0.466666666666667}{,}{z}{=}{1.}\right]\right]$ (3)
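Result (3) can be verified geometrically: $z$ is unconstrained, so $z=1$; the unconstrained optimum $\left(x,y\right)=\left(1,1\right)$ violates $6x+3y\le 1$, so that constraint is active and the minimizer is the orthogonal projection of $\left(1,1\right)$ onto the line $6x+3y=1$ (the bound $x\le 0$ then holds automatically). A Python sketch of that check (illustration only):

```python
# Verify result (3): project the unconstrained optimum (1, 1) onto the
# active constraint 6x + 3y = 1.
px, py = 1.0, 1.0           # unconstrained optimum of (x-1)^2 + (y-1)^2
ax, ay, b = 6.0, 3.0, 1.0   # active constraint: 6x + 3y = 1
t = (ax * px + ay * py - b) / (ax**2 + ay**2)
x, y = px - t * ax, py - t * ay               # (-1/15, 7/15)
value = 0.5 * ((x - 1)**2 + (y - 1)**2 + 0.0)  # z = 1 contributes nothing
print(x, y, value)   # matches LSSolve's -0.0667, 0.4667, 0.7111...
```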

Solve an LS problem with a nonlinear constraint.

 > $\mathrm{LSSolve}\left(\left[x-1\right],\left\{{\left(x+1\right)}^{2}\le 0\right\}\right)$
 $\left[{1.99998465585440166}{,}\left[{x}{=}{-}{0.999992327912486}\right]\right]$ (4)
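In result (4) the constraint ${\left(x+1\right)}^{2}\le 0$ is satisfied exactly only at $x=-1$, so the problem reduces to evaluating the objective there; LSSolve returns a point within the feasibility tolerance of $-1$, which is why the reported value is slightly below 2. A quick check (illustrative Python):

```python
# The only point satisfying (x + 1)^2 <= 0 exactly is x = -1, so the
# minimum of (1/2)*(x - 1)^2 subject to that constraint is attained there.
x = -1.0
print(0.5 * (x - 1.0) ** 2)  # 2.0; LSSolve reports 1.99998... because the
                             # returned x meets the constraint only to
                             # within the feasibility tolerance
```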