Student[NumericalAnalysis] - Maple Programming Help


Student[NumericalAnalysis]

 LinearSolve
 numerically approximate the solution to a linear system

Calling Sequence

 LinearSolve(A, b, opts)
 LinearSolve(A, opts)

Parameters

 A - Matrix; a square $n\times n$ matrix or an augmented $\left(A|b\right)$ $n\times m$ matrix, where $m=n+1$
 b - (optional) Vector or Matrix; a vector of length $n$ or a matrix of column length $n$
 opts - (optional) equation(s) of the form keyword = value, where keyword is one of initialapprox, maxiterations, method, stoppingcriterion, tolerance; options for numerically approximating the solution to a linear system

Options

 • initialapprox = Vector or Matrix
 The initial approximation. To obtain a float solution instead of an exact solution, the initialapprox should contain floats instead of integers. By default, a zero vector is used.
 • maxiterations = posint
 The maximum number of iterations to perform while approximating the solution to $A·x=b$.  If the maximum number of iterations is reached and the solution is not within the specified tolerance, a plot of distances can still be returned. By default, maxiterations = 20.
 • method = jacobi, gaussseidel, SOR(numeric), LU, LU[tridiagonal], PLU, or PLU[scaled]
 The method to use when solving the linear system $A·x=b$. See the Notes section of the Student[NumericalAnalysis][IterativeApproximate] help page for sufficient conditions for convergence. This option is required. Each method is described below:
 – jacobi : The Jacobi method. Optionally, the stoppingcriterion, maxiterations, initialapprox and tolerance options may be specified as well.
 – gaussseidel : The Gauss-Seidel method. Optionally, the stoppingcriterion, maxiterations, initialapprox and tolerance options may be specified as well.
 – SOR(w) : The Successive Overrelaxation method with w as its extrapolation factor. Optionally, the stoppingcriterion, maxiterations, initialapprox and tolerance options may be specified as well.
 – LU and LU[tridiagonal] : LU Decomposition. This method performs LU factorization on A and then solves the subsequent systems. None of the remaining options are used with this method. An error will be raised if the LU[tridiagonal] method is specified and A is not tridiagonal.
 – PLU and PLU[scaled] :  PLU Decomposition. This method performs PLU factorization on A and then solves the subsequent systems. None of the remaining options are used with this method.
 • stoppingcriterion = distance(norm)
 The stopping criterion for an iterative technique; it is of the form stoppingcriterion=distance(norm), where distance is either relative or absolute and norm is a nonnegative integer, infinity, or Euclidean. By default, stoppingcriterion=relative(infinity).
 • tolerance = positive
 The tolerance of the approximation. By default, a tolerance of $\frac{1}{10000}$ is used.
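The iterative methods and the default settings above can be sketched outside Maple. The following Python function (illustrative names, not Maple's implementation) performs the SOR iteration with the default relative infinity-norm stopping criterion; taking w = 1 reduces it to Gauss-Seidel.

```python
# Illustrative sketch (not Maple's internals) of the SOR iteration with the
# defaults described above: zero initial vector, tolerance 1/10000, at most
# 20 iterations, and stoppingcriterion = relative(infinity).

def sor(A, b, w=1.0, x0=None, tol=1e-4, maxiterations=20):
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n   # default: zero vector
    for _ in range(maxiterations):
        x_old = list(x)
        for i in range(n):
            # use already-updated entries x[j] (Gauss-Seidel sweep), then
            # blend with the previous iterate using the factor w
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x_old[i] + w * (b[i] - s) / A[i][i]
        # relative infinity-norm stopping criterion:
        # ||x_new - x_old||_inf / ||x_new||_inf < tol
        diff = max(abs(x[i] - x_old[i]) for i in range(n))
        if diff / max(abs(v) for v in x) < tol:
            break
    return x
```

Applied to the diagonally dominant $4\times 4$ system in the Examples section, this sketch produces an approximation close to the exact solution $\left(1,2,-1,1\right)$, mirroring the SOR(1.25) call shown there.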

Description

 • The LinearSolve command numerically approximates the solution to the linear system $A·x=b$, using the specified method.
 • The IterativeApproximate command and the MatrixDecomposition command are both used by the LinearSolve command.
 • If b is a matrix, then the system $A·x={b}_{i}$ is solved for each column ${b}_{i}$ of $b$, and a solution vector is returned for each column.
 • Depending on the method, different options must be specified in opts. These dependencies are outlined in the Options section.
 • The Notes section in the Student[NumericalAnalysis][IterativeApproximate] help page lists conditions under which the Jacobi, Gauss-Seidel, and successive over-relaxation iterative methods produce a solution.
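To make the decomposition-based methods concrete, here is a hedged Python sketch (illustrative names, no pivoting, so it assumes no zero pivot arises) of what LU-based solving does: factor $A=L·U$ once, then solve one forward/back-substitution pair per right-hand side, which is why a matrix b with several columns yields several solution vectors.

```python
# Illustrative Doolittle LU factorization (unit lower-triangular L, no
# pivoting) followed by forward and back substitution. Not Maple's
# implementation; assumes every pivot U[i][i] is nonzero.

def lu_factor(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):               # forward substitution: L*y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):     # back substitution: U*x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

With several right-hand sides, the factorization is computed once and `lu_solve` is called per column, so the expensive $O(n^3)$ step is not repeated.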

Examples

 > $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{NumericalAnalysis}\right]\right):$
 > $A≔\mathrm{Matrix}\left(\left[\left[10.,-1.,2.,0.\right],\left[-1.,11.,-1.,3.\right],\left[2.,-1.,10.,-1.\right],\left[0.,3.,-1.,8.\right]\right]\right)$
 ${A}{≔}\left[\begin{array}{cccc}{10.}& {-1.}& {2.}& {0.}\\ {-1.}& {11.}& {-1.}& {3.}\\ {2.}& {-1.}& {10.}& {-1.}\\ {0.}& {3.}& {-1.}& {8.}\end{array}\right]$ (1)
 > $b≔\mathrm{Vector}\left(\left[6.,25.,-11.,15.\right]\right)$
 ${b}{≔}\left[\begin{array}{c}{6.}\\ {25.}\\ {-11.}\\ {15.}\end{array}\right]$ (2)
 > $\mathrm{LinearSolve}\left(A,b,\mathrm{method}=\mathrm{SOR}\left(1.25\right),\mathrm{initialapprox}=\mathrm{Vector}\left(\left[0.,0.,0.,0.\right]\right),\mathrm{maxiterations}=100,\mathrm{tolerance}={10}^{-4}\right)$
 $\left[\begin{array}{c}{0.9999776440}\\ {2.000001578}\\ {-0.9999942334}\\ {0.9999867498}\end{array}\right]$ (3)
 > $\mathrm{LinearSolve}\left(A,b,\mathrm{method}=\mathrm{LU}\right)$
 $\left[\begin{array}{c}{1.000000000}\\ {2.000000000}\\ {-1.000000000}\\ {0.9999999999}\end{array}\right]$ (4)

Try solving multiple systems (but with the same coefficient Matrix)

 > $B≔\mathrm{Matrix}\left(\left[\left[6.,25.,-11.,15.\right],\left[7.,8.,16.,4.\right],\left[4.,2.,9.,5.\right],\left[17.,6.,3.,22.\right]\right]\right)$
 ${B}{≔}\left[\begin{array}{cccc}{6.}& {25.}& {-11.}& {15.}\\ {7.}& {8.}& {16.}& {4.}\\ {4.}& {2.}& {9.}& {5.}\\ {17.}& {6.}& {3.}& {22.}\end{array}\right]$ (5)
 > $\mathrm{LinearSolve}\left(A,B,\mathrm{method}=\mathrm{PLU}\right)$
 $\left[\left[\begin{array}{c}{0.5095334686}\\ {0.1482082486}\\ {0.5264367816}\\ {2.135226505}\end{array}\right]{,}\left[\begin{array}{c}{2.623529412}\\ {0.8352941177}\\ {-0.2000000000}\\ {0.4117647058}\end{array}\right]{,}\left[\begin{array}{c}{-1.210953347}\\ {1.465179175}\\ {1.287356321}\\ {-0.01352265042}\end{array}\right]{,}\left[\begin{array}{c}{1.376064909}\\ {-0.2600405681}\\ {0.4896551724}\\ {2.908722110}\end{array}\right]\right]$ (6)