Numerics - Maple Help

New Numerics in Maple 7

 Maple 7 includes a number of changes to the numeric functionality. For information about basic numerics in Maple, see numerics.

Floating-Point Numbers in Maple

 • The default action on any of the numerical events overflow or underflow has been modified in the case of exact rational computation. Rather than returning the default value of undefined, these events now signal the event (i.e., return an error). (In the floating-point computation environment, the default behavior remains that the default value is returned and the corresponding status flag is raised.) To modify this behavior, define a handler for the event. For example, after $\mathrm{NumericEventHandler}\left(\mathrm{overflow}=\mathbf{proc}\left(\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathrm{args}[3]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{end proc}\right)$, any overflow event will return its default value rather than signal the event.  See NumericEventHandler for more details.
 • Frequently, certain computations in mathematics are ill-defined, yet a value is customarily assigned for those computations.  A typical example is the computation ${0}^{0}$.  There is no mathematically valid value for this, but it is often taken to be $1$.  To accommodate such conventions, Maple uses the numeric event invalid_operation, setting the default value for this event accordingly.  So, for example, ${0}^{0}$ is computed as NumericEvent(invalid_operation, 1). The default action on signaling of this event is to return the default value (in this case, $1$).  If you wish to have Maple behave differently with respect to this computation, then install a handler for this event.
 This approach is used to deal with several similar computational conventions.  For example, Maple computes $\mathrm{pochhammer}\left(0,0\right)$ as NumericEvent(invalid_operation, 1).
 Note, however, that Maple distinguishes the exact rational computation domain from the floating-point computation domain with regard to its treatment of some conventions.  For example, the value returned by ${x}^{y}$ when both $x=0$ and $y=0$ but at least one of them is a floating-point number is NumericEvent(invalid_operation, Float(undefined)).  That is, the default value for this event is $1$ in the exact rational case, but Float(undefined) in the floating-point case.
 You can determine if a convention has been implemented in this way by checking the status flag for the invalid_operation event. See NumericStatus for more details.
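 These conventions can be observed directly in a session. The following sketch indicates the expected results in comments, using the status-flag query described above:
 > 0^0;                               # 1, the default value in the exact rational case
 > NumericStatus(invalid_operation);  # true: the convention raised the flag
 > 0.^0;                              # Float(undefined) in the floating-point case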
 • The semantics of undefined have been changed slightly for this release.  The change was motivated by a problem of infinite looping. For example, consider this code snippet: $\mathrm{while}x<>\mathrm{x0}\mathrm{do}\mathrm{x0}:=x;x:=\mathrm{...};\mathrm{od};$.  This sort of code occurs frequently in Maple routines, and the intent is clearly to have the loop exit once the value of $x$ is not changed by the loop body.  However, if it should ever happen that $x$ is assigned the value undefined in this loop body, the loop will likely never terminate.  This is because the Maple 6 semantics of undefined were that the test undefined = undefined always returned false.
 To deal with this situation, the semantics of undefined have been changed so that in boolean tests, symbolic undefineds test as equal to each other, and floating-point undefineds compare as equal to each other.  Note, however, that undefined = Float(undefined) is always false.
 The semantics of arithmetic involving undefined have not been changed.
 Note: Because the boolean test $x=\mathrm{x0}$ is not the same as the test $x-\mathrm{x0}=0$, if $x$ or $\mathrm{x0}$ is some form of undefined, it is important to write loop termination conditions using the first form.
 It is important to note that this change to the semantics of undefined means that the IEEE 754 suggested test for "NaN" ("not a number", which Maple implements as undefined), namely, that "x = x" should return false, does not work.  Rather, to test for an undefined object in Maple, you must use a type test, e.g., type(x, undefined), which will return true for any expression $x$ which is an undefined object, or $\mathrm{type}\left(x,\mathrm{NumericClass}\left(\mathrm{undefined}\right)\right)$, which will return true if $x$ is an undefined object in the exact rational environment.
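 For example, the following tests illustrate the new semantics (expected results in comments):
 > evalb(undefined = undefined);         # true: symbolic undefineds test as equal
 > evalb(undefined = Float(undefined));  # false: symbolic and floating-point undefineds differ
 > type(Float(undefined), undefined);    # true: the recommended test for an undefined object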
 • Improvements have been made to Maple's handling of extreme arithmetic, i.e., arithmetic involving values very near the boundaries of its numeric systems, principally of its floating-point systems. While previously the values returned by Maple_floats(MAX_FLOAT) and Maple_floats(MIN_FLOAT) were considered merely "safe" rather than "sharp", these values now firmly delineate the boundaries of Maple's floating-point computation domain. All arithmetic operations in Maple should now either return a number or signal a numeric event (possibly both).
 • The $\mathrm{UseHardwareFloats}$ flag has been extended to be 3-valued.  It can now take any of the values true, false, or deduced.  The value deduced tells Maple to deduce the computation environment (hardware or software) from the current setting of the Digits environment variable: if $\mathrm{Digits}\le \mathrm{evalhf}\left(\mathrm{Digits}\right)$ then hardware float computation is performed; otherwise, the computation is performed in software.  The default value is deduced.
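 For example, with the default setting the computation environment follows Digits (the value of $\mathrm{evalhf}\left(\mathrm{Digits}\right)$ is platform dependent, typically about 15):
 > UseHardwareFloats;   # deduced, the default
 > Digits := 10:        # Digits <= evalhf(Digits): hardware float computation is used
 > Digits := 50:        # Digits > evalhf(Digits): software float computation is used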

Numerical Solution of ODE

 Significant additions and improvements have been made to Maple's capabilities for numerical ODE solution.

 • Improved Methods
 The default methods for numerical solution of IVPs have been replaced by two new methods written by L. Shampine and R. Corless. The new rkf45 method replaces the prior rkf45 code for solution of non-stiff IVPs, and the new rosenbrock method is the default method for solution of stiff IVPs.
 • Stopping Conditions
 A new option, stop_cond, has been provided for the default methods (rkf45 and rosenbrock). This allows specification of additional algebraic constraints, giving an overdetermined differential algebraic equation (DAE) problem; subsequent failure to satisfy the additional constraints halts the computation. Examples of such halting conditions include a reactant concentration becoming negative, or a projectile passing a given boundary in a targeting problem.
 The following example is a projectile targeting problem over rolling (sinusoidal) hills:
 > sys:={diff(x(t),t,t)=0,diff(y(t),t,t)+9.8=0};
 ${\mathrm{sys}}{≔}\left\{\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{t}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){+}{9.8}{=}{0}{,}\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{t}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{x}{}\left({t}\right){=}{0}\right\}$ (1)
 > stopcond:= [y(t)-8*sin(x(t)),x(t)-100];
 ${\mathrm{stopcond}}{≔}\left[{y}{}\left({t}\right){-}{8}{}{\mathrm{sin}}{}\left({x}{}\left({t}\right)\right){,}{x}{}\left({t}\right){-}{100}\right]$ (2)
 > IC1 := {x(0)=0,y(0)=10,D(x)(0)=10,D(y)(0)=40};
 ${\mathrm{IC1}}{≔}\left\{{x}{}\left({0}\right){=}{0}{,}{y}{}\left({0}\right){=}{10}{,}{\mathrm{D}}{}\left({x}\right){}\left({0}\right){=}{10}{,}{\mathrm{D}}{}\left({y}\right){}\left({0}\right){=}{40}\right\}$ (3)
 > ans:=dsolve(sys union IC1, 'numeric', 'stop_cond'=stopcond);
 ${\mathrm{ans}}{:=}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (4)
 > ans(100);
 $\left[{t}{=}{8.25697905978775}{,}{x}{}\left({t}\right){=}{82.5697905978775}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{x}{}\left({t}\right){=}{10.}{,}{y}{}\left({t}\right){=}{6.20841674202050}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){=}{-40.9183947859199}\right]$ (5)
 > IC2 := {x(0)=0,y(0)=10,D(x)(0)=10,D(y)(0)=80};
 ${\mathrm{IC2}}{≔}\left\{{x}{}\left({0}\right){=}{0}{,}{y}{}\left({0}\right){=}{10}{,}{\mathrm{D}}{}\left({x}\right){}\left({0}\right){=}{10}{,}{\mathrm{D}}{}\left({y}\right){}\left({0}\right){=}{80}\right\}$ (6)
 > ans:=dsolve(sys union IC2, 'numeric', 'stop_cond'=stopcond);
 ${\mathrm{ans}}{:=}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (7)
 > ans(100);
 $\left[{t}{=}{10.0000000000000}{,}{x}{}\left({t}\right){=}{100.000000000000}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{x}{}\left({t}\right){=}{10.}{,}{y}{}\left({t}\right){=}{320.000000000000}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){=}{-18.0000000000000}\right]$ (8)
 • Boundary Value Problems
 Maple can numerically solve boundary value problems with the new bvp solver. This solver is designed for non-stiff to mildly stiff BVP and uses grid adaptation and convergence acceleration to obtain the solution. It is also capable of producing high-accuracy solutions using arbitrary precision floating-point arithmetic, and has a method for removable end-point singularities as well as a faster method for problems without singularities.
 > dsol := dsolve({diff(y(x),x,x)-2*y(x)=0, y(0)=1.2, y(1)=0.9}, 'numeric');
 ${\mathrm{dsol}}{:=}{\mathbf{proc}}\left({\mathrm{x_bvp}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (9)
 > dsol(0);
 $\left[{x}{=}{0.}{,}{y}{}\left({x}\right){=}{1.20000000000000}{,}\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){=}{-1.25251895272792}\right]$ (10)
 > dsol(0.2);
 $\left[{x}{=}{0.2}{,}{y}{}\left({x}\right){=}{0.994463627112648}{,}\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){=}{-0.816528958014159}\right]$ (11)
 > dsol(0.5);
 $\left[{x}{=}{0.5}{,}{y}{}\left({x}\right){=}{0.832942089176703}{,}\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){=}{-0.276385195570455}\right]$ (12)
 > dsol(1);
 $\left[{x}{=}{1.}{,}{y}{}\left({x}\right){=}{0.900000000000000}{,}\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){=}{0.555701111490340}\right]$ (13)
 High-accuracy solution
 > Digits:=30:
 > sys := {diff(y(x),x,x)-2*y(x)=0, y(0)=1.2, y(1)=0.9};
 ${\mathrm{sys}}{≔}\left\{\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right){-}{2}{}{y}{}\left({x}\right){=}{0}{,}{y}{}\left({0}\right){=}{1.2}{,}{y}{}\left({1}\right){=}{0.9}\right\}$ (14)
 > exact:=dsolve(sys);
 ${\mathrm{exact}}{≔}{y}{}\left({x}\right){=}{-}\frac{{3}{}\left({4}{}{{ⅇ}}^{{-}\sqrt{{2}}}{-}{3}\right){}{{ⅇ}}^{\sqrt{{2}}{}{x}}}{{10}{}\left({-}{{ⅇ}}^{{-}\sqrt{{2}}}{+}{{ⅇ}}^{\sqrt{{2}}}\right)}{+}\frac{{3}{}\left({4}{}{{ⅇ}}^{\sqrt{{2}}}{-}{3}\right){}{{ⅇ}}^{{-}\sqrt{{2}}{}{x}}}{{10}{}\left({-}{{ⅇ}}^{{-}\sqrt{{2}}}{+}{{ⅇ}}^{\sqrt{{2}}}\right)}$ (15)
 > v1:=evalf(eval(rhs(exact),x=0.5));
 ${\mathrm{v1}}{≔}{0.832942090833706259044696272732}$ (16)
 > dsol := dsolve(sys, 'numeric', 'abserr'=1e-20);
 ${\mathrm{dsol}}{:=}{\mathbf{proc}}\left({\mathrm{x_bvp}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (17)
 > v2:=eval(y(x),dsol(0.5));
 ${\mathrm{v2}}{≔}{0.832942090833706259045253674662}$ (18)
 > abs(v1-v2);
 ${5.57401930}{×}{{10}}^{{-22}}$ (19)
 > Digits:=10:

Improvements

 • Speed
 The default methods, rkf45 for non-stiff problems and rosenbrock for stiff problems, accept an option, $\mathrm{range}=a..b$, allowing precomputation of the ODE solution over that interval, so that interpolation can be used for plotting and for obtaining solution values between steps. This results in faster plotting, higher-quality solution plots, and better solutions, and removes the prior dependence of the solution on the resolution of the plot. The solution procedures for these methods store their computations, so if the numerical solution of an IVP has been calculated for $t=0..100$, any later request for solution values in that range is obtained through interpolation of the already computed solution.
 In addition, these methods can run in hardware mode, where the bulk of the computation is performed by compiled code and evaluation of the ODE system uses evalhf. This results in significant speed improvements for problems that are evalhf-compatible.
 For example, the following long-time integration of the Lotka-Volterra model is computed in one quarter of the time required by prior releases:
 > dsys := {diff(x(t),t)=x(t)*(1-y(t)), diff(y(t),t)=.3*y(t)*(x(t)-1), x(0)=1.0, y(0)=0.5};
 ${\mathrm{dsys}}{≔}\left\{\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{x}{}\left({t}\right){=}{x}{}\left({t}\right){}\left({1}{-}{y}{}\left({t}\right)\right){,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){=}{0.3}{}{y}{}\left({t}\right){}\left({x}{}\left({t}\right){-}{1}\right){,}{x}{}\left({0}\right){=}{1.0}{,}{y}{}\left({0}\right){=}{0.5}\right\}$ (20)
 > dsol := dsolve(dsys, 'numeric', 'maxfun'=0, 'interpolate'=false);
 ${\mathrm{dsol}}{:=}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (21)
 > dsol(2000);
 $\left[{t}{=}{2000.}{,}{x}{}\left({t}\right){=}{0.356896878813509}{,}{y}{}\left({t}\right){=}{0.657193538022169}\right]$ (22)
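 The range option can be combined with the system above to precompute the solution over an interval; subsequent requests in that range are then served by interpolation of the stored solution. A sketch (odeplot is in the plots package):
 > dsol := dsolve(dsys, 'numeric', 'range'=0..100):
 > dsol(50);    # obtained from the stored solution by interpolation
 > plots[odeplot](dsol, [t, x(t)], 0..100);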
 • Consistency and Ease of Use
 The dsolve/numeric methods have been made more uniform and easier to use.
 Nearly all dsolve/numeric methods for IVP understand the $\mathrm{abserr}$ and $\mathrm{relerr}$ options for step error control, the $\mathrm{maxfun}$ option for limiting the amount of work performed by the routine, and the $\mathrm{minstep}$, $\mathrm{maxstep}$, and $\mathrm{initstep}$ options for step size control.
 The interface has also been simplified, no longer requiring the names of the dependent variables of the problem (which can be determined from the input), and allowing for specification of the use of a stiff method with the option stiff=true.
 > deq  := diff(y(t),t,t) + 1001*diff(y(t),t) + 1000*y(t)=0:
 > ic   := y(0)=1, D(y)(0)=-1:
 > dsol := dsolve({deq,ic}, 'numeric');
 ${\mathrm{dsol}}{:=}{\mathbf{proc}}\left({\mathrm{x_rkf45}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (23)
 > dsol(50);
 > dsol := dsolve({deq,ic}, 'numeric', 'stiff'=true);
 ${\mathrm{dsol}}{:=}{\mathbf{proc}}\left({\mathrm{x_rosenbrock}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (24)
 > dsol(50);
 $\left[{t}{=}{50.}{,}{y}{}\left({t}\right){=}{9.37362189971127}{×}{{10}}^{{-10}}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){=}{-9.37362189971127}{×}{{10}}^{{-10}}\right]$ (25)
 The $\mathrm{mgear}$ method has been removed. The $\mathrm{rosenbrock}$ or $\mathrm{lsode}$ methods should be used instead.
 The value=array option is deprecated, as the same functionality is now available as part of the output option.
 • Precision
 The rkf45, rosenbrock, and dverk78 methods can now compute solutions in arbitrary precision Maple floating-point (based on Digits). In prior releases, the taylorseries and gear methods were the only methods capable of arbitrary-precision solutions.
 • Miscellaneous
 The current IVP methods (with the exception of taylorseries) can now handle complex-valued problems.
 > dsol := dsolve({diff(y(t),t) + I*y(t) = 0, y(0)=I}, 'numeric'):
 > dsol(Pi/4);
 $\left[{t}{=}{0.785398163397448}{,}{y}{}\left({t}\right){=}{0.707106875083858}{+}{0.707106902685889}{}{I}\right]$ (26)
 > dsol := dsolve({diff(y(t),t) + I*y(t) = 0, y(0)=I}, 'numeric', 'method'='rosenbrock'):
 > dsol(Pi/4);
 $\left[{t}{=}{0.785398163397448}{,}{y}{}\left({t}\right){=}{0.707106735017522}{+}{0.707106826931253}{}{I}\right]$ (27)
 > dsol := dsolve({diff(y(t),t) + I*y(t) = 0, y(0)=I}, 'numeric', 'method'='gear'):
 > dsol(Pi/4);
 $\left[{t}{=}{0.785398163397448}{,}{y}{}\left({t}\right){=}{0.707106781179390}{+}{0.707106781190284}{}{I}\right]$ (28)
 > dsol := dsolve({diff(y(t),t) + I*y(t) = 0, y(0)=I}, 'numeric', 'method'='dverk78'):
 > dsol(Pi/4);
 $\left[{t}{=}{0.785398163397448}{,}{y}{}\left({t}\right){=}{0.707106781184395}{+}{0.707106781187889}{}{I}\right]$ (29)
 > dsol := dsolve({diff(y(t),t) + I*y(t) = 0, y(0)=I}, 'numeric', 'method'='lsode'):
 > dsol(Pi/4);
 $\left[{t}{=}{0.785398163397448}{,}{y}{}\left({t}\right){=}{0.707107828482804}{+}{0.707106783632396}{}{I}\right]$ (30)
 The solution outputs when using the $\mathrm{output}=\mathrm{listprocedure}$ option are now more flexible, and can be used directly with other Maple commands, such as fsolve and plot.
 > dsol := dsolve({diff(y(t),t,t)+y(t)=0, y(0)=1, D(y)(0)=0}, 'numeric', 'abserr'=1e-9, 'relerr'=1e-9, 'output'=listprocedure);
 ${\mathrm{dsol}}{≔}\left[{t}{=}{\mathbf{proc}}\left({t}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}{,}{y}{}\left({t}\right){=}{\mathbf{proc}}\left({t}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}{,}\frac{{ⅆ}}{{ⅆ}{t}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({t}\right){=}{\mathbf{proc}}\left({t}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}\right]$ (31)
 > ysol := eval(y(t),dsol);
 ${\mathrm{ysol}}{:=}{\mathbf{proc}}\left({t}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{...}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (32)
 > fsolve(ysol(t)=0,t,1..2);
 ${1.570796327}$ (33)
 > evalf(Pi/2);
 ${1.570796327}$ (34)
 A long-standing bug relating to integration of an IVP on both sides of the initial condition has been fixed for all methods.

Numerical Integration

 Significant improvements in speed and functionality have been achieved through improvements to both code and methodology, and through the use of NAG routines for one-dimensional quadrature.

User Interface Changes

 The most common forms for invoking numerical integration remain unchanged, namely
 • evalf(Int(expr, x=a..b))
 • evalf/int(expr, x=a..b)
 • evalf(int(expr, x=a..b))
 where the last form invokes numerical integration only if the invocation of symbolic integration returns an unevaluated integral. The above forms have a variation for the case where the first argument (the integrand) is specified as a procedure or a Maple operator: in that case, the second argument must be the range a..b, not an equation.
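 For example, the operator form supplies the range alone as the second argument:
 > evalf(Int(x -> 1/(1+x^2), 0..1));   # integrand as an operator; range, not an equation
 The result is $\mathrm{arctan}\left(1\right)$, approximately $0.7853981634$.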
 The syntax for specifying additional options has been modified in Maple 7 to accept equation forms. (For backward compatibility, some options are accepted as values rather than equations, as specified below.) One or more of the following options may be specified as additional arguments:

 method = mname    or    digits = n    or    epsilon = eps

 The specification method = mname (or simply mname) indicates a particular numerical integration method to be applied. For the list of methods which may be specified, see evalf,int. By default, a hybrid symbolic-numeric strategy is applied.
 The specification digits = n (or simply n) indicates the number of digits of precision for the computation. Some additional guard digits are carried during the computation to attempt to achieve a result with n correct digits (although a larger tolerance can be specified by using the 'epsilon' option). By default, the Maple environment variable Digits specifies the precision for the computation.
 The specification epsilon = eps specifies the relative error tolerance for the computed result. The routines attempt to achieve a final result with a relative error less than this value. By default, the relative error tolerance which the routines attempt to achieve for the final result is

 eps = 0.5 * 10^(1-digits)

 where digits is the precision specified for the computation. In attempting to achieve this accuracy, the working value of Digits is increased as deemed necessary. It is an error to specify 'epsilon' smaller than the default value above.
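 For example, a tolerance looser than the Digits-based default can be requested:
 > evalf(Int(exp(-x^2), x=0..1, 'epsilon'=1e-5));   # relative error tolerance of 10^(-5)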

Incorporation of NAG Routines

 In Maple 7, compiled routines from the NAG (Numerical Algorithms Group) subroutine library are automatically invoked to perform quadrature (numerical integration) tasks, when appropriate. This results in significant speed improvements compared with earlier versions of Maple. (See below for some timing information.)
 In the default case (no particular method specified), the Maple integration problem is first passed to NAG integration routines if Digits is not too large (i.e., if $\mathrm{Digits}\le \mathrm{evalhf}\left(\mathrm{Digits}\right)$). The NAG routines are in a compiled C library and hence operate at hardware floating-point speed. If the NAG routines cannot perform the integration, then some singularity handling may be performed and control may pass back to the NAG routines with a modified problem. Native Maple routines are invoked if the NAG routines cannot solve the problem (e.g., for larger values of Digits and for integrands involving functions unknown to NAG).
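 A specific method can also be requested explicitly. For example (a sketch, assuming the method name _d01ajc corresponds to the NAG routine d01ajc shown in the trace output on this page; see evalf,int for the list of valid method names):
 > evalf(Int(sin(x)/x, x=0..1, 'method'=_d01ajc));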

Functionality Improvements

 Various modifications in the hybrid symbolic-numeric solution strategy for integrands with singularities have been implemented, driven by a study of cases which were not handled (or were inefficiently handled) in previous versions. Thus a wider class of integrands can now be integrated successfully.
 For example, integrands containing various nonanalytic functions are treated by breaking the interval of integration at the points at which the function is not analytic and by separating real and imaginary parts of the integrand. The following nonanalytic functions known to Maple are treated: piecewise, Dirac, Heaviside, abs, signum, csgn, trunc, frac, floor, ceil, round, ilog10, argument.
 The following two examples illustrate this. By setting ${\mathrm{infolevel}}_{\mathrm{evalf/int}}≔2$, one can see how the interval of integration is split at each potential singularity of the integrand $\mathrm{f1}$, and how the integrand $\mathrm{f2}$ is split into its real and imaginary parts.
 > infolevel[evalf/int] := 2:
 > f1 := ceil(cos(x^2-1));
 ${\mathrm{f1}}{≔}⌈{\mathrm{cos}}{}\left({{x}}^{{2}}{-}{1}\right)⌉$ (35)
 > evalf(Int(f1, x=0..Pi));
 evalf/int/control:   integrating on 0 .. Pi the integrand
 evalf/int/control:   non-analytic function detected
 evalf/int/pieces:   attempt to break up the interval of integration
 evalf/int/control:   integrating on 0 .. 1 the integrand
 evalf/int/control:   integrating on 1 .. 1/2*(2*Pi+4)^(1/2) the integrand
 evalf/int/control:   integrating on 1/2*(2*Pi+4)^(1/2) .. (1+Pi)^(1/2) the integrand
 evalf/int/control:   integrating on (1+Pi)^(1/2) .. 1/2*(6*Pi+4)^(1/2) the integrand
 evalf/int/control:   integrating on 1/2*(6*Pi+4)^(1/2) .. (2*Pi+1)^(1/2) the integrand
 evalf/int/control:   integrating on (2*Pi+1)^(1/2) .. 1/2*(10*Pi+4)^(1/2) the integrand
 evalf/int/control:   integrating on 1/2*(10*Pi+4)^(1/2) .. Pi the integrand
 $⌈{\mathrm{cos}}{}\left({{x}}^{{2}}{-}{1}\right)⌉$
 ${1}$
 ${1}$
 ${0}$
 ${0}$
 ${1}$
 ${1}$
 ${0}$
 ${2.188873934}$ (36)
 > f2 := conjugate(sin(x+I*arctan(x^2,sin(x))));
 ${\mathrm{f2}}{≔}{\mathrm{sin}}{}\left(\overline{{x}{+}{I}{}{\mathrm{arctan}}{}\left({{x}}^{{2}}{,}{\mathrm{sin}}{}\left({x}\right)\right)}\right)$ (37)
 > evalf(Int(f2, x=0..2*Pi));
 evalf/int/control:   integrating on 0 .. 2*Pi the integrand
 evalf/int/control:   attempt to split integrand into Re and Im parts
 evalf/int/control:   integrating on 0 .. 2*Pi the integrand
 Control:   Entering NAGInt
 Control:   trying d01ajc (nag_1d_quad_gen)
 d01ajc:   "trying evalhf callbacks"
 d01ajc:   result=-1.65939040794013204
 d01ajc:   abserr=.204184750295993220e-9; num_subint=2; fun_count=63
 Control:   result=-1.65939040794013204
 evalf/int/control:   integrating on 0 .. 2*Pi the integrand
 Control:   Entering NAGInt
 Control:   trying d01ajc (nag_1d_quad_gen)
 d01ajc:   "trying evalhf callbacks"
 d01ajc:   result=1.55748944065209516
 d01ajc:   abserr=.728975743687537407e-10; num_subint=2; fun_count=63
 Control:   result=1.55748944065209516
 ${\mathrm{sin}}{}\left(\overline{{x}{+}{I}{}{\mathrm{arctan}}{}\left({{x}}^{{2}}{,}{\mathrm{sin}}{}\left({x}\right)\right)}\right)$
 ${\mathrm{sin}}{}\left({x}\right){}{\mathrm{cosh}}{}\left({\mathrm{arctan}}{}\left({{x}}^{{2}}{,}{\mathrm{sin}}{}\left({x}\right)\right)\right)$
 ${-}{\mathrm{cos}}{}\left({x}\right){}{\mathrm{sinh}}{}\left({\mathrm{arctan}}{}\left({{x}}^{{2}}{,}{\mathrm{sin}}{}\left({x}\right)\right)\right)$
 ${-1.659390408}{+}{1.557489441}{}{I}$ (38)

Timing Improvements

 The following two tests illustrate the timing improvements for numerical integration achieved in Maple 7 through the incorporation of compiled NAG routines. For integration problems on a finite interval with no integrand singularities, the speedup is typically a factor of 20 to 30.

 Test   Maple 6   Maple 7   Speedup
 ----   -------   -------   -------
 ex1      12.25      0.27      45.4
 ex2       5.12      0.29      17.7

 ex1
 > evalf(Int(ln(2*x^2)*x^4 / (1 - exp(2*x^2)), x = -infinity..infinity));
 evalf/int/control:   integrating on -infinity .. infinity the integrand
 evalf/int/control:   "attempting Levin method"
 Control:   Entering NAGInt
 Control:   trying d01amc (nag_1d_quad_inf)
 d01amc:   "trying evalhf callbacks"
 d01amc:   result=-.653212119735574331e-1
 d01amc:   abserr=.222032947583272744e-10; num_subint=12; fun_count=345
 Control:   result=-.653212119735574331e-1
 Control:   Entering NAGInt
 Control:   trying d01amc (nag_1d_quad_inf)
 d01amc:   "trying evalhf callbacks"
 d01amc:   result=-.653212119735574331e-1
 d01amc:   abserr=.222032947583272744e-10; num_subint=12; fun_count=345
 Control:   result=-.653212119735574331e-1
 $\frac{{\mathrm{ln}}{}\left({2}{}{{x}}^{{2}}\right){}{{x}}^{{4}}}{{1}{-}{{ⅇ}}^{{2}{}{{x}}^{{2}}}}$
 ${-0.1306424239}$ (39)
 ex2
 > evalf(Int(1/exp(x*ln(x)), x=1..infinity));
 evalf/int/control:   integrating on 1 .. infinity the integrand
 evalf/int/control:   "attempting Levin method"
 evalf/int/IsolateInt:   splitting interval based on points: [1, 10.8462559396596, infinity]
 evalf/int/control:   integrating on 10.8462559396596 .. infinity the integrand
 evalf/int/control:   "attempting Levin method"
 Control:   Entering NAGInt
 Control:   trying d01amc (nag_1d_quad_inf)
 d01amc:   "trying evalhf callbacks"
 d01amc:   result=.173116322605122934e-11
 d01amc:   abserr=.860731693119929988e-14; num_subint=1; fun_count=15
 Control:   result=.173116322605122934e-11
 evalf/int/control:   integrating on 1 .. 10.8462559396596 the integrand
 Control:   Entering NAGInt
 Control:   trying d01ajc (nag_1d_quad_gen)
 d01ajc:   "trying evalhf callbacks"
 d01ajc:   result=.704169960435743203
 d01ajc:   abserr=.472390388719768988e-10; num_subint=2; fun_count=63
 Control:   result=.704169960435743203
 $\frac{{1}}{{{ⅇ}}^{{x}{}{\mathrm{ln}}{}\left({x}\right)}}$
 ${\left(\frac{{1}}{{x}}\right)}^{{x}}$
 $\frac{{1}}{{{ⅇ}}^{{x}{}{\mathrm{ln}}{}\left({x}\right)}}$
 ${0.7041699604}$ (40)