LogLikelihood - Maple Help

TimeSeriesAnalysis

 LogLikelihood
 log likelihood of a time series coming from an exponential smoothing model

 Calling Sequence LogLikelihood(model, ts, extraparameters)

Parameters

 model - exponential smoothing model
 ts - time series consisting of a single data set
 extraparameters - (optional) table of parameter values

Description

 • The LogLikelihood command determines the logarithm of the likelihood of obtaining a particular time series from a given exponential smoothing model.
 • For models with additive errors, the value returned is the actual log likelihood: the logarithm of the product of the normal PDF, evaluated at each error value. More precisely, the value is

$\log\left(\prod_{t=1}^{N}\frac{e^{-\frac{\epsilon_t^2}{2\sigma^2}}}{\sqrt{2\pi}\,\sigma}\right)=-\frac{\sum_{t=1}^{N}\epsilon_t^2}{2\sigma^2}-\frac{N\left(\log(2)+\log(\pi)+2\log(\sigma)\right)}{2}$

 where $\epsilon_t$ is the additive error at time $t$, for $t=1..N$. Typically (if $\sigma$ is not specified when defining the model), $\sigma$ is optimized to maximize this value; that is, it is set to $\sqrt{\frac{\sum_{t=1}^{N}\epsilon_t^2}{N}}$. With this value substituted, the log likelihood becomes

$-\frac{N\left(\log(2\pi)+1+\log\left(\sum_{t=1}^{N}\epsilon_t^2\right)-\log(N)\right)}{2}$
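As a language-neutral check of the two expressions above, the following Python sketch computes the Gaussian log likelihood of a set of additive errors both directly and in its concentrated form; the two agree when $\sigma$ is set to its maximizing value. The error values are made up for illustration.

```python
import math

def loglik_additive(errors, sigma):
    """Gaussian log likelihood of additive errors with standard deviation sigma."""
    N = len(errors)
    ss = sum(e * e for e in errors)
    return -ss / (2 * sigma**2) - N * (math.log(2) + math.log(math.pi) + 2 * math.log(sigma)) / 2

def loglik_additive_concentrated(errors):
    """Log likelihood with sigma set to its maximizing value sqrt(sum(e^2)/N)."""
    N = len(errors)
    ss = sum(e * e for e in errors)
    return -N * (math.log(2 * math.pi) + 1 + math.log(ss) - math.log(N)) / 2

# Illustrative error values (not from the example below)
errors = [0.3, -0.1, 0.4, -0.2]
sigma_hat = math.sqrt(sum(e * e for e in errors) / len(errors))
# At sigma = sigma_hat the two forms coincide (up to floating-point rounding).
```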

 • For models with multiplicative errors, given a time series consisting of numbers substantially larger than 1, the absolute magnitude of the errors will be much smaller than for a model with additive errors. Conversely, given a time series consisting of numbers substantially smaller than 1, the errors will be much larger than for a model with additive errors. This happens because, in the representation of the model, multiplicative errors are scaled by multiplying them by the forecast. Nonetheless, we would like to compare models with additive and multiplicative errors on an equal footing. This is accomplished by including an extra term that compensates for this effect: it scales the errors back to their original sizes by multiplying them by the geometric mean of all forecasts. In this case, the final formula for the log likelihood is

$-\frac{N\left(\log(2\pi)+1+\log\left(\left(\sum_{t=1}^{N}\epsilon_t^2\right)\left(\prod_{t=1}^{N}\left|f_t\right|\right)^{\frac{2}{N}}\right)-\log(N)\right)}{2}=-\frac{N\left(\log(2\pi)+1+\log\left(\sum_{t=1}^{N}\epsilon_t^2\right)+\frac{2\sum_{t=1}^{N}\log\left|f_t\right|}{N}-\log(N)\right)}{2}$

 where $f_t$ is the forecast at time $t$, for $t=1..N$.
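The equality in the displayed formula (the geometric-mean factor inside the logarithm versus a sum of logarithms of forecasts) can be verified numerically; this Python sketch uses made-up error and forecast values for illustration.

```python
import math

def loglik_multiplicative(errors, forecasts):
    """Concentrated log likelihood for multiplicative errors, with the
    geometric mean of the absolute forecasts (squared) compensating for
    the scaling of the errors."""
    N = len(errors)
    ss = sum(e * e for e in errors)
    geo = math.prod(abs(f) for f in forecasts) ** (2.0 / N)
    return -N * (math.log(2 * math.pi) + 1 + math.log(ss * geo) - math.log(N)) / 2

def loglik_multiplicative_expanded(errors, forecasts):
    """Equivalent form: the geometric-mean factor moved out of the
    logarithm as a sum of log-forecast terms."""
    N = len(errors)
    ss = sum(e * e for e in errors)
    logs = sum(math.log(abs(f)) for f in forecasts)
    return -N * (math.log(2 * math.pi) + 1 + math.log(ss) + 2 * logs / N - math.log(N)) / 2

# Illustrative values only
errors = [0.05, -0.02, 0.03]
forecasts = [2.5, 2.7, 2.6]
```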
 • If any of the parameters used by the model are unset, the log likelihood cannot be computed. If this is the case, a table of parameter values (such as the one generated by Initialize) can be supplied as a third argument. If a parameter occurs both in the model and in the table, the table takes precedence.
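The precedence rule in the last bullet (values in the supplied table override values stored in the model) can be pictured as a simple map merge; the following Python sketch is a hypothetical illustration of that rule, not Maple code, using parameter values from the example below.

```python
# Parameters already set in the model (from the fitted example below)
model_params = {"alpha": 0.00113636255264286, "l0": 2.63136099750421}
# Supplied extra parameter table
extra_params = {"alpha": 0.05}
# The table takes precedence for any parameter present in both
effective = {**model_params, **extra_params}
```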

Examples

 > $\mathrm{with}\left(\mathrm{TimeSeriesAnalysis}\right):$

Consider the following time series.

 > $\mathrm{ts}≔\mathrm{TimeSeries}\left(\left[2.7,1.8,3.4,2.5,2.6,2.4,2.9,2.9\right],\mathrm{period}=2\right)$
 ${\mathrm{ts}}{≔}\left[\begin{array}{c}{\mathrm{Time series}}\\ {\mathrm{data set}}\\ {\mathrm{8 rows of data:}}\\ {\mathrm{2013 - 2020}}\end{array}\right]$ (1)

We fit a model to it.

 > $\mathrm{model}≔\mathrm{ExponentialSmoothingModel}\left(\mathrm{ts}\right)$
 ${\mathrm{model}}{≔}{\mathrm{< an ETS\left(A,N,N\right) model >}}$ (2)

The log likelihood of the time series $\mathrm{ts}$ arising from the model $\mathrm{model}$ is computed below.

 > $\mathrm{LogLikelihood}\left(\mathrm{model},\mathrm{ts}\right)$
 ${-4.667483693}$ (3)

This model has only two parameters.

 > $\mathrm{NumberOfParameters}\left(\mathrm{model}\right)$
 ${2}$ (4)
 > $\mathrm{GetParameter}\left(\mathrm{model},\left[\mathrm{\alpha },\mathrm{l0}\right]\right)$
 $\left[{0.00113636255264286}{,}{2.63136099750421}\right]$ (5)

Let us consider an alternative parameter setting.

 > $\mathrm{LogLikelihood}\left(\mathrm{model},\mathrm{ts},\mathrm{table}\left(\left[\mathrm{\alpha }=0.05,\mathrm{l0}=2\right]\right)\right)$
 ${-8.562972403}$ (6)

This setting is substantially less likely. Now consider the parameter values that the optimization is initialized with.

 > $\mathrm{init}≔\mathrm{Initialize}\left(\mathrm{ExponentialSmoothingModel}\left(A,N,N\right),\mathrm{ts}\right)$
 ${\mathrm{init}}{≔}{table}{}\left(\left[{\mathrm{\alpha }}{=}\frac{{1}}{{2}}{,}{\mathrm{l0}}{=}{2.65000000000000}\right]\right)$ (7)
 > $\mathrm{LogLikelihood}\left(\mathrm{model},\mathrm{ts},\mathrm{init}\right)$
 ${-6.639012833}$ (8)

This setting is more likely than the previous one, but less likely than the optimized one.

References

 Hyndman, R.J. and Athanasopoulos, G. (2013) Forecasting: principles and practice. http://otexts.org/fpp/. Accessed on 2013-10-09.
 Hyndman, R.J., Koehler, A.B., Ord, J.K., and Snyder, R.D. (2008) Forecasting with Exponential Smoothing: The State Space Approach. Springer Series in Statistics. Springer-Verlag Berlin Heidelberg.

Compatibility

 • The TimeSeriesAnalysis[LogLikelihood] command was introduced in Maple 18.