Augmented Dynamic Adaptive Model

This vignette briefly explains how to use the adam() function and the related auto.adam() in the smooth package. It does not aim to cover all aspects of the function, focusing instead on the main ones.

ADAM stands for Augmented Dynamic Adaptive Model. It is a model that underlies ETS, ARIMA and regression, connecting them in a unified framework. The underlying model for ADAM is a Single Source of Error state space model, which is explained in detail separately in an online textbook.

The main philosophy of the adam() function is to be agnostic of the provided data. This means that it will work with ts, msts, zoo, xts, data.frame, numeric and other classes of data. The seasonality of the model is specified via a separate lags parameter, so you are not obliged to transform the existing data into anything specific and can use it as is. If you provide a matrix, a data.frame, a data.table or any other multivariate structure, then the function will use the first column for the response variable and the others for the explanatory ones. One thing that is currently assumed in the function is that the data is measured at a regular frequency; if this is not the case, you will need to introduce missing values manually.

In order to run the experiments in this vignette, we need to load the following packages:

require(greybox)
require(smooth)
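To illustrate the data-agnostic design mentioned above, here is a hedged sketch (assuming the zoo package is installed; the series itself is randomly generated for illustration) that fits a model to a zoo object, with the seasonality declared via the lags parameter rather than via the data class:

```r
# A purely illustrative series: 120 daily observations as a zoo object
library(zoo)
yZoo <- zoo(rnorm(120, 1000, 50), order.by=as.Date("2020-01-01")+0:119)
# Weekly seasonality is declared via lags, not via the data class
testModelZoo <- adam(yZoo, "ANA", lags=c(1,7))
```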

ADAM ETS

First and foremost, ADAM implements the ETS model, although in a more flexible way than in Hyndman et al. (2008): it supports different distributions for the error term, which are regulated via the distribution parameter. By default, the additive error model relies on the Normal distribution, while the multiplicative error one assumes the Inverse Gaussian. If you want to reproduce the classical ETS, you need to specify distribution="dnorm". Here is an example of ADAM ETS(MMM) with the Normal distribution on the AirPassengers data:

testModel <- adam(AirPassengers, "MMM", lags=c(1,12), distribution="dnorm",
                  h=12, holdout=TRUE)
summary(testModel)
#> 
#> Model estimated using adam() function: ETS(MMM)
#> Response variable: AirPassengers
#> Distribution used in the estimation: Normal
#> Loss function type: likelihood; Loss function value: 470.6091
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%  
#> alpha         0.6661     0.0819     0.5039      0.8278 *
#> beta          0.0038     0.0226     0.0000      0.0485  
#> gamma         0.0298     0.0351     0.0000      0.0991  
#> level       111.4423     5.5993   100.3511    122.5066 *
#> trend         1.0098     0.0028     1.0042      1.0154 *
#> seasonal_1    0.8973     0.0058     0.8859      0.9227 *
#> seasonal_2    0.8991     0.0096     0.8876      0.9245 *
#> seasonal_3    1.0297     0.0107     1.0182      1.0550 *
#> seasonal_4    0.9957     0.0090     0.9843      1.0211 *
#> seasonal_5    1.0021     0.0070     0.9907      1.0275 *
#> seasonal_6    1.1352     0.0093     1.1237      1.1605 *
#> seasonal_7    1.2382     0.0128     1.2268      1.2636 *
#> seasonal_8    1.2237     0.0112     1.2122      1.2490 *
#> seasonal_9    1.0642     0.0107     1.0527      1.0895 *
#> seasonal_10   0.9236     0.0096     0.9121      0.9489 *
#> seasonal_11   0.8004     0.0083     0.7889      0.8257 *
#> 
#> Error standard deviation: 0.0379
#> Sample size: 132
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 115
#> Information criteria:
#>       AIC      AICc       BIC      BICc 
#>  975.2182  980.5866 1024.2258 1037.3323
plot(forecast(testModel,h=12,interval="prediction"))

You might notice that the summary contains more than what is reported by other smooth functions. It also produces standard errors for the estimated parameters, based on the Fisher Information calculation. Note that this is computationally expensive, so if you have a model with more than 30 variables, the calculation of standard errors might take a considerable amount of time. As for the default print() method, it produces a shorter summary of the model, without the standard errors (similar to what es() does):

testModel
#> Time elapsed: 0.14 seconds
#> Model estimated using adam() function: ETS(MMM)
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 470.6091
#> Persistence vector g:
#>  alpha   beta  gamma 
#> 0.6661 0.0038 0.0298 
#> 
#> Sample size: 132
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 115
#> Information criteria:
#>       AIC      AICc       BIC      BICc 
#>  975.2182  980.5866 1024.2258 1037.3323 
#> 
#> Forecast errors:
#> ME: -11.657; MAE: 16.154; RMSE: 22.504
#> sCE: -53.291%; Asymmetry: -70%; sMAE: 6.154%; sMSE: 0.735%
#> MASE: 0.671; RMSSE: 0.718; rMAE: 0.213; rRMSE: 0.219

Also, note that prediction intervals in the case of multiplicative error models are approximate. It is advisable to use simulations instead (which is slower, but more accurate):

plot(forecast(testModel,h=18,interval="simulated"))

If you want to carry out residual diagnostics, then it is recommended to use the plot() function, something like this (you can select which of the plots to produce):

par(mfcol=c(3,4))
plot(testModel,which=c(1:11))
par(mfcol=c(1,1))
plot(testModel,which=12)

By default, ADAM estimates models via maximisation of the likelihood function. But there is also a loss parameter, which allows selecting from a list of already implemented loss functions (again, see the documentation for adam() for the full list) or providing a function written by the user. Here is how to do the latter, using BJsales as an example:

lossFunction <- function(actual, fitted, B){
  return(sum(abs(actual-fitted)^3))
}
testModel <- adam(BJsales, "AAN", silent=FALSE, loss=lossFunction,
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.03 seconds
#> Model estimated using adam() function: ETS(AAN)
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: custom; Loss function value: 599.2241
#> Persistence vector g:
#>  alpha   beta 
#> 1.0000 0.2269 
#> 
#> Sample size: 138
#> Number of estimated parameters: 4
#> Number of degrees of freedom: 134
#> Information criteria are unavailable for the chosen loss & distribution.
#> 
#> Forecast errors:
#> ME: 3.015; MAE: 3.129; RMSE: 3.866
#> sCE: 15.918%; Asymmetry: 91.7%; sMAE: 1.376%; sMSE: 0.029%
#> MASE: 2.626; RMSSE: 2.52; rMAE: 1.009; rRMSE: 1.009

Note that the function needs to have the parameters actual, fitted and B, which correspond to the vector of actual values, the vector of fitted values on each iteration and the vector of the parameters being optimised.
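As a further sketch (illustrative only, since the ordering of the elements in B depends on the specified model), the vector B can be used, for example, to penalise the magnitude of the optimised parameters:

```r
# Illustrative only: MAE-style loss with a rough penalty on the
# magnitude of all optimised parameters in B
penalisedLoss <- function(actual, fitted, B){
  return(sum(abs(actual-fitted)) + 0.1*sum(B^2))
}
testModel <- adam(BJsales, "AAN", loss=penalisedLoss,
                  h=12, holdout=TRUE)
```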

The loss and distribution parameters are independent, so in the example above we have assumed that the error term follows the Normal distribution, but we have estimated its parameters using a non-conventional loss, because we can. Some distributions assume an additional parameter, which can either be estimated or provided by the user. These include the Asymmetric Laplace (distribution="dalaplace") with alpha, the Generalised Normal and Log-Generalised Normal (distribution=c("dgnorm","dlgnorm")) with shape, and Student's t (distribution="dt") with nu:

testModel <- adam(BJsales, "MMN", silent=FALSE, distribution="dgnorm", shape=3,
                  h=12, holdout=TRUE)
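The other distributions mentioned above are specified analogously; for example, the Asymmetric Laplace with a fixed alpha (the value 0.95 is chosen arbitrarily for illustration):

```r
# alpha fixed at an arbitrary illustrative value
testModel <- adam(BJsales, "MMN", distribution="dalaplace", alpha=0.95,
                  h=12, holdout=TRUE)
```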

The model selection in ADAM ETS relies on information criteria and works correctly only for loss="likelihood". There are several options for how to select the model; see them in the description of the function (?adam). The default one uses a branch-and-bound algorithm, similar to the one used in es(), but only considers additive trend models (the multiplicative trend ones are less stable and need more attention from a forecaster):

testModel <- adam(AirPassengers, "ZXZ", lags=c(1,12), silent=FALSE,
                  h=12, holdout=TRUE)
#> Forming the pool of models based on... ANN , ANA , MNM , MAM , Estimation progress: 100%... Done!
testModel
#> Time elapsed: 0.69 seconds
#> Model estimated using adam() function: ETS(MAM)
#> With optimal initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 466.9086
#> Persistence vector g:
#>  alpha   beta  gamma 
#> 0.7807 0.0003 0.0002 
#> 
#> Sample size: 132
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 115
#> Information criteria:
#>       AIC      AICc       BIC      BICc 
#>  967.8172  973.1857 1016.8249 1029.9313 
#> 
#> Forecast errors:
#> ME: 11.859; MAE: 22.322; RMSE: 26.996
#> sCE: 54.214%; Asymmetry: 68.4%; sMAE: 8.504%; sMSE: 1.058%
#> MASE: 0.927; RMSSE: 0.862; rMAE: 0.294; rRMSE: 0.262

Note that the function produces point forecasts if h>0, but it does not generate prediction intervals. This is why you need to use the forecast() method (as shown in the first example in this vignette).
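For example, prediction intervals for the selected model can be produced with:

```r
testForecast <- forecast(testModel, h=12, interval="prediction")
plot(testForecast)
```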

Similarly to es(), the function supports the combination of models, but it saves all the tested models in the output for potential reuse. Here is how it works:

testModel <- adam(AirPassengers, "CXC", lags=c(1,12),
                  h=12, holdout=TRUE)
testForecast <- forecast(testModel,h=18,interval="semiparametric", level=c(0.9,0.95))
testForecast
#>          Point forecast Lower bound (5%) Lower bound (2.5%) Upper bound (95%)
#> Jan 1960       411.7284         388.9060           384.6620          435.0953
#> Feb 1960       406.5590         377.9249           372.6446          436.0670
#> Mar 1960       466.6605         427.2715           420.0654          507.4993
#> Apr 1960       449.9171         408.8108           401.3205          492.6667
#> May 1960       450.5760         408.4541           400.7882          494.4225
#> Jun 1960       512.9144         462.9301           453.8543          565.0361
#> Jul 1960       569.9673         512.4185           501.9906          630.0703
#> Aug 1960       569.1866         510.7379           500.1577          630.2760
#> Sep 1960       498.4551         447.3865           438.1411          551.8259
#> Oct 1960       434.6231         389.7640           381.6464          481.5206
#> Nov 1960       378.8849         339.6652           332.5695          419.8925
#> Dec 1960       426.5414         381.4206           373.2682          473.7671
#> Jan 1961       433.3114         385.2139           376.5507          483.7699
#> Feb 1961       427.7775         377.6289           368.6300          480.5350
#> Mar 1961       490.9117         429.1080           418.0754          556.1836
#> Apr 1961       473.1945         410.6973           399.5832          539.3836
#> May 1961       473.7861         410.4375           399.1835          540.9272
#> Jun 1961       539.2199         466.5032           453.5946          616.3333
#>          Upper bound (97.5%)
#> Jan 1960            439.7020
#> Feb 1960            441.9294
#> Mar 1960            515.6710
#> Apr 1960            501.2512
#> May 1960            503.2368
#> Jun 1960            575.5352
#> Jul 1960            642.1989
#> Aug 1960            642.6145
#> Sep 1960            562.6044
#> Oct 1960            490.9955
#> Nov 1960            428.1788
#> Dec 1960            483.3211
#> Jan 1961            494.0053
#> Feb 1961            491.2710
#> Mar 1961            569.5253
#> Apr 1961            552.9558
#> May 1961            554.7064
#> Jun 1961            632.1691
plot(testForecast)

Yes, the function now supports vectors for the levels, in case you want to produce several intervals at once. In fact, it also supports the side parameter for prediction intervals, so you can extract specific quantiles without a hassle:

forecast(testModel,h=18,interval="semiparametric", level=c(0.9,0.95,0.99), side="upper")
#>          Point forecast Upper bound (90%) Upper bound (95%) Upper bound (99%)
#> Jan 1960       411.7284          429.8240          435.0953          445.0988
#> Feb 1960       406.5590          429.3724          436.0670          448.8109
#> Mar 1960       466.6605          498.1849          507.4993          525.2812
#> Apr 1960       449.9171          482.8908          492.6667          511.3560
#> May 1960       450.5760          484.3877          494.4225          513.6150
#> Jun 1960       512.9144          553.0895          565.0361          587.9036
#> Jul 1960       569.9673          616.2759          630.0703          656.4936
#> Aug 1960       569.1866          616.2460          630.2760          657.1600
#> Sep 1960       498.4551          539.5697          551.8259          575.3103
#> Oct 1960       434.6231          470.7477          481.5206          502.1658
#> Nov 1960       378.8849          410.4715          419.8925          437.9484
#> Dec 1960       426.5414          462.9081          473.7671          494.5886
#> Jan 1961       433.3114          472.1445          483.7699          506.0847
#> Feb 1961       427.7775          468.3510          480.5350          503.9517
#> Mar 1961       490.9117          541.0599          556.1836          585.3014
#> Apr 1961       473.1945          524.0113          539.3836          569.0174
#> May 1961       473.7861          525.3239          540.9272          571.0165
#> Jun 1961       539.2199          598.4040          616.3333          650.9165
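Because all the tested models are saved in the output, the estimated combination can in principle be reapplied without reestimation. Assuming adam() follows the same convention as es() for previously estimated models, a sketch would be:

```r
# Assumption: a previously estimated adam combination can be passed
# back to the function to skip the reestimation step
testModelReused <- adam(AirPassengers, testModel, h=12, holdout=TRUE)
```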

A brand new feature of the function is the possibility of using several frequencies (double / triple / quadruple / ... seasonal models). In order to show how this works, we will generate an artificial time series, inspired by half-hourly electricity demand, using the sim.gum() function:

ordersGUM <- c(1,1,1)
lagsGUM <- c(1,48,336)
initialGUM1 <- -25381.7
initialGUM2 <- c(23955.09, 24248.75, 24848.54, 25012.63, 24634.14, 24548.22, 24544.63, 24572.77,
                 24498.33, 24250.94, 24545.44, 25005.92, 26164.65, 27038.55, 28262.16, 28619.83,
                 28892.19, 28575.07, 28837.87, 28695.12, 28623.02, 28679.42, 28682.16, 28683.40,
                 28647.97, 28374.42, 28261.56, 28199.69, 28341.69, 28314.12, 28252.46, 28491.20,
                 28647.98, 28761.28, 28560.11, 28059.95, 27719.22, 27530.23, 27315.47, 27028.83,
                 26933.75, 26961.91, 27372.44, 27362.18, 27271.31, 26365.97, 25570.88, 25058.01)
initialGUM3 <- c(23920.16, 23026.43, 22812.23, 23169.52, 23332.56, 23129.27, 22941.20, 22692.40,
                 22607.53, 22427.79, 22227.64, 22580.72, 23871.99, 25758.34, 28092.21, 30220.46,
                 31786.51, 32699.80, 33225.72, 33788.82, 33892.25, 34112.97, 34231.06, 34449.53,
                 34423.61, 34333.93, 34085.28, 33948.46, 33791.81, 33736.17, 33536.61, 33633.48,
                 33798.09, 33918.13, 33871.41, 33403.75, 32706.46, 31929.96, 31400.48, 30798.24,
                 29958.04, 30020.36, 29822.62, 30414.88, 30100.74, 29833.49, 28302.29, 26906.72,
                 26378.64, 25382.11, 25108.30, 25407.07, 25469.06, 25291.89, 25054.11, 24802.21,
                 24681.89, 24366.97, 24134.74, 24304.08, 25253.99, 26950.23, 29080.48, 31076.33,
                 32453.20, 33232.81, 33661.61, 33991.21, 34017.02, 34164.47, 34398.01, 34655.21,
                 34746.83, 34596.60, 34396.54, 34236.31, 34153.32, 34102.62, 33970.92, 34016.13,
                 34237.27, 34430.08, 34379.39, 33944.06, 33154.67, 32418.62, 31781.90, 31208.69,
                 30662.59, 30230.67, 30062.80, 30421.11, 30710.54, 30239.27, 28949.56, 27506.96,
                 26891.75, 25946.24, 25599.88, 25921.47, 26023.51, 25826.29, 25548.72, 25405.78,
                 25210.45, 25046.38, 24759.76, 24957.54, 25815.10, 27568.98, 29765.24, 31728.25,
                 32987.51, 33633.74, 34021.09, 34407.19, 34464.65, 34540.67, 34644.56, 34756.59,
                 34743.81, 34630.05, 34506.39, 34319.61, 34110.96, 33961.19, 33876.04, 33969.95,
                 34220.96, 34444.66, 34474.57, 34018.83, 33307.40, 32718.90, 32115.27, 31663.53,
                 30903.82, 31013.83, 31025.04, 31106.81, 30681.74, 30245.70, 29055.49, 27582.68,
                 26974.67, 25993.83, 25701.93, 25940.87, 26098.63, 25771.85, 25468.41, 25315.74,
                 25131.87, 24913.15, 24641.53, 24807.15, 25760.85, 27386.39, 29570.03, 31634.00,
                 32911.26, 33603.94, 34020.90, 34297.65, 34308.37, 34504.71, 34586.78, 34725.81,
                 34765.47, 34619.92, 34478.54, 34285.00, 34071.90, 33986.48, 33756.85, 33799.37,
                 33987.95, 34047.32, 33924.48, 33580.82, 32905.87, 32293.86, 31670.02, 31092.57,
                 30639.73, 30245.42, 30281.61, 30484.33, 30349.51, 29889.23, 28570.31, 27185.55,
                 26521.85, 25543.84, 25187.82, 25371.59, 25410.07, 25077.67, 24741.93, 24554.62,
                 24427.19, 24127.21, 23887.55, 24028.40, 24981.34, 26652.32, 28808.00, 30847.09,
                 32304.13, 33059.02, 33562.51, 33878.96, 33976.68, 34172.61, 34274.50, 34328.71,
                 34370.12, 34095.69, 33797.46, 33522.96, 33169.94, 32883.32, 32586.24, 32380.84,
                 32425.30, 32532.69, 32444.24, 32132.49, 31582.39, 30926.58, 30347.73, 29518.04,
                 29070.95, 28586.20, 28416.94, 28598.76, 28529.75, 28424.68, 27588.76, 26604.13,
                 26101.63, 25003.82, 24576.66, 24634.66, 24586.21, 24224.92, 23858.42, 23577.32,
                 23272.28, 22772.00, 22215.13, 21987.29, 21948.95, 22310.79, 22853.79, 24226.06,
                 25772.55, 27266.27, 28045.65, 28606.14, 28793.51, 28755.83, 28613.74, 28376.47,
                 27900.76, 27682.75, 27089.10, 26481.80, 26062.94, 25717.46, 25500.27, 25171.05,
                 25223.12, 25634.63, 26306.31, 26822.46, 26787.57, 26571.18, 26405.21, 26148.41,
                 25704.47, 25473.10, 25265.97, 26006.94, 26408.68, 26592.04, 26224.64, 25407.27,
                 25090.35, 23930.21, 23534.13, 23585.75, 23556.93, 23230.25, 22880.24, 22525.52,
                 22236.71, 21715.08, 21051.17, 20689.40, 20099.18, 19939.71, 19722.69, 20421.58,
                 21542.03, 22962.69, 23848.69, 24958.84, 25938.72, 26316.56, 26742.61, 26990.79,
                 27116.94, 27168.78, 26464.41, 25703.23, 25103.56, 24891.27, 24715.27, 24436.51,
                 24327.31, 24473.02, 24893.89, 25304.13, 25591.77, 25653.00, 25897.55, 25859.32,
                 25918.32, 25984.63, 26232.01, 26810.86, 27209.70, 26863.50, 25734.54, 24456.96)
y <- sim.gum(orders=ordersGUM, lags=lagsGUM, nsim=1, frequency=336, obs=3360,
             measurement=rep(1,3), transition=diag(3), persistence=c(0.045,0.162,0.375),
             initial=cbind(initialGUM1,initialGUM2,initialGUM3))$data

We can then apply ADAM to this data:

testModel <- adam(y, "MMdM", lags=c(1,48,336), initial="backcasting",
                  silent=FALSE, h=336, holdout=TRUE)
testModel
#> Time elapsed: 0.97 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> With backcasting initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 19589.51
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.0373 0.0000 0.0753 0.2196 
#> Damping parameter: 0
#> Sample size: 3024
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 3018
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 39191.02 39191.05 39227.11 39227.22 
#> 
#> Forecast errors:
#> ME: -27.774; MAE: 145.766; RMSE: 180.439
#> sCE: -30.379%; Asymmetry: -17.7%; sMAE: 0.475%; sMSE: 0.003%
#> MASE: 0.198; RMSSE: 0.176; rMAE: 0.022; rRMSE: 0.023

Note that the more lags you have, the more initial seasonal components the function needs to estimate, which is a difficult task. This is why we used initial="backcasting" in the example above: it speeds up the estimation by reducing the number of parameters to estimate. Still, the optimiser might not get close to the optimal value, so we can help it. First, we can give it more time for the calculation, increasing the number of iterations via maxeval (the default value is 40 iterations for each estimated parameter, e.g. 40 × 5 = 200 in our case):

testModel <- adam(y, "MMdM", lags=c(1,48,336), initial="backcasting",
                  silent=FALSE, h=336, holdout=TRUE, maxeval=10000)
testModel
#> Time elapsed: 1.64 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> With backcasting initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 19530.71
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.0315 0.0000 0.1647 0.2333 
#> Damping parameter: 0
#> Sample size: 3024
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 3018
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 39073.41 39073.44 39109.50 39109.61 
#> 
#> Forecast errors:
#> ME: -35.704; MAE: 140.074; RMSE: 174.306
#> sCE: -39.053%; Asymmetry: -24%; sMAE: 0.456%; sMSE: 0.003%
#> MASE: 0.19; RMSSE: 0.17; rMAE: 0.022; rRMSE: 0.022

This takes more time, but typically leads to more refined parameters. You can control other parameters of the optimiser as well, such as algorithm, xtol_rel, print_level and others, which are explained in the documentation for the nloptr function from the nloptr package (run nloptr.print.options() for details). Second, we can give a different set of initial parameters to the optimiser; have a look at what the function saves:

testModel$B

and use this as a starting point for the reestimation (e.g. with a different algorithm):

testModel <- adam(y, "MMdM", lags=c(1,48,336), initial="backcasting",
                  silent=FALSE, h=336, holdout=TRUE, B=testModel$B)
testModel
#> Time elapsed: 0.43 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> With backcasting initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 19530.71
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.0315 0.0000 0.1647 0.2330 
#> Damping parameter: 0.1209
#> Sample size: 3024
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 3018
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 39073.41 39073.44 39109.50 39109.61 
#> 
#> Forecast errors:
#> ME: -35.74; MAE: 140.081; RMSE: 174.317
#> sCE: -39.093%; Asymmetry: -24%; sMAE: 0.456%; sMSE: 0.003%
#> MASE: 0.19; RMSSE: 0.17; rMAE: 0.022; rRMSE: 0.022

If you are ready to wait, you can change the initialisation to initial="optimal", which in our case takes much more time because of the number of estimated parameters: 389 for the chosen model. The estimation process in this case might take 20-30 times longer than in the example above.
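The call itself only differs in the initial argument:

```r
# Beware: with 389 parameters to estimate, this takes much longer
testModel <- adam(y, "MMdM", lags=c(1,48,336), initial="optimal",
                  silent=TRUE, h=336, holdout=TRUE)
```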

In addition, you can specify some parts of the initial state vector or some parts of the persistence vector, here is an example:

testModel <- adam(y, "MMdM", lags=c(1,48,336), initial="backcasting",
                  silent=TRUE, h=336, holdout=TRUE, persistence=list(beta=0.1))
testModel
#> Time elapsed: 0.64 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> With backcasting initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 19806.28
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.1024 0.1000 0.1641 0.6797 
#> Damping parameter: 0
#> Sample size: 3024
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 3019
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 39622.56 39622.58 39652.63 39652.71 
#> 
#> Forecast errors:
#> ME: -16.687; MAE: 148.315; RMSE: 183.178
#> sCE: -18.252%; Asymmetry: -14.4%; sMAE: 0.483%; sMSE: 0.004%
#> MASE: 0.201; RMSSE: 0.179; rMAE: 0.023; rRMSE: 0.023
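Parts of the initial state vector can be fixed in the same spirit; for example, the level (the value below is purely illustrative):

```r
# The level value is purely illustrative
testModel <- adam(y, "MMdM", lags=c(1,48,336),
                  silent=TRUE, h=336, holdout=TRUE,
                  initial=list(level=25000))
```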

The function also handles intermittent data (data with zeroes) and data with missing values. This is partially covered in the vignette on the oes() function. Here is a simple example:

testModel <- adam(rpois(120,0.5), "MNN", silent=FALSE, h=12, holdout=TRUE,
                  occurrence="odds-ratio")
testModel
#> Time elapsed: 0.03 seconds
#> Model estimated using adam() function: iETS(MNN)[O]
#> With optimal initialisation
#> Occurrence model type: Odds ratio
#> Distribution assumed in the model: Mixture of Bernoulli and Gamma
#> Loss function type: likelihood; Loss function value: 71.5689
#> Persistence vector g:
#> alpha 
#>     0 
#> 
#> Sample size: 108
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 103
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 302.5240 302.7548 315.9347 307.1106 
#> 
#> Forecast errors:
#> Asymmetry: -39.502%; sMSE: 33.879%; rRMSE: 0.861; sPIS: -105.448%; sCE: -165.995%

Finally, adam() is faster than the es() function, because its code is more efficient and it uses a different optimisation algorithm with more finely tuned parameters by default. Let's compare:

adamModel <- adam(AirPassengers, "CCC",
                  h=12, holdout=TRUE)
esModel <- es(AirPassengers, "CCC",
              h=12, holdout=TRUE)
"adam:"
#> [1] "adam:"
adamModel
#> Time elapsed: 2.39 seconds
#> Model estimated: ETS(CCC)
#> Loss function type: likelihood
#> 
#> Number of models combined: 30
#> Sample size: 132
#> Average number of estimated parameters: 18.0026
#> Average number of degrees of freedom: 113.9974
#> 
#> Forecast errors:
#> ME: 11.488; MAE: 22.026; RMSE: 26.785
#> sCE: 52.518%; sMAE: 8.391%; sMSE: 1.041%
#> MASE: 0.915; RMSSE: 0.855; rMAE: 0.29; rRMSE: 0.26
"es():"
#> [1] "es():"
esModel
#> Time elapsed: 2.27 seconds
#> Model estimated: ETS(CCC)
#> Loss function type: likelihood
#> 
#> Number of models combined: 30
#> Sample size: 132
#> Average number of estimated parameters: 18.0462
#> Average number of degrees of freedom: 113.9538
#> 
#> Forecast errors:
#> ME: 1.919; MAE: 16.001; RMSE: 22.328
#> sCE: 8.773%; sMAE: 6.096%; sMSE: 0.724%
#> MASE: 0.664; RMSSE: 0.713; rMAE: 0.211; rRMSE: 0.217

ADAM ARIMA

As mentioned above, ADAM does not only contain ETS; it also contains the ARIMA model, which is regulated via the orders parameter. If you want a pure ARIMA, you need to switch ETS off, which is done via model="NNN":

testModel <- adam(BJsales, "NNN", silent=FALSE, orders=c(0,2,2),
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.06 seconds
#> Model estimated using adam() function: ARIMA(0,2,2)
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 240.5926
#> ARMA parameters of the model:
#>         Lag 1
#> MA(1) -0.7483
#> MA(2) -0.0150
#> 
#> Sample size: 138
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 133
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 491.1852 491.6397 505.8214 506.9413 
#> 
#> Forecast errors:
#> ME: 2.959; MAE: 3.085; RMSE: 3.809
#> sCE: 15.619%; Asymmetry: 90.1%; sMAE: 1.357%; sMSE: 0.028%
#> MASE: 2.589; RMSSE: 2.483; rMAE: 0.995; rRMSE: 0.994

Given that both models are implemented in the same framework, they can be compared using information criteria.
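For example, a pure ETS and a pure ARIMA fitted to the same sample can be compared directly via AICc (a sketch; AICc() comes from the greybox package loaded earlier):

```r
etsModel <- adam(BJsales, "AAN", h=12, holdout=TRUE)
arimaModel <- adam(BJsales, "NNN", orders=c(0,2,2), h=12, holdout=TRUE)
# The model with the lower AICc is preferable
c(ETS=AICc(etsModel), ARIMA=AICc(arimaModel))
```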

The functionality of ADAM ARIMA is similar to that of the msarima() function in the smooth package, although there are several differences.

First, changing the distribution parameter allows switching between additive and multiplicative models. For example, distribution="dlnorm" creates an ARIMA equivalent to the one applied to the logarithms of the data:

testModel <- adam(AirPassengers, "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dlnorm",
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.58 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,2)[12]
#> With optimal initialisation
#> Distribution assumed in the model: Log-Normal
#> Loss function type: likelihood; Loss function value: 463.8379
#> ARMA parameters of the model:
#>        Lag 1  Lag 12
#> AR(1) -0.775 -0.6012
#>         Lag 1  Lag 12
#> MA(1)  0.3353  0.2309
#> MA(2) -0.2053 -0.2530
#> 
#> Sample size: 132
#> Number of estimated parameters: 33
#> Number of degrees of freedom: 99
#> Information criteria:
#>       AIC      AICc       BIC      BICc 
#>  993.6758 1016.5737 1088.8082 1144.7113 
#> 
#> Forecast errors:
#> ME: -17.97; MAE: 18.5; RMSE: 23.208
#> sCE: -82.15%; Asymmetry: -95.1%; sMAE: 7.048%; sMSE: 0.782%
#> MASE: 0.768; RMSSE: 0.741; rMAE: 0.243; rRMSE: 0.225

Second, if you want a model with an intercept / drift, you can add it using the constant parameter:

testModel <- adam(AirPassengers, "NNN", silent=FALSE, lags=c(1,12), constant=TRUE,
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dnorm",
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.49 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,2)[12] with drift
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 489.6563
#> Intercept/Drift value: 0.4986
#> ARMA parameters of the model:
#>         Lag 1  Lag 12
#> AR(1) -0.8542 -0.3225
#>         Lag 1 Lag 12
#> MA(1)  0.5112 0.2626
#> MA(2) -0.3057 0.1392
#> 
#> Sample size: 132
#> Number of estimated parameters: 34
#> Number of degrees of freedom: 98
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1047.312 1071.849 1145.328 1205.230 
#> 
#> Forecast errors:
#> ME: -9.808; MAE: 14.792; RMSE: 19.051
#> sCE: -44.837%; Asymmetry: -65.6%; sMAE: 5.635%; sMSE: 0.527%
#> MASE: 0.614; RMSSE: 0.608; rMAE: 0.195; rRMSE: 0.185

If the model contains non-zero differences, then the constant acts as a drift. Third, you can specify the parameters of ARIMA via the arma parameter in the following manner:

testModel <- adam(AirPassengers, "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dnorm",
                  arma=list(ar=c(0.1,0.1), ma=c(-0.96, 0.03, -0.12, 0.03)),
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.2 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,2)[12]
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 534.8602
#> ARMA parameters of the model:
#>       Lag 1 Lag 12
#> AR(1)   0.1    0.1
#>       Lag 1 Lag 12
#> MA(1) -0.96  -0.12
#> MA(2)  0.03   0.03
#> 
#> Sample size: 132
#> Number of estimated parameters: 27
#> Number of degrees of freedom: 105
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1123.720 1138.259 1201.556 1237.050 
#> 
#> Forecast errors:
#> ME: 9.575; MAE: 17.082; RMSE: 19.148
#> sCE: 43.773%; Asymmetry: 61.9%; sMAE: 6.508%; sMSE: 0.532%
#> MASE: 0.709; RMSSE: 0.611; rMAE: 0.225; rRMSE: 0.186

Finally, the initial values of the states can also be provided, although getting the correct ones might be a challenging task (you also need to know how many of them to provide; checking testModel$initial might help):

testModel <- adam(AirPassengers, "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,0)), distribution="dnorm",
                  initial=list(arima=AirPassengers[1:24]),
                  h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.4 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,0)[12]
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 489.0635
#> ARMA parameters of the model:
#>         Lag 1  Lag 12
#> AR(1) -0.4129 -0.1071
#>        Lag 1
#> MA(1) 0.2143
#> MA(2) 0.0565
#> 
#> Sample size: 132
#> Number of estimated parameters: 31
#> Number of degrees of freedom: 101
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1040.127 1059.967 1129.494 1177.931 
#> 
#> Forecast errors:
#> ME: -13.907; MAE: 16.641; RMSE: 21.651
#> sCE: -63.574%; Asymmetry: -81.5%; sMAE: 6.34%; sMSE: 0.68%
#> MASE: 0.691; RMSSE: 0.691; rMAE: 0.219; rRMSE: 0.21

If you work with an ADAM ARIMA model, then there is no such thing as "usual" bounds for the parameters, so the function uses bounds="admissible", checking the AR / MA polynomials in order to make sure that the model is stationary and invertible (aka stable).

Similarly to ETS, you can use different distributions and losses for the estimation. Note that the order selection for ARIMA is done in the auto.adam() function, not in adam()! However, if you specify orders=list(..., select=TRUE) in adam(), it will call auto.adam() and do the selection.
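For example, the following sketch asks adam() to select the orders up to the provided maximum ones (the maximum orders here are chosen just for illustration):

```r
# orders with select=TRUE makes adam() call auto.adam() internally
testModel <- adam(AirPassengers, "NNN", lags=c(1,12),
                  orders=list(ar=c(2,2), i=c(2,1), ma=c(2,2), select=TRUE),
                  h=12, holdout=TRUE)
```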

Finally, ARIMA is typically slower than ETS, mainly because its initial states are more difficult to estimate due to the increased complexity of the model. If you want to speed things up, use initial="backcasting" and reduce the number of iterations via the maxeval parameter.
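A minimal sketch of this speed-up (the value of maxeval below is an arbitrary choice for illustration):

```r
# Backcast the initial states and cap the optimiser at 1000 iterations
testModel <- adam(AirPassengers, "NNN", lags=c(1,12),
                  orders=list(ar=c(1,1), i=c(1,1), ma=c(2,0)),
                  initial="backcasting", maxeval=1000,
                  h=12, holdout=TRUE)
```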

ADAM ETSX / ARIMAX / ETSX+ARIMA

Another important feature of ADAM is the introduction of explanatory variables. Unlike es(), adam() expects a matrix for data and can work with a formula. If the latter is not provided, it will use all the explanatory variables. Here is a brief example:

BJData <- cbind(BJsales,BJsales.lead)
testModel <- adam(BJData, "AAN", h=18, silent=FALSE)

If you work with a data.frame or a similar structure, you can use it directly: ADAM will extract the response variable either by assuming that it is in the first column or from the formula (if you specify one via the formula parameter). Here is an example, where we create a matrix with lags and leads of an explanatory variable:

BJData <- cbind(as.data.frame(BJsales),as.data.frame(xregExpander(BJsales.lead,c(-7:7))))
colnames(BJData)[1] <- "y"
testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, formula=y~xLag1+xLag2+xLag3)
testModel
#> Time elapsed: 0.12 seconds
#> Model estimated using adam() function: ETSX(ANN)
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 197.1386
#> Persistence vector g (excluding xreg):
#> alpha 
#>     1 
#> 
#> Sample size: 132
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 126
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 406.2773 406.9493 423.5741 425.2147 
#> 
#> Forecast errors:
#> ME: 1.181; MAE: 1.618; RMSE: 2.247
#> sCE: 9.409%; Asymmetry: 50.7%; sMAE: 0.716%; sMSE: 0.01%
#> MASE: 1.326; RMSSE: 1.438; rMAE: 0.723; rRMSE: 0.895

Similarly to es(), variable selection is supported, but via the regressors parameter instead of xregDo; it uses the stepwise() function from the greybox package on the residuals of the model:

testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, regressors="select")

The same functionality is supported with ARIMA, so you can have, for example, ARIMAX(0,1,1), which is equivalent to ETSX(A,N,N):

testModel <- adam(BJData, "NNN", h=18, silent=FALSE, holdout=TRUE, regressors="select", orders=c(0,1,1))

The two models might differ because they have different initialisation in the optimiser and different bounds for the parameters (ARIMA relies on the invertibility condition, while ETS uses the usual (0,1) bounds by default). It is possible to make them identical if the number of iterations is increased and the initial parameters are the same. Here is an example of what happens when the two models have exactly the same parameters:

BJData <- BJData[,c("y",names(testModel$initial$xreg))];
testModel <- adam(BJData, "NNN", h=18, silent=TRUE, holdout=TRUE, orders=c(0,1,1),
                  initial=testModel$initial, arma=testModel$arma)
testModel
#> Time elapsed: 0 seconds
#> Model estimated using adam() function: ARIMAX(0,1,1)
#> With provided initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 513.2029
#> ARMA parameters of the model:
#>        Lag 1
#> MA(1) 0.2402
#> 
#> Sample size: 132
#> Number of estimated parameters: 1
#> Number of degrees of freedom: 131
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1028.406 1028.437 1031.289 1031.364 
#> 
#> Forecast errors:
#> ME: 0.636; MAE: 0.636; RMSE: 0.872
#> sCE: 5.063%; Asymmetry: 100%; sMAE: 0.281%; sMSE: 0.001%
#> MASE: 0.521; RMSSE: 0.558; rMAE: 0.284; rRMSE: 0.347
names(testModel$initial)[1] <- names(testModel$initial)[[1]] <- "level"
testModel2 <- adam(BJData, "ANN", h=18, silent=TRUE, holdout=TRUE,
                   initial=testModel$initial, persistence=testModel$arma$ma+1)
testModel2
#> Time elapsed: 0 seconds
#> Model estimated using adam() function: ETSX(ANN)
#> With provided initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 1e+300
#> Persistence vector g (excluding xreg):
#>  alpha 
#> 1.2402 
#> 
#> Sample size: 132
#> Number of estimated parameters: 1
#> Number of degrees of freedom: 131
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1028.406 1028.437 1031.289 1031.364 
#> 
#> Forecast errors:
#> ME: 0.636; MAE: 0.636; RMSE: 0.872
#> sCE: 5.063%; Asymmetry: 100%; sMAE: 0.281%; sMSE: 0.001%
#> MASE: 0.521; RMSSE: 0.558; rMAE: 0.284; rRMSE: 0.347

Another feature of ADAM is time-varying parameters in the SSOE framework, which can be switched on via regressors="adapt":

testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, regressors="adapt")
testModel$persistence
#>        alpha       delta1       delta2       delta3       delta4       delta5 
#> 0.0004758253 0.3163533479 0.3151916749 0.3287991713 0.0026271472 0.3901947312

Note that the default number of iterations might not be sufficient to get close to the optimum of the function, so setting maxeval to something bigger might help. If you want to explore why the optimisation stopped, you can provide the print_level=41 parameter to the function, and it will print the report from the optimiser. The default parameters are tuned to give a reasonable solution, but given the complexity of the model, they do not guarantee the best one every time.
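For example (the maxeval value below is arbitrary, and print_level=41 makes the run verbose):

```r
# Give the optimiser more iterations and print its report
testModel <- adam(BJData, "ANN", h=18, silent=TRUE, holdout=TRUE,
                  regressors="adapt", maxeval=10000, print_level=41)
```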

Finally, you can produce a mixture of ETS, ARIMA and regression by using the respective parameters, like this:

testModel <- adam(BJData, "AAN", h=18, silent=FALSE, holdout=TRUE, orders=c(1,0,0))
summary(testModel)
#> 
#> Model estimated using adam() function: ETSX(AAN)+ARIMA(1,0,0)
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Loss function type: likelihood; Loss function value: 48.844
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%  
#> alpha         0.9979     0.1074     0.7852      1.0000 *
#> beta          0.2843     0.0800     0.1260      0.4424 *
#> phi1[1]      -0.1329     0.0290    -0.1902     -0.0756 *
#> level        85.4714     5.2323    75.1118     95.8134 *
#> trend        -0.0333     0.2609    -0.5498      0.4823  
#> ARIMAState1   2.5778     6.6721   -10.6325     15.7658  
#> xLag3         4.6433     0.1083     4.4289      4.8574 *
#> xLag7         0.4525     0.1194     0.2161      0.6884 *
#> xLag4         3.2369     0.1346     2.9705      3.5029 *
#> xLag6         1.1217     0.1439     0.8367      1.4062 *
#> xLag5         1.9471     0.1522     1.6458      2.2480 *
#> 
#> Error standard deviation: 0.3674
#> Sample size: 132
#> Number of estimated parameters: 12
#> Number of degrees of freedom: 120
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 121.6880 124.3098 156.2816 162.6826

This might be handy when you explore high frequency data and want to add calendar events, apply ETS and add AR/MA errors to it.
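As a sketch of this idea (the December dummy below is made up purely for illustration and is not a serious calendar model), one could combine a seasonal ETS, an AR(1) error term and a calendar-style regressor:

```r
# Hypothetical calendar dummy added to ETS(M,M,M)+ARIMA(1,0,0)
AirData <- cbind(y=AirPassengers,
                 december=as.numeric(cycle(AirPassengers)==12))
testModel <- adam(AirData, "MMM", lags=c(1,12), orders=list(ar=1),
                  h=12, holdout=TRUE)
```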

Finally, if you estimate an ETSX or ARIMAX model and want to speed things up, it is recommended to use initial="backcasting", which will initialise the dynamic part of the model via backcasting and use optimisation only for the parameters of the explanatory variables:

testModel <- adam(BJData, "AAN", h=18, silent=TRUE, holdout=TRUE, initial="backcasting")
summary(testModel)
#> 
#> Model estimated using adam() function: ETSX(AAN)
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Loss function type: likelihood; Loss function value: 46.8982
#> Coefficients:
#>       Estimate Std. Error Lower 2.5% Upper 97.5%  
#> alpha   0.7692     0.0940     0.5831      0.9551 *
#> beta    0.4800     0.2973     0.0000      0.7692  
#> xLag3   4.5775     2.6406    -0.6491      9.7981  
#> xLag7   0.4197     2.6482    -4.8219      5.6555  
#> xLag4   3.1574     2.3485    -1.4909      7.8005  
#> xLag6   1.0822     2.3468    -3.5627      5.7220  
#> xLag5   1.8379     2.2317    -2.5793      6.2501  
#> 
#> Error standard deviation: 0.3562
#> Sample size: 132
#> Number of estimated parameters: 8
#> Number of degrees of freedom: 124
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 109.7964 110.9671 132.8588 135.7170

Auto ADAM

While the original adam() function allows selecting ETS components and explanatory variables, it does not allow selecting the most suitable distribution and / or ARIMA components. This is what the auto.adam() function is for.

In order to select the most appropriate distribution, you need to provide a vector of those that you want to check:

testModel <- auto.adam(BJsales, "XXX", silent=FALSE,
                       distribution=c("dnorm","dlaplace","ds"),
                       h=12, holdout=TRUE)
#> Evaluating models with different distributions... dnorm ,  Selecting ARIMA orders... 
#> Selecting differences... 
#> Selecting ARMA... |
#> The best ARIMA is selected. dlaplace ,  Selecting ARIMA orders... 
#> Selecting differences... 
#> Selecting ARMA... |
#> The best ARIMA is selected. ds ,  Selecting ARIMA orders... 
#> Selecting differences... 
#> Selecting ARMA... |-
#> The best ARIMA is selected. Done!
testModel
#> Time elapsed: 0.55 seconds
#> Model estimated using auto.adam() function: ETS(AAdN) with drift
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 236.8501
#> Intercept/Drift value: 0.6397
#> Persistence vector g:
#>  alpha   beta 
#> 0.9550 0.2839 
#> Damping parameter: 0.8551
#> Sample size: 138
#> Number of estimated parameters: 7
#> Number of degrees of freedom: 131
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 487.7002 488.5617 508.1910 510.3135 
#> 
#> Forecast errors:
#> ME: 0.297; MAE: 1.051; RMSE: 1.323
#> sCE: 1.57%; Asymmetry: 8.8%; sMAE: 0.462%; sMSE: 0.003%
#> MASE: 0.882; RMSSE: 0.862; rMAE: 0.339; rRMSE: 0.345

This process can also be done in parallel, either on an automatically selected number of cores (e.g. parallel=TRUE) or on a number specified by the user (e.g. parallel=4):

testModel <- auto.adam(BJsales, "ZZZ", silent=FALSE, parallel=TRUE,
                       h=12, holdout=TRUE)

If you want to add ARIMA or regression components, you can do it in exactly the same way as for the adam() function. Here is an example of ETS+ARIMA:

testModel <- auto.adam(BJsales, "AAN", orders=list(ar=2,i=0,ma=0), silent=TRUE,
                       distribution=c("dnorm","dlaplace","ds","dgnorm"),
                       h=12, holdout=TRUE)
testModel
#> Time elapsed: 0.42 seconds
#> Model estimated using auto.adam() function: ETS(AAN)+ARIMA(2,0,0)
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 240.5239
#> Persistence vector g:
#>  alpha   beta 
#> 0.2789 0.2134 
#> 
#> ARMA parameters of the model:
#>        Lag 1
#> AR(1) 0.7589
#> AR(2) 0.2321
#> 
#> Sample size: 138
#> Number of estimated parameters: 9
#> Number of degrees of freedom: 129
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 499.0478 500.4540 525.3930 528.8575 
#> 
#> Forecast errors:
#> ME: 2.999; MAE: 3.119; RMSE: 3.858
#> sCE: 15.832%; Asymmetry: 90.6%; sMAE: 1.372%; sMSE: 0.029%
#> MASE: 2.618; RMSSE: 2.515; rMAE: 1.006; rRMSE: 1.007

However, this way the function will just use ARIMA(2,0,0) and fit it together with ETS(A,A,N). If you want it to select the most appropriate ARIMA orders from the provided maximum ones (e.g. up to AR(2), I(2) and MA(2)), you need to add select=TRUE to the list in orders:

testModel <- auto.adam(BJsales, "XXN", orders=list(ar=2,i=2,ma=2,select=TRUE),
                       distribution="default", silent=FALSE,
                       h=12, holdout=TRUE)
#> Evaluating models with different distributions... default ,  Selecting ARIMA orders... 
#> Selecting differences... 
#> Selecting ARMA... |
#> The best ARIMA is selected. Done!
testModel
#> Time elapsed: 0.16 seconds
#> Model estimated using auto.adam() function: ETS(AAdN) with drift
#> With optimal initialisation
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 236.8501
#> Intercept/Drift value: 0.6397
#> Persistence vector g:
#>  alpha   beta 
#> 0.9550 0.2839 
#> Damping parameter: 0.8551
#> Sample size: 138
#> Number of estimated parameters: 7
#> Number of degrees of freedom: 131
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 487.7002 488.5617 508.1910 510.3135 
#> 
#> Forecast errors:
#> ME: 0.297; MAE: 1.051; RMSE: 1.323
#> sCE: 1.57%; Asymmetry: 8.8%; sMAE: 0.462%; sMSE: 0.003%
#> MASE: 0.882; RMSSE: 0.862; rMAE: 0.339; rRMSE: 0.345

Knowing how to work with adam(), you can apply similar principles when dealing with auto.adam(). Just keep in mind that the provided persistence, phi, initial, arma and B won’t work, because this contradicts the idea of model selection.

Finally, there is also a mechanism for automatic outlier detection, which extracts residuals from the best model, flags observations that lie outside the in-sample prediction interval of width level, and then refits auto.adam() with dummy variables for the outliers. Here is how it works:

testModel <- auto.adam(AirPassengers, "PPP", silent=FALSE, outliers="use",
                       distribution="default",
                       h=12, holdout=TRUE)
#> Evaluating models with different distributions... default ,  Selecting ARIMA orders... 
#> Selecting differences... 
#> Selecting ARMA... |--
#> The best ARIMA is selected. 
#> Dealing with outliers...
testModel
#> Time elapsed: 6.02 seconds
#> Model estimated using auto.adam() function: ETSX(MMdM)
#> With optimal initialisation
#> Distribution assumed in the model: Gamma
#> Loss function type: likelihood; Loss function value: 463.744
#> Persistence vector g (excluding xreg):
#>  alpha   beta  gamma 
#> 0.7578 0.0001 0.0338 
#> Damping parameter: 0.9968
#> Sample size: 132
#> Number of estimated parameters: 19
#> Number of degrees of freedom: 113
#> Information criteria:
#>       AIC      AICc       BIC      BICc 
#>  965.4881  972.2738 1020.2613 1036.8280 
#> 
#> Forecast errors:
#> ME: -3.732; MAE: 15.105; RMSE: 21.797
#> sCE: -17.059%; Asymmetry: -14.2%; sMAE: 5.754%; sMSE: 0.69%
#> MASE: 0.627; RMSSE: 0.696; rMAE: 0.199; rRMSE: 0.212

If you specify outliers="select", the function will create leads and lags 1 of the outliers and then select the most appropriate ones via the regressors parameter of adam().
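A sketch of this option (expect it to take noticeably longer than outliers="use", because of the additional variable selection step):

```r
# Create leads/lags of the detected outliers and select among them
testModel <- auto.adam(AirPassengers, "PPP", silent=TRUE, outliers="select",
                       distribution="default", h=12, holdout=TRUE)
```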

If you want to know more about ADAM, you are welcome to visit the online textbook (this is a work in progress at the moment).

Hyndman, Rob J, Anne B Koehler, J Keith Ord, and Ralph D Snyder. 2008. Forecasting with Exponential Smoothing. Springer Berlin Heidelberg.