Estimating or specifying parameters in state space models

In this notebook we show how to fix specific values of certain parameters in statsmodels’ state space models while estimating others.

In general, state space models allow users to do any of the following (each option is sketched briefly after the list):

  1. Estimate all parameters by maximum likelihood

  2. Fix some parameters and estimate the rest

  3. Fix all parameters (so that no parameters are estimated)
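
In code, these three options correspond to different method calls. The following is a minimal sketch, assuming a hypothetical state space model instance mod (for example an UnobservedComponents model like the one constructed below); the parameter name and numeric values are purely illustrative:

# 1. Estimate all parameters by maximum likelihood
res_mle = mod.fit()

# 2. Fix some parameters and estimate the rest
#    (illustrative name/value; any subset of mod.param_names can be fixed)
res_partial = mod.fit_constrained({'sigma2.irregular': 1.0})

# 3. Fix all parameters: skip estimation entirely and run the Kalman
#    filter and smoother directly at a chosen parameter vector
res_fixed = mod.smooth([1.0, 0.0, 1e-5])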

[1]:
%matplotlib inline

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

from pandas_datareader.data import DataReader

To illustrate, we will use the U.S. Consumer Price Index for Apparel (FRED series CPIAPPNS), which has a time-varying level and a strong seasonal component.

[2]:
endog = DataReader('CPIAPPNS', 'fred', start='1980').asfreq('MS')
endog.plot(figsize=(15, 3));
[Figure: time series plot of the monthly CPIAPPNS series, 1980–2024]

It is well known (e.g. Harvey and Jaeger [1993]) that the HP filter output can be generated by an unobserved components model given certain restrictions on the parameters.

The unobserved components model is:

\[\begin{split}\begin{aligned} y_t & = \mu_t + \varepsilon_t & \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \\ \mu_t &= \mu_{t-1} + \beta_{t-1} + \eta_t & \eta_t \sim N(0, \sigma_\eta^2) \\ \beta_t &= \beta_{t-1} + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2) \\ \end{aligned}\end{split}\]

For the trend to match the output of the HP filter, the parameters must be set as follows:

\[\begin{split}\begin{aligned} \frac{\sigma_\varepsilon^2}{\sigma_\zeta^2} & = \lambda \\ \sigma_\eta^2 & = 0 \end{aligned}\end{split}\]

where \(\lambda\) is the parameter of the associated HP filter. For the monthly data that we use here, it is usually recommended that \(\lambda = 129600\).
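
This value comes from scaling the standard quarterly value of 1600 by the fourth power of the observation-frequency ratio (the adjustment usually attributed to Ravn and Uhlig); a quick computation:

# Frequency-adjusted HP smoothing parameter: monthly data is 3 times the
# quarterly frequency, so scale 1600 by 3**4
lam = 1600 * (12 / 4) ** 4
print(lam)  # 129600.0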

[3]:
# Run the HP filter with lambda = 129600
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)

# The unobserved components model above is the local linear trend, or "lltrend", specification
mod = sm.tsa.UnobservedComponents(endog, 'lltrend')
print(mod.param_names)
['sigma2.irregular', 'sigma2.level', 'sigma2.trend']

The parameters of the unobserved components model (UCM) are written as:

  • \(\sigma_\varepsilon^2 = \text{sigma2.irregular}\)

  • \(\sigma_\eta^2 = \text{sigma2.level}\)

  • \(\sigma_\zeta^2 = \text{sigma2.trend}\)

To satisfy the above restrictions, we will set \((\sigma_\varepsilon^2, \sigma_\eta^2, \sigma_\zeta^2) = (1, 0, 1 / 129600)\).

Because we are fixing all parameters here, we do not need to use the fit method at all: that method performs maximum likelihood estimation, and there is nothing left to estimate. Instead, we can run the Kalman filter and smoother directly at our chosen parameters using the smooth method.

[4]:
res = mod.smooth([1., 0, 1. / 129600])
print(res.summary())
                        Unobserved Components Results
==============================================================================
Dep. Variable:               CPIAPPNS   No. Observations:                  538
Model:             local linear trend   Log Likelihood               -3007.625
Date:                Thu, 14 Nov 2024   AIC                           6021.250
Time:                        17:19:56   BIC                           6034.102
Sample:                    01-01-1980   HQIC                          6026.278
                         - 10-01-2024
Covariance Type:                  opg
====================================================================================
                       coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------------
sigma2.irregular     1.0000      0.009    115.645      0.000       0.983       1.017
sigma2.level              0      0.000          0      1.000      -0.000       0.000
sigma2.trend      7.716e-06   1.96e-07     39.309      0.000    7.33e-06     8.1e-06
===================================================================================
Ljung-Box (L1) (Q):                 254.48   Jarque-Bera (JB):                 1.60
Prob(Q):                              0.00   Prob(JB):                         0.45
Heteroskedasticity (H):               2.11   Skew:                             0.04
Prob(H) (two-sided):                  0.00   Kurtosis:                         2.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

The estimate that corresponds to the HP filter’s trend estimate is given by the smoothed estimate of the level (which is \(\mu_t\) in the notation above):

[5]:
ucm_trend = pd.Series(res.level.smoothed, index=endog.index)

It is easy to see that the estimate of the smoothed level from the UCM is equal to the output of the HP filter:

[6]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(hp_trend, label='HP estimate')
ax.plot(ucm_trend, label='UCM estimate')
ax.legend();
[Figure: HP estimate and UCM estimate of the trend, plotted together and lying on top of one another]
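
The correspondence can also be checked numerically. A quick sketch (the squeeze call simply guards against the HP trend being returned as a one-column DataFrame):

# Largest absolute gap between the two trend estimates; it should be
# negligible relative to the scale of the series
print((ucm_trend - hp_trend.squeeze()).abs().max())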

Adding a seasonal component

However, unobserved components models are more flexible than the HP filter. For example, the data shown above is clearly seasonal, but with time-varying seasonal effects (the seasonality is much weaker at the beginning of the sample than at the end). One of the benefits of the unobserved components framework is that we can add a stochastic seasonal component. In this case, we will estimate the variance of the seasonal component by maximum likelihood while still imposing the parameter restrictions described above, so that the trend continues to correspond to the HP filter concept.

Adding the stochastic seasonal component adds one new parameter, sigma2.seasonal.

[7]:
# Construct a local linear trend model with a stochastic seasonal component of period 1 year
mod = sm.tsa.UnobservedComponents(endog, 'lltrend', seasonal=12, stochastic_seasonal=True)
print(mod.param_names)
['sigma2.irregular', 'sigma2.level', 'sigma2.trend', 'sigma2.seasonal']

In this case, we will continue to restrict the first three parameters as described above, but we want to estimate the value of sigma2.seasonal by maximum likelihood. Therefore, we will use the fit method along with the fix_params context manager.

The fix_params method takes a dictionary of parameter names and associated values. Within the generated context, those parameters will be held at the given values in all operations. In the case of the fit method, only the parameters that were not fixed will be estimated.

[8]:
# Here we restrict the first three parameters to specific values
with mod.fix_params({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600}):
    # Now we fit any remaining parameters, which in this case
    # is just `sigma2.seasonal`
    res_restricted = mod.fit()
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            1     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.87102D+00    |proj g|=  2.27574D-01
 This problem is unconstrained.

At iterate    5    f=  3.25352D+00    |proj g|=  4.19624D-06

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    1      5     12      1     0     0   4.196D-06   3.254D+00
  F =   3.2535228522888620

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL

Alternatively, we could have simply used the fit_constrained method, which also accepts a dictionary of constraints:

[9]:
res_restricted = mod.fit_constrained({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600})
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            1     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.87102D+00    |proj g|=  2.27574D-01

At iterate    5    f=  3.25352D+00    |proj g|=  4.19624D-06

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    1      5     12      1     0     0   4.196D-06   3.254D+00
  F =   3.2535228522888620

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
 This problem is unconstrained.

The summary output includes all parameters, but indicates that the first three were fixed (and so were not estimated).

[10]:
print(res_restricted.summary())
                            Unobserved Components Results
=====================================================================================
Dep. Variable:                      CPIAPPNS   No. Observations:                  538
Model:                    local linear trend   Log Likelihood               -1750.395
                   + stochastic seasonal(12)   AIC                           3502.791
Date:                       Thu, 14 Nov 2024   BIC                           3507.054
Time:                               17:19:58   HQIC                          3504.460
Sample:                           01-01-1980
                                - 10-01-2024
Covariance Type:                         opg
============================================================================================
                               coef    std err          z      P>|z|      [0.025      0.975]
--------------------------------------------------------------------------------------------
sigma2.irregular (fixed)     1.0000        nan        nan        nan         nan         nan
sigma2.level (fixed)              0        nan        nan        nan         nan         nan
sigma2.trend (fixed)      7.716e-06        nan        nan        nan         nan         nan
sigma2.seasonal              0.0933      0.007     12.667      0.000       0.079       0.108
===================================================================================
Ljung-Box (L1) (Q):                 460.73   Jarque-Bera (JB):                38.76
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               2.39   Skew:                             0.30
Prob(H) (two-sided):                  0.00   Kurtosis:                         4.19
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
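
Besides reading the summary, we can check programmatically which parameters were held fixed. This assumes the results object exposes a fixed_params attribute, as recent statsmodels versions do:

# Names of the parameters that were fixed rather than estimated
# (assumed attribute; available in recent statsmodels versions)
print(res_restricted.fixed_params)

# The full parameter vector, combining fixed and estimated values
print(res_restricted.params)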

For comparison, we construct the unrestricted maximum likelihood estimates (MLE). In this case, the estimate of the level will no longer correspond to the HP filter concept.

[11]:
res_unrestricted = mod.fit()
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =            4     M =           10

At X0         0 variables are exactly at the bounds

At iterate    0    f=  3.63691D+00    |proj g|=  1.07281D-01
 This problem is unconstrained.

At iterate    5    f=  2.04080D+00    |proj g|=  4.17124D-01

At iterate   10    f=  1.54941D+00    |proj g|=  5.41227D-01

At iterate   15    f=  1.53452D+00    |proj g|=  8.08732D-02

At iterate   20    f=  1.52434D+00    |proj g|=  4.37223D-01

At iterate   25    f=  1.40067D+00    |proj g|=  2.38993D-01

At iterate   30    f=  1.39186D+00    |proj g|=  1.15189D-02

At iterate   35    f=  1.39185D+00    |proj g|=  5.74973D-05

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
    4     35     63      1     0     0   5.750D-05   1.392D+00
  F =   1.3918493989060698

CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH

Finally, we can retrieve the smoothed estimates of the trend and seasonal components.

[12]:
# Construct the smoothed level estimates
unrestricted_trend = pd.Series(res_unrestricted.level.smoothed, index=endog.index)
restricted_trend = pd.Series(res_restricted.level.smoothed, index=endog.index)

# Construct the smoothed estimates of the seasonal pattern
unrestricted_seasonal = pd.Series(res_unrestricted.seasonal.smoothed, index=endog.index)
restricted_seasonal = pd.Series(res_restricted.seasonal.smoothed, index=endog.index)

Comparing the estimated levels, it is clear that the seasonal UCM with fixed parameters still produces a trend that corresponds very closely (although no longer exactly) to the HP filter output.

Meanwhile, the estimated level from the model with no parameter restrictions (the MLE model) is much less smooth than either of these.

[13]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(unrestricted_trend, label='MLE, with seasonal')
ax.plot(restricted_trend, label='Fixed parameters, with seasonal')
ax.plot(hp_trend, label='HP filter, no seasonal')
ax.legend();
[Figure: estimated trends from the MLE model with seasonal, the fixed-parameter model with seasonal, and the HP filter]
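
The visual impression of smoothness can also be quantified. One rough sketch is to compare the mean squared second difference of each trend (smaller values indicate a smoother path):

# Mean squared second difference as a crude roughness measure
for name, trend in [('HP filter', hp_trend.squeeze()),
                    ('Fixed parameters', restricted_trend),
                    ('MLE', unrestricted_trend)]:
    roughness = np.mean(np.diff(np.asarray(trend), n=2) ** 2)
    print(f'{name}: {roughness:.4g}')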

Finally, the UCM with the parameter restrictions is still able to pick up the time-varying seasonal component quite well.

[14]:
fig, ax = plt.subplots(figsize=(15, 3))

ax.plot(unrestricted_seasonal, label='MLE')
ax.plot(restricted_seasonal, label='Fixed parameters')
ax.legend();
[Figure: smoothed seasonal component from the MLE and fixed-parameter models]
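
To see how the seasonal amplitude changes over the sample, one option (a sketch, not part of the model itself) is to plot a rolling 12-month standard deviation of the smoothed seasonal components:

# Rolling 12-month standard deviation as a rough measure of how the
# seasonal amplitude evolves over time
fig, ax = plt.subplots(figsize=(15, 3))
unrestricted_seasonal.rolling(12).std().plot(ax=ax, label='MLE')
restricted_seasonal.rolling(12).std().plot(ax=ax, label='Fixed parameters')
ax.legend();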
