statsmodels.tsa.statespace.mlemodel.MLEModel.fit
MLEModel.fit(start_params=None, transformed=True, cov_type='opg', cov_kwds=None, method='lbfgs', maxiter=50, full_output=1, disp=5, callback=None, return_params=False, optim_score=None, optim_complex_step=None, optim_hessian=None, flags=None, **kwargs)

Fits the model by maximum likelihood via Kalman filter.
Parameters:
- start_params (array_like, optional) – Initial guess of the solution for the loglikelihood maximization. If None, the default is given by Model.start_params.
- transformed (boolean, optional) – Whether or not start_params is already transformed. Default is True.
- cov_type (str, optional) –
The cov_type keyword governs the method for calculating the covariance matrix of parameter estimates. Can be one of:
- ‘opg’ for the outer product of gradient estimator
- ‘oim’ for the observed information matrix estimator, calculated using the method of Harvey (1989)
- ‘approx’ for the observed information matrix estimator, calculated using a numerical approximation of the Hessian matrix.
- ‘robust’ for an approximate (quasi-maximum likelihood) covariance matrix that may be valid even in the presence of some misspecifications. Intermediate calculations use the ‘oim’ method.
- ‘robust_approx’ is the same as ‘robust’ except that the intermediate calculations use the ‘approx’ method.
- ‘none’ for no covariance matrix calculation.
- cov_kwds (dict or None, optional) –
A dictionary of arguments affecting covariance matrix computation.
For cov_type values ‘opg’, ‘oim’, ‘approx’, ‘robust’, and ‘robust_approx’:
- ‘approx_complex_step’ : boolean, optional - If True, numerical approximations are computed using complex-step methods. If False, numerical approximations are computed using finite difference methods. Default is True.
- ‘approx_centered’ : boolean, optional - If True, numerical approximations computed using finite difference methods use a centered approximation. Default is False.
- method (str, optional) –
The method determines which solver from scipy.optimize is used, and it can be chosen from among the following strings:
- ‘newton’ for Newton-Raphson, ‘nm’ for Nelder-Mead
- ‘bfgs’ for Broyden-Fletcher-Goldfarb-Shanno (BFGS)
- ‘lbfgs’ for limited-memory BFGS with optional box constraints
- ‘powell’ for modified Powell’s method
- ‘cg’ for conjugate gradient
- ‘ncg’ for Newton-conjugate gradient
- ‘basinhopping’ for global basin-hopping solver
The explicit arguments in fit are passed to the solver, with the exception of the basin-hopping solver. Each solver has several optional arguments that are not the same across solvers. See the notes section below (or scipy.optimize) for the available arguments and for the list of explicit arguments that the basin-hopping solver supports.
- maxiter (int, optional) – The maximum number of iterations to perform.
- full_output (boolean, optional) – Set to True to have all available output in the Results object’s mle_retvals attribute. The output is dependent on the solver. See LikelihoodModelResults notes section for more information.
- disp (boolean, optional) – Set to True to print convergence messages.
- callback (callable callback(xk), optional) – Called after each iteration, as callback(xk), where xk is the current parameter vector.
- return_params (boolean, optional) – Whether or not to return only the array of maximizing parameters. Default is False.
- optim_score ({'harvey', 'approx'} or None, optional) – The method by which the score vector is calculated. ‘harvey’ uses the method from Harvey (1989), ‘approx’ uses either finite difference or complex step differentiation depending upon the value of optim_complex_step, and None uses the built-in gradient approximation of the optimizer. Default is None. This keyword is only relevant if the optimization method uses the score.
- optim_complex_step (bool, optional) – Whether or not to use complex step differentiation when approximating the score; if False, finite difference approximation is used. Default is True. This keyword is only relevant if optim_score is set to ‘harvey’ or ‘approx’.
- optim_hessian ({'opg','oim','approx'}, optional) – The method by which the Hessian is numerically approximated. ‘opg’ uses outer product of gradients, ‘oim’ uses the information matrix formula from Harvey (1989), and ‘approx’ uses numerical approximation. This keyword is only relevant if the optimization method uses the Hessian matrix.
- **kwargs – Additional keyword arguments to pass to the optimizer.
Returns: results – The fitted results object (or, if return_params is True, only the array of maximizing parameters).
Return type: MLEResults
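A minimal usage sketch, assuming a SARIMAX model (one subclass of MLEModel) fit to simulated data; the series, model order, and optimizer settings below are illustrative assumptions only, not part of this entry.

>>> import numpy as np
>>> import statsmodels.api as sm
>>> np.random.seed(12345)
>>> endog = np.cumsum(np.random.normal(size=200))  # simulated series, illustrative only
>>> mod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')
>>> # Default fit: 'lbfgs' solver, 'opg' covariance matrix
>>> res = mod.fit(disp=False)
>>> # Alternative solver and covariance estimator, warm-started from the
>>> # previous (already transformed) estimates
>>> res_robust = mod.fit(start_params=res.params, transformed=True,
...                      method='nm', maxiter=500, cov_type='robust',
...                      disp=False)
>>> # return_params=True returns only the array of maximizing parameters
>>> params = mod.fit(return_params=True, disp=False)

With return_params=False (the default), the returned results object exposes params, cov_params(), and summary(), with standard errors computed according to the chosen cov_type.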