statsmodels.robust.robust_linear_model.RLM

class statsmodels.robust.robust_linear_model.RLM(endog, exog, M=None, missing='none', **kwargs)[source]

Robust Linear Model

Estimate a robust linear model via iteratively reweighted least squares given a robust criterion estimator.
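
The iteratively reweighted least squares (IRLS) idea can be illustrated in a few lines of NumPy. The following is only a hedged sketch of the algorithm with Huber-type weights and a MAD scale estimate; it is not the library's implementation, which adds convergence checks, alternative scale estimators, and the full set of norms:

>>> import numpy as np
>>> def huber_weights(u, t=1.345):
...     # weight 1 inside the threshold, t/|u| outside (Huber psi(u)/u)
...     au = np.abs(u)
...     return np.where(au <= t, 1.0, t / np.maximum(au, 1e-12))
>>> def irls_sketch(y, X, n_iter=50):
...     beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
...     for _ in range(n_iter):
...         resid = y - X @ beta
...         scale = np.median(np.abs(resid - np.median(resid))) / 0.6745  # MAD scale
...         scale = max(scale, np.finfo(float).eps)      # guard against zero scale
...         w = huber_weights(resid / scale)
...         WX = X * w[:, None]                          # row-weighted design
...         beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted least squares step
...     return beta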

Parameters:
endog : array_like

A 1-d endogenous response variable. The dependent variable.

exog : array_like

A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.

M : statsmodels.robust.norms.RobustNorm, optional

The robust criterion function for downweighting outliers. The current options are LeastSquares, HuberT, RamsayE, AndrewWave, TrimmedMean, Hampel, and TukeyBiweight. The default is HuberT(). See statsmodels.robust.norms for more information; a short construction sketch follows this parameter list.

missing : str

Available options are ‘none’, ‘drop’, and ‘raise’. If ‘none’, no nan checking is done. If ‘drop’, any observations with nans are dropped. If ‘raise’, an error is raised. Default is ‘none’.
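
The norm and the missing-data policy are both fixed at construction time. Below is a minimal sketch using simulated data (the arrays, seed, and coefficient values are illustrative only and not part of this page):

>>> import numpy as np
>>> import statsmodels.api as sm
>>> rng = np.random.default_rng(12345)
>>> X = sm.add_constant(rng.normal(size=(50, 2)))          # add an intercept column
>>> y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)
>>> y[0] = np.nan                                          # one missing response value
>>> # missing='drop' removes the nan row; TukeyBiweight downweights outliers
>>> robust_model = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight(),
...                       missing='drop')
>>> robust_results = robust_model.fit()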

Attributes:
df_model : float

The degrees of freedom of the model. The number of regressors p less one for the intercept. Note that the reported model degrees of freedom does not count the intercept as a regressor, though the model is assumed to have an intercept.

df_resid : float

The residual degrees of freedom. The number of observations n less the number of regressors p. Note that here p does include the intercept, which counts as using one degree of freedom.

endog : ndarray

See above. Note that endog is a reference to the data so that if data is already an array and it is changed, then endog changes as well.

exog : ndarray

See above. Note that exog is a reference to the data so that if data is already an array and it is changed, then exog changes as well.

M : statsmodels.robust.norms.RobustNorm

See above. The instantiated robust criterion (norm) estimator.

nobs : float

The number of observations n.

pinv_wexog : ndarray

The pseudoinverse of the design / exogenous data array. Note that RLM has no whiten method, so this is just the pseudoinverse of the design; see the check sketched after this list.

normalized_cov_params : ndarray

The p x p normalized covariance of the design / exogenous data. This is approximately equal to (X.T X)^(-1).
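
As a rough consistency check of the two attributes above, they can be compared against direct NumPy computations. This is a sketch under the assumption that both attributes are populated when the model is constructed; agreement is only up to floating-point tolerance:

>>> import numpy as np
>>> import statsmodels.api as sm
>>> rng = np.random.default_rng(0)
>>> X = sm.add_constant(rng.normal(size=(30, 2)))
>>> y = rng.normal(size=30)
>>> model = sm.RLM(y, X, M=sm.robust.norms.HuberT())
>>> np.allclose(model.pinv_wexog, np.linalg.pinv(model.exog))
True
>>> np.allclose(model.normalized_cov_params,
...             np.linalg.inv(model.exog.T @ model.exog))
True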

Examples

>>> import statsmodels.api as sm
>>> data = sm.datasets.stackloss.load()
>>> data.exog = sm.add_constant(data.exog)
>>> rlm_model = sm.RLM(data.endog, data.exog,
...                    M=sm.robust.norms.HuberT())
>>> rlm_results = rlm_model.fit()
>>> rlm_results.params
array([  0.82938433,   0.92606597,  -0.12784672, -41.02649835])
>>> rlm_results.bse
array([ 0.11100521,  0.30293016,  0.12864961,  9.79189854])
>>> rlm_results_HC2 = rlm_model.fit(cov="H2")
>>> rlm_results_HC2.params
array([  0.82938433,   0.92606597,  -0.12784672, -41.02649835])
>>> rlm_results_HC2.bse
array([ 0.11945975,  0.32235497,  0.11796313,  9.08950419])
>>> mod = sm.RLM(data.endog, data.exog, M=sm.robust.norms.Hampel())
>>> rlm_hamp_hub = mod.fit(scale_est=sm.robust.scale.HuberScale())
>>> rlm_hamp_hub.params
array([  0.73175452,   1.25082038,  -0.14794399, -40.27122257])
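
Continuing the example above, the results object also exposes the final IRLS case weights and the in-sample fit. The attribute names below (weights, fittedvalues) are assumed from the RLMResults API; treat this as a sketch:

>>> irls_weights = rlm_results.weights                 # final IRLS weights, one per observation
>>> in_sample = rlm_results.fittedvalues               # fitted values at the robust estimates
>>> same_fit = rlm_model.predict(rlm_results.params)   # equivalent, via the model's predict()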

Methods

deviance(tmp_results)

Returns the (unnormalized) log-likelihood from the M estimator.

fit([maxiter, tol, scale_est, init, cov, ...])

Fits the model using iteratively reweighted least squares.

from_formula(formula, data[, subset, drop_cols])

Create a Model from a formula and dataframe; see the formula sketch after this method list.

hessian(params)

The Hessian matrix of the model.

information(params)

Fisher information matrix of model.

initialize()

Initialize (possibly re-initialize) a Model instance.

loglike(params)

Log-likelihood of model.

predict(params[, exog])

Return linear predicted values from a design matrix.

score(params)

Score vector of model.
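
For the formula interface and prediction on new data, a hedged sketch follows. It assumes the pandas form of the stackloss dataset with columns named STACKLOSS, AIRFLOW, WATERTEMP, and ACIDCONC; adjust the names if the dataset differs:

>>> import statsmodels.api as sm
>>> df = sm.datasets.stackloss.load_pandas().data
>>> formula_mod = sm.RLM.from_formula(
...     'STACKLOSS ~ AIRFLOW + WATERTEMP + ACIDCONC', data=df)
>>> formula_res = formula_mod.fit()
>>> first_five = formula_res.predict(df.iloc[:5])   # predictions for the first five rows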

Properties

endog_names

Names of endogenous variables.

exog_names

Names of exogenous variables.

