statsmodels.othermod.betareg.BetaResults.t_test_pairwise
BetaResults.t_test_pairwise(term_name, method='hs', alpha=0.05, factor_labels=None)

Perform pairwise t_test with multiple testing corrected p-values.
This uses the formula design_info encoding contrast matrix and should work for all encodings of a main effect.
Parameters:
- term_name : str
  The name of the term for which pairwise comparisons are computed. Term names for categorical effects are created by patsy and correspond to the main part of the exog names.
- method : str or list[str]
  The multiple testing p-value correction to apply. The default is 'hs'. See statsmodels.stats.multitest.multipletests for the available methods.
- alpha : float
  The significance level for the multiple testing reject decision.
- factor_labels : list[str] or None
  Labels for the factor levels used in the pairwise labels. If not provided, the labels from the formula design_info are used.
Returns:
MultiCompResult
The results are stored as attributes; the main attributes are the two listed below. Other attributes are added for debugging purposes or as background information.
- result_frame : pandas DataFrame with t_test results and multiple testing corrected p-values.
- contrasts : matrix of constraints of the null hypothesis in the t_test.
Notes
Status: experimental. Currently only checked for treatment coding with and without specified reference level.
Currently there are no multiple testing corrected confidence intervals available.
Examples
>>> res = ols("np.log(Days+1) ~ C(Weight) + C(Duration)", data).fit()
>>> pw = res.t_test_pairwise("C(Weight)")
>>> pw.result_frame
         coef   std err         t         P>|t|  Conf. Int. Low
2-1  0.632315  0.230003  2.749157  8.028083e-03        0.171563
3-1  1.302555  0.230003  5.663201  5.331513e-07        0.841803
3-2  0.670240  0.230003  2.914044  5.119126e-03        0.209488
     Conf. Int. Upp.  pvalue-hs  reject-hs
2-1         1.093067   0.010212       True
3-1         1.763307   0.000002       True
3-2         1.130992   0.010212       True
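The example above uses an OLS fit; the call is the same on a fitted beta regression. A minimal sketch, assuming a hypothetical DataFrame ``data`` with a response ``ratio`` bounded in (0, 1) and a categorical column ``group`` (the variable names are chosen for illustration only):

>>> from statsmodels.othermod.betareg import BetaModel
>>> mod = BetaModel.from_formula("ratio ~ C(group)", data)
>>> res = mod.fit()
>>> # pairwise comparisons of the levels of C(group), with Holm-Sidak ('hs')
>>> # corrected p-values and reject decisions at alpha=0.05
>>> pw = res.t_test_pairwise("C(group)", method="hs", alpha=0.05)
>>> pw.result_frame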