Statistics (stats)
This section collects various statistical tests and tools. Some can be used independently of any model; some are intended as extensions to the models and model results.
API Warning: The functions and objects in this category are spread out across various modules and might still be moved around. We expect that in the future the statistical tests will return class instances with more informative reporting instead of only the raw numbers.
Residual Diagnostics and Specification Tests
- Calculates the Durbin-Watson statistic.
- The Jarque-Bera test of normality.
- Omnibus test for normality
- Calculate the medcouple robust measure of skew.
- Calculates the four skewness measures in Kim & White
- Calculates the four kurtosis measures in Kim & White
- Calculates the expected value of the robust kurtosis measures in Kim and White assuming the data are normally distributed.
- Breusch-Godfrey Lagrange Multiplier tests for residual autocorrelation.
- Ljung-Box test of autocorrelation in residuals.
- Lagrange Multiplier tests for autocorrelation.
- Cusum test for parameter stability based on OLS residuals.
- Test for model stability, breaks in parameters for OLS, Hansen 1992
- Calculate recursive OLS with residuals and Cusum test statistic
- Compute the Cox test for non-nested models
- Davidson-MacKinnon encompassing test for comparing non-nested models
- Compute the J-test for non-nested models
- Engle’s Test for Autoregressive Conditional Heteroscedasticity (ARCH).
- Breusch-Pagan Lagrange Multiplier test for heteroscedasticity
- Goldfeld-Quandt homoskedasticity test.
- White’s Lagrange Multiplier Test for Heteroscedasticity.
- White’s Two-Moment Specification Test
- Harvey-Collier test for linearity
- Lagrange multiplier test for linearity against functional alternative
- Rainbow test for linearity
- Ramsey’s RESET test for neglected nonlinearity
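As a quick illustration (a minimal sketch, not part of the original table), most of these diagnostics take the residuals, and in some cases the exogenous regressors, of a fitted OLS model:

```python
# Minimal sketch: Ljung-Box and Breusch-Pagan diagnostics on simulated OLS residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox, het_breuschpagan

rng = np.random.default_rng(0)
exog = sm.add_constant(rng.normal(size=(200, 2)))
endog = exog @ [1.0, 0.5, -0.3] + rng.normal(size=200)
res = sm.OLS(endog, exog).fit()

# Ljung-Box test for autocorrelation in the residuals at lags 5 and 10
lb = acorr_ljungbox(res.resid, lags=[5, 10])

# Breusch-Pagan Lagrange multiplier test for heteroscedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, res.model.exog)
```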
Outliers and influence measures
- class to calculate outlier and influence measures for OLS result
- Influence and outlier measures (experimental)
- Local Influence and outlier measures (experimental)
- variance inflation factor, VIF, for one exogenous variable
See also the notes on regression diagnostics.
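For example (a minimal sketch with simulated data), the influence measures and the variance inflation factor are typically applied to a fitted OLS model and its design matrix:

```python
# Minimal sketch: OLS influence measures and per-column variance inflation factors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence, variance_inflation_factor

rng = np.random.default_rng(0)
exog = sm.add_constant(rng.normal(size=(100, 3)))
endog = exog @ [1.0, 0.2, -0.4, 0.1] + rng.normal(size=100)
res = sm.OLS(endog, exog).fit()

infl = OLSInfluence(res)
frame = infl.summary_frame()        # leverage, studentized residuals, Cook's distance, ...

# VIF is computed one regressor at a time against the remaining columns
vifs = [variance_inflation_factor(exog, i) for i in range(exog.shape[1])]
```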
Sandwich Robust Covariances
The following functions calculate covariance matrices and standard errors for the parameter estimates that are robust to heteroscedasticity and autocorrelation in the errors. Similar to the methods that are available for the LinearModelResults, these methods are designed for use with OLS.
- heteroscedasticity and autocorrelation robust covariance matrix (Newey-West)
- Panel HAC robust covariance matrix
- Driscoll and Kraay Panel robust covariance matrix
- cluster robust covariance matrix
- cluster robust covariance matrix for two groups/clusters
- heteroscedasticity robust covariance matrix (White)
The following are standalone versions of the heteroscedasticity robust standard errors attached to LinearModelResults:
- See statsmodels.RegressionResults (the HC0 through HC3 variants)
- get standard deviation from covariance matrix
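A minimal usage sketch (the data are simulated and the lag length is arbitrary): the standalone functions take a fitted OLS results instance, and the same covariances can be requested directly at fit time.

```python
# Minimal sketch: Newey-West (HAC) robust standard errors for an OLS fit.
import numpy as np
import statsmodels.api as sm
import statsmodels.stats.sandwich_covariance as sw

rng = np.random.default_rng(0)
exog = sm.add_constant(rng.normal(size=(250, 2)))
endog = exog @ [1.0, 0.5, -0.3] + rng.normal(size=250)
res = sm.OLS(endog, exog).fit()

cov = sw.cov_hac(res, nlags=4)      # HAC (Newey-West) covariance matrix
bse = sw.se_cov(cov)                # corresponding robust standard errors

# equivalently, request the robust covariance when fitting
res_hac = sm.OLS(endog, exog).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
```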
Goodness of Fit Tests and Measures
Some tests for goodness of fit for univariate distributions:
- Calculates power discrepancy, a class of goodness-of-fit tests as a measure of discrepancy between observed and expected data.
- perform chisquare test for random sample of a discrete distribution
- get bins for chisquare type gof tests for a discrete distribution
- effect size for a chisquare goodness-of-fit test
- Calculate the Anderson-Darling a2 statistic.
- Anderson-Darling test for normal distribution unknown mean and variance.
- Test assumed normal or exponential distribution using Lilliefors’ test (listed under several aliases).
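For instance (a minimal sketch): the normality tests take a one-dimensional sample and return a test statistic and p-value.

```python
# Minimal sketch: Lilliefors and Anderson-Darling normality tests on a random sample.
import numpy as np
from statsmodels.stats.diagnostic import lilliefors, normal_ad

rng = np.random.default_rng(0)
x = rng.normal(size=500)

ks_stat, ks_pvalue = lilliefors(x, dist="norm")   # Lilliefors (Kolmogorov-Smirnov type)
ad2, ad_pvalue = normal_ad(x)                     # Anderson-Darling, unknown mean/variance
```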
Non-Parametric Tests
- McNemar test
- Test for symmetry of a (k, k) square contingency table
- chisquare test for equality of median/location
- use runs test on binary discretized data above/below cutoff
- Wald-Wolfowitz runstest for two samples
- Cochran’s Q test for identical effect of k treatments
- class for runs in a binary sequence
- Signs test
Descriptive Statistics
- Extended descriptive statistics for data
- Extended descriptive statistics for data
Interrater Reliability and Agreement
The main function that statsmodels currently has available for interrater agreement measures and tests is Cohen’s Kappa. Fleiss’ Kappa is currently only implemented as a measure, without associated results statistics.
- Compute Cohen’s kappa with variance and equal-zero test
- Fleiss’ and Randolph’s kappa multi-rater agreement measure
- convert raw data with shape (subject, rater) to (rater1, rater2)
- convert raw data with shape (subject, rater) to (subject, cat_counts)
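A minimal sketch of the two agreement measures (the data below are random and only illustrate the expected input shapes):

```python
# Minimal sketch: Cohen's kappa from a rater1 x rater2 table, Fleiss' kappa from raw codes.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, cohens_kappa, fleiss_kappa

table = np.array([[20, 5],
                  [4, 21]])                 # (rater1, rater2) contingency table
kappa_results = cohens_kappa(table)         # kappa, its variance, and a z-test

raw = np.random.default_rng(0).integers(0, 3, size=(30, 4))   # (subject, rater) codes
counts, categories = aggregate_raters(raw)  # convert to (subject, category counts)
fk = fleiss_kappa(counts, method="fleiss")
```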
Multiple Tests and Multiple Comparison Procedures
multipletests is a function for p-value correction, which also includes correction based on the false discovery rate (fdrcorrection). tukeyhsd performs simultaneous testing for the comparison of (independent) means. These three functions are verified. GroupsStats and MultiComparison are convenience classes for multiple comparisons similar to one-way ANOVA, but are still in development.
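A minimal sketch of the p-value correction (the p-values are made up):

```python
# Minimal sketch: Benjamini-Hochberg FDR correction for a vector of p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
reject, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```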
- Test results and p-value correction for multiple tests
- pvalue correction for false discovery rate
- statistics by groups (another version)
- Tests for multiple comparisons
- Results from Tukey HSD test, with additional plot methods
- Calculate all pairwise comparisons with TukeyHSD confidence intervals
- Calculate local FDR values for a list of Z-scores.
- (iterated) two stage linear step-up procedure with estimation of number of true hypotheses
- Estimate a Gaussian distribution for the null Z-scores.
- Control FDR in a regression procedure.
- Marginal correlation effect sizes for FDR control.
- OLS regression for knockoff analysis.
- Forward selection effect sizes for FDR control.
- OLS regression for knockoff analysis.
- Use any regression model for Regression FDR analysis.
The following functions are not (yet) public:
- correction factor for variance with unequal sample sizes for all pairs
- return joint variance from samples with unequal variances and unequal sample sizes for all pairs
- correction factor for variance with unequal sample sizes
- return joint variance from samples with unequal variances and unequal sample sizes
- a class for step down methods
- simple ordered sequential comparison of means
- pairwise distance matrix, outsourced from tukeyhsd
- no frills empirical cdf used in fdrcorrection
- return critical values for Tukey’s HSD (Q)
- recursively check all pairs of vals for minimum distance
- find all up zero crossings and return the index of the highest
- find all up zero crossings and return the index of the highest
- MonteCarlo to test fdrcorrection
- create random draws from equi-correlated multivariate normal distribution
- rankdata, equivalent to scipy.stats.rankdata
- reference line for rejection in multiple tests
- extract a partition from a list of tuples
- remove sets that are subsets of another set from a list of tuples
- should be equivalent of scipy.stats.tiecorrect
Basic Statistics and t-Tests with frequency weights
Besides basic statistics, like mean, variance, covariance and correlation for data with case weights, the classes here provide one- and two-sample tests for means. The t-tests have more options than those in scipy.stats, but are more restrictive in the shape of the arrays. Confidence intervals for means are provided based on the same assumptions as the t-tests.
Additionally, tests for equivalence of means are available for one sample and for two samples, either paired or independent. These tests are based on TOST, two one-sided tests, which have as the null hypothesis that the means are not “close” to each other.
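A minimal sketch (simulated data, integer frequency weights) of the one- and two-sample interfaces:

```python
# Minimal sketch: weighted one-sample t-test and an unequal-variance two-sample t-test.
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW, ttest_ind

rng = np.random.default_rng(0)
x1 = rng.normal(loc=0.2, size=50)
x2 = rng.normal(loc=0.0, size=60)
weights = rng.integers(1, 4, size=50)       # frequency (case) weights for x1

d1 = DescrStatsW(x1, weights=weights)
tstat, pvalue, df = d1.ttest_mean(0)        # H0: weighted mean equals 0
lower, upper = d1.tconfint_mean()           # confidence interval for the mean

tstat2, pvalue2, df2 = ttest_ind(x1, x2, usevar="unequal")
```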
- Descriptive statistics and tests with weights for case weights
- class for two sample comparison
- ttest independent sample
- test of (non-)equivalence for two independent samples
- test of (non-)equivalence for two dependent, paired samples
- test for mean based on normal distribution, one or two samples
- Equivalence test based on normal distribution
- confidence interval based on normal distribution z-test
weightstats also contains tests and confidence intervals based on summary data:
- generic t-confint based on summary statistic
- generic ttest based on summary statistic
- generic normal-confint based on summary statistic
- generic (normal) z-test based on summary statistic
- generic (normal) z-test based on summary statistic
Power and Sample Size Calculations
The power module currently implements power and sample size calculations for the t-tests, normal based test, F-tests and Chisquare goodness of fit test. The implementation is class based, but the module also provides three shortcut functions, tt_solve_power, tt_ind_solve_power and zt_ind_solve_power, to solve for any one of the parameters of the power equations.
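A minimal sketch of the shortcut and class-based interfaces (the effect size, alpha and power values are arbitrary):

```python
# Minimal sketch: solve for sample size, then for power, of a two-sample t-test.
from statsmodels.stats.power import TTestIndPower, tt_ind_solve_power

# sample size per group for a medium effect at 5% significance and 80% power
nobs1 = tt_ind_solve_power(effect_size=0.5, alpha=0.05, power=0.8, ratio=1.0)

# the same calculation through the class interface, here solving for power instead
power = TTestIndPower().solve_power(effect_size=0.5, nobs1=64, alpha=0.05, ratio=1.0)
```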
- Statistical Power calculations for t-test for two independent samples
- Statistical Power calculations for one sample or paired sample t-test
- Statistical Power calculations for one sample chisquare test
- Statistical Power calculations for z-test for two independent samples.
- Statistical Power calculations for F-test for one factor balanced ANOVA
- Statistical Power calculations for generic F-test
- Calculate power of a normal distributed test statistic
- explicit sample size computation if only one tail is relevant
- solve for any one parameter of the power of a one sample t-test
- solve for any one parameter of the power of a two sample t-test
- solve for any one parameter of the power of a two sample z-test
Proportion
Also available are hypothesis tests, confidence intervals and effect sizes for proportions that can be used with NormalIndPower.
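A minimal sketch (counts and proportions are arbitrary) combining a confidence interval with the effect size and NormalIndPower:

```python
# Minimal sketch: Wilson confidence interval and sample size for comparing two proportions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_confint, proportion_effectsize

lower, upper = proportion_confint(count=36, nobs=100, alpha=0.05, method="wilson")

effect_size = proportion_effectsize(0.35, 0.25)   # Cohen-style effect size for two proportions
nobs1 = NormalIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
```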
- confidence interval for a binomial proportion
- Effect size for a test comparing two proportions
- Perform a test that the probability of success is p.
- rejection region for binomial test for one sample proportion
- exact TOST test for one proportion using binomial distribution
- rejection region for binomial TOST
- Confidence intervals for multinomial proportions.
- Test for proportions based on normal (z) test
- Equivalence test based on normal distribution
- test for proportions based on chisquare test
- chisquare test of proportions for all pairs of k samples
- chisquare test of proportions for pairs of k samples compared to control
- Effect size for a test comparing two proportions
- Power of proportions equivalence test based on normal distribution
- find sample size to get desired confidence interval length
Statistics for two independent samples. Status: experimental, API might change, added in 0.12.
- Hypothesis test for comparing two independent proportions
- Confidence intervals for comparing two independent proportions
- power for ztest that two independent proportions are equal
- Equivalence test based on two one-sided test_proportions_2indep
- required sample size assuming normal distribution based on one tail
- score_test for two independent proportions
- Compute score confidence interval by inverting score test
Rates
Statistical functions for rates. This currently includes hypothesis tests for two independent samples.
Status: experimental, API might change, added in 0.12
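A minimal sketch; the function name and the positional argument layout (count1, exposure1, count2, exposure2) follow the 0.12-era API and should be treated as assumptions.

```python
# Minimal sketch (API names assumed): ratio test for two independent Poisson rates.
from statsmodels.stats.rates import test_poisson_2indep

# arguments are (count1, exposure1, count2, exposure2); values are made up
res = test_poisson_2indep(60, 514.0, 30, 543.0, method="score")
print(res.statistic, res.pvalue)
```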
- test for ratio of two sample Poisson intensities
- E-test for ratio of two sample Poisson rates
- Equivalence test based on two one-sided test_proportions_2indep
Multivariate
Statistical functions for multivariate samples.
This includes hypothesis tests and confidence intervals for the mean of a sample of multivariate observations, and hypothesis tests for the structure of a covariance matrix.
Status: experimental, API might change, added in 0.12
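A minimal sketch, with the function name taken as an assumption from the 0.12-era API: Hotelling's one-sample test that the multivariate mean equals a hypothesized vector.

```python
# Minimal sketch (API name assumed): one-sample Hotelling's T^2 test for a multivariate mean.
import numpy as np
from statsmodels.stats.multivariate import test_mvmean

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 3)) + [0.1, 0.0, -0.2]   # 50 observations of a 3-vector

res = test_mvmean(data, mean_null=np.zeros(3))
print(res.statistic, res.pvalue)
```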
- Hotelling’s test for multivariate mean in one sample
- Confidence interval for linear transformation of a multivariate mean
- Confidence interval for linear transformation of a multivariate mean
- Hotelling’s test for multivariate mean in two independent samples
- One sample hypothesis test for covariance equal to null covariance
- One sample hypothesis test that covariance is block diagonal.
- One sample hypothesis test that covariance matrix is diagonal matrix.
- Multiple sample hypothesis test that covariance matrices are equal.
- One sample hypothesis test that covariance matrix is spherical
Oneway Anova
Hypothesis tests, confidence intervals and effect sizes for oneway analysis of k samples.
Status: experimental, API might change, added in 0.12
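A minimal sketch (argument names assumed from the 0.12-era API): a Welch-type oneway Anova that allows unequal variances across the k samples.

```python
# Minimal sketch (argument names assumed): oneway Anova with unequal variances (Welch).
import numpy as np
from statsmodels.stats.oneway import anova_oneway

rng = np.random.default_rng(0)
samples = [rng.normal(loc=m, scale=s, size=30)
           for m, s in [(0.0, 1.0), (0.3, 1.5), (0.5, 0.8)]]

res = anova_oneway(samples, use_var="unequal", welch_correction=True)
print(res.statistic, res.pvalue)
```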
- Oneway Anova
- Oneway Anova based on summary statistics
- equivalence test for oneway anova (Wellek’s Anova)
- Equivalence test for oneway anova (Wellek and extensions)
- Power of oneway equivalence test
- Empirical power of oneway equivalence test
- Oneway Anova test for equal scale, variance or dispersion
- Oneway Anova test for equivalence of scale, variance or dispersion
- Confidence interval for effect size in oneway anova for F distribution
- Confidence interval for noncentrality parameter in F-test
- Convert squared effect sizes in f family
- Effect size corresponding to Cohen’s f = nc / nobs for oneway anova
- Convert Cohen’s f-squared to Wellek’s effect size (sqrt)
- Convert F statistic to Wellek’s effect size eps squared
- Convert Wellek’s effect size (sqrt) to Cohen’s f-squared
- Compute anova effect size from F-statistic
- Transform data for variance comparison for Levene type tests
- Simulate Power for oneway equivalence test (Wellek’s Anova)
Robust, Trimmed Statistics
Statistics for samples that are trimmed at a fixed fraction. This includes the class TrimmedMean for one-sample statistics; it is used in stats.oneway for the trimmed (“Yuen”) Anova.
Status: experimental, API might change, added in 0.12
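A minimal sketch; the module path, constructor signature and attribute names used here are assumptions and should be checked against the installed version.

```python
# Minimal sketch (class location and attribute names assumed): 10% trimmed one-sample stats.
import numpy as np
from statsmodels.stats.robust_compare import TrimmedMean

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(size=95), [8.0, 9.0, -7.0, 10.0, 12.0]])  # with outliers

tm = TrimmedMean(x, 0.1)            # trim 10% from each tail
print(tm.mean_trimmed)              # trimmed mean
print(tm.ttest_mean(0))             # t-test that the trimmed mean equals 0
```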
- class for trimmed and winsorized one sample statistics
- Transform data for variance comparison for Levene type tests
- Return mean of array after trimming observations from both lower and upper tails.
- Slices off a proportion of items from both ends of an array.
Moment Helpers
When there are missing values, it is possible that a correlation or covariance matrix is not positive semi-definite. The following functions can be used to find a correlation or covariance matrix that is positive definite and close to the original matrix.
- Find a near correlation matrix that is positive semi-definite
- Find the nearest correlation matrix that is positive semi-definite.
- Find the nearest correlation matrix with factor structure to a given square matrix.
- Construct a sparse matrix containing the thresholded row-wise correlation matrix from a data array.
- Find the nearest covariance matrix that is positive (semi-) definite
- Approximate an arbitrary square matrix with a factor-structured matrix of the form k*I + XX’.
- Representation of a positive semidefinite matrix in factored form.
- Use kernel averaging to estimate a multivariate covariance function.
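A minimal sketch: repairing an indefinite correlation matrix (as can arise from pairwise deletion of missing values) so that it becomes positive semi-definite.

```python
# Minimal sketch: nearest positive semi-definite correlation/covariance matrix.
import numpy as np
from statsmodels.stats.correlation_tools import corr_nearest, cov_nearest

corr = np.array([[ 1.0, 0.9, -0.9],
                 [ 0.9, 1.0,  0.9],
                 [-0.9, 0.9,  1.0]])        # not positive semi-definite

corr_psd = corr_nearest(corr, threshold=1e-15)
cov_psd = cov_nearest(corr, method="nearest")   # same idea for a covariance input
```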
These are utility functions to convert between central and non-central moments, skew, kurtosis and cumulants.
- convert non-central moments to cumulants; the recursive formula produces as many cumulants as moments
- convert central to non-central moments; uses recursive formula, optionally adjusts first moment to return mean
- convert central moments to mean, variance, skew, kurtosis
- convert non-central moments to cumulants; the recursive formula produces as many cumulants as moments
- convert non-central to central moments; uses recursive formula, optionally adjusts first moment to return mean
- convert central moments to mean, variance, skew, kurtosis
- convert mean, variance, skew, kurtosis to central moments
- convert mean, variance, skew, kurtosis to non-central moments
- convert covariance matrix to correlation matrix
- convert correlation matrix to covariance matrix given standard deviation
- get standard deviation from covariance matrix
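A minimal sketch of the covariance/correlation converters listed above:

```python
# Minimal sketch: round-trip between a covariance matrix and its correlation matrix.
import numpy as np
from statsmodels.stats.moment_helpers import corr2cov, cov2corr

cov = np.array([[4.0, 1.2],
                [1.2, 9.0]])
corr = cov2corr(cov)                     # correlation matrix
std = np.sqrt(np.diag(cov))              # standard deviations
cov_back = corr2cov(corr, std)           # recovers the original covariance matrix
```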
Mediation Analysis
Mediation analysis focuses on the relationships among three key variables: an ‘outcome’, a ‘treatment’, and a ‘mediator’. Since mediation analysis is a form of causal inference, there are several assumptions involved that are difficult or impossible to verify. Ideally, mediation analysis is conducted in the context of an experiment in which the treatment is randomly assigned. It is also common to conduct mediation analyses using observational data, in which case the treatment may be thought of as an ‘exposure’. The assumptions behind mediation analysis are even more difficult to verify in an observational setting.
- Conduct a mediation analysis.
- A class for holding the results of a mediation analysis.
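A minimal sketch with simulated data (the variable names and formulas are illustrative only): the outcome and mediator models are passed unfitted, together with the names of the exposure and mediator variables.

```python
# Minimal sketch: parametric mediation analysis with simulated treatment/mediator/outcome data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(0)
n = 300
treat = rng.integers(0, 2, size=n)
mediator = 0.5 * treat + rng.normal(size=n)
outcome = 0.3 * treat + 0.7 * mediator + rng.normal(size=n)
data = pd.DataFrame({"outcome": outcome, "mediator": mediator, "treat": treat})

outcome_model = sm.OLS.from_formula("outcome ~ treat + mediator", data)
mediator_model = sm.OLS.from_formula("mediator ~ treat", data)

med = Mediation(outcome_model, mediator_model, "treat", "mediator")
print(med.fit(n_rep=200).summary())      # direct, indirect (ACME) and total effects
```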
Oaxaca-Blinder Decomposition
The Oaxaca-Blinder (or Blinder-Oaxaca, as some call it) decomposition attempts to explain gaps in the means of groups. It uses the linear models of two given regression equations to show what part of the gap is explained by the regression coefficients and known data, and what part remains unexplained using the same data. There are two types of Oaxaca-Blinder decompositions, the two-fold and the three-fold, both of which can be and are used in the economics literature to discuss differences in groups. This method helps classify discrimination or unobserved effects. This function attempts to port the functionality of the oaxaca command in Stata to Python.
- Class to perform Oaxaca-Blinder Decomposition.
- This class summarizes the fit of the OaxacaBlinder model.
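A minimal sketch with simulated data; the argument layout (the group indicator passed as a column index of exog, with hasconst=False so a constant is added) is an assumption about the interface and should be checked.

```python
# Minimal sketch (argument layout assumed): two-fold decomposition of a simulated wage gap.
import numpy as np
from statsmodels.stats.oaxaca import OaxacaBlinder

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, size=n)            # 0/1 group indicator
educ = rng.normal(12, 2, size=n)
wage = 5 + 0.8 * educ + 1.5 * group + rng.normal(size=n)

exog = np.column_stack([educ, group])         # the group column bifurcates the sample
model = OaxacaBlinder(wage, exog, 1, hasconst=False)
results = model.two_fold()                    # explained vs. unexplained components
results.summary()
```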
Distance Dependence Measures
Distance dependence measures and the Distance Covariance (dCov) test.
- The Distance Covariance (dCov) test
- Calculate various distance dependence statistics.
- Distance correlation.
- Distance covariance.
- Distance variance.
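A minimal sketch: the dCov test of independence and the distance correlation for two one-dimensional samples with a nonlinear relationship.

```python
# Minimal sketch: distance covariance test and distance correlation.
import numpy as np
from statsmodels.stats.dist_dependence_measures import (
    distance_correlation,
    distance_covariance_test,
)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = x ** 2 + 0.1 * rng.normal(size=200)       # dependent, but not linearly correlated

test_stat, pvalue, method = distance_covariance_test(x, y)
dcor = distance_correlation(x, y)
```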
Meta-Analysis
Functions for basic meta-analysis of a collection of sample statistics. Examples can be found in the accompanying notebook.
Status: experimental, API might change, added in 0.12
- Combine effect sizes using meta-analysis
- Effect sizes for two sample binomial proportions
- effect sizes for mean difference for use in meta-analysis
- Results from combined estimate of means or effect sizes
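A minimal sketch (the study-level numbers are made up, and the argument order of the effect-size helper is an assumption): compute per-study standardized mean differences and combine them with a random-effects estimate.

```python
# Minimal sketch (helper argument order assumed): random-effects meta-analysis of SMDs.
import numpy as np
from statsmodels.stats.meta_analysis import combine_effects, effectsize_smd

mean1 = np.array([0.52, 0.31, 0.64])
sd1 = np.array([1.0, 1.1, 0.9])
nobs1 = np.array([40, 35, 50])
mean2 = np.array([0.11, 0.02, 0.22])
sd2 = np.array([1.0, 1.0, 1.0])
nobs2 = np.array([40, 30, 45])

eff, var_eff = effectsize_smd(mean1, sd1, nobs1, mean2, sd2, nobs2)
res = combine_effects(eff, var_eff, method_re="iterated")
print(res.summary_frame())       # fixed- and random-effects combined estimates
```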
The module also includes internal functions to compute random effects variance.
- iterated method of moment estimate of between random effect variance
- Paule-Mandel iterative estimate of between random effect variance
- one-step method of moment estimate of between random effect variance