algorithms.statistics.models.glm

Module: algorithms.statistics.models.glm

Inheritance diagram for nipy.algorithms.statistics.models.glm:

Inheritance (summary of the rendered diagram): models.model.Model → models.model.LikelihoodModel → models.regression.OLSModel → models.regression.WLSModel → models.glm.Model

General linear models

Model

class nipy.algorithms.statistics.models.glm.Model(design, family=<nipy.algorithms.statistics.models.family.family.Gaussian object>)[source]

Bases: nipy.algorithms.statistics.models.regression.WLSModel, object

__init__(design, family=<nipy.algorithms.statistics.models.family.family.Gaussian object>)[source]
Parameters

design : array-like

The design matrix. Data are assumed to be column ordered, with observations in rows.

niter = 10
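A minimal usage sketch (illustrative, not taken from this page): construct a GLM with the default Gaussian family from a toy design matrix. The variable names below are assumptions, not part of the documented API.

    import numpy as np
    from nipy.algorithms.statistics.models.glm import Model

    rng = np.random.default_rng(0)
    design = np.column_stack([np.ones(20), rng.normal(size=20)])  # intercept + one regressor
    model = Model(design)  # family defaults to Gaussian, as in the signature above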
deviance(Y=None, results=None, scale=1.0)[source]

Return (unnormalized) log-likelihood for GLM.

Note that self.scale is interpreted as a variance in old_model, so we divide the residuals by its sqrt.

cont(tol=1e-05)[source]

Return True if iteration should continue, i.e. convergence has not yet been reached.
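A sketch of the kind of convergence test this describes, stopping once the change in deviance between successive iterations falls below tol; the exact criterion used internally is an assumption here, and the names are illustrative.

    def converged_sketch(dev_old, dev_new, tol=1e-5):
        # Stop iterating once the deviance has essentially stopped changing.
        return abs(dev_new - dev_old) <= tol * max(abs(dev_old), 1.0)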

estimate_scale(Y=None, results=None)[source]

Return Pearson’s X^2 estimate of scale.
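As a reminder of the usual form of this estimator (an assumption about the standard definition, not a statement read off this page), Pearson's X^2 scale estimate divides squared residuals by the family variance function V and by the residual degrees of freedom:

\[\hat{\phi} = \frac{1}{n - p}\sum_{i=1}^{n}\frac{(Y_i - \hat{\mu}_i)^2}{V(\hat{\mu}_i)}\]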

fit(Y)[source]

Fit model to data Y

Full fit of the model including estimate of covariance matrix, (whitened) residuals and scale.

Parameters

Y : array-like

The dependent variable for the Least Squares problem.

Returns

fit : RegressionResults
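Continuing the construction sketch above, fitting is a single call. The attribute access in the comment is an assumption about RegressionResults, which this page does not document.

    Y = design @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=20)
    results = model.fit(Y)   # iteratively reweighted fit
    # results is documented as a RegressionResults; regression results in this
    # package typically expose the parameter estimates, e.g. results.theta.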

has_intercept()

Check if column of 1s is in column space of design

information(beta, nuisance=None)

Returns the information matrix at (beta, Y, nuisance).

See logL for details.

Parameters

beta : ndarray

The parameter estimates. Must be of length df_model.

nuisance : dict

A dict with key ‘sigma’, which is an estimate of sigma. If None, defaults to its maximum likelihood estimate (with beta fixed) as sum((Y - X*beta)**2) / n where n=Y.shape[0], X=self.design.

Returns

info : array

The information matrix: the negative of the Hessian of the log-likelihood function evaluated at (beta, Y, nuisance).
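For the Gaussian log-likelihood written out under logL below, the information with respect to \(\beta\) takes the familiar closed form (a worked consequence of that formula, not a description of the exact array this method returns):

\[\mathcal{I}(\beta) = -\frac{\partial^2 \ell}{\partial\beta\,\partial\beta^T} = \frac{X^T X}{\sigma^2}\]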

initialize(design)

Initialize (possibly re-initialize) a Model instance.

For instance, the design matrix of a linear model may change and some things must be recomputed.

logL(beta, Y, nuisance=None)

Returns the value of the loglikelihood function at beta.

Given the whitened design matrix, the loglikelihood is evaluated at the parameter vector, beta, for the dependent variable, Y and the nuisance parameter, sigma.

Parameters

beta : ndarray

The parameter estimates. Must be of length df_model.

Y : ndarray

The dependent variable

nuisance : dict, optional

A dict with key ‘sigma’, which is an optional estimate of sigma. If None, defaults to its maximum likelihood estimate (with beta fixed) as sum((Y - X*beta)**2) / n, where n=Y.shape[0], X=self.design.

Returns

loglf : float

The value of the loglikelihood function.

Notes

The log-Likelihood Function is defined as

\[\ell(\beta,\sigma,Y)= -\frac{n}{2}\log(2\pi\sigma^2) - \|Y-X\beta\|^2/(2\sigma^2)\]

The parameter \(\sigma\) above is what is sometimes referred to as a nuisance parameter. That is, the likelihood is considered as a function of \(\beta\), but to evaluate it, a value of \(\sigma\) is needed.

If \(\sigma\) is not provided, then the maximum likelihood estimate of \(\sigma^2\):

\[\hat{\sigma}^2(\beta) = \frac{\text{SSE}(\beta)}{n}\]

is plugged in. This likelihood is now a function of only \(\beta\) and is technically referred to as a profile-likelihood.
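A minimal numpy sketch of the profile log-likelihood above, plugging in the MLE of \(\sigma^2\) when no nuisance value is supplied. The names are illustrative, and this is not the method's actual implementation (which works with the whitened design):

    import numpy as np

    def loglik_sketch(beta, Y, X, sigma2=None):
        resid = Y - X @ beta
        n = Y.shape[0]
        if sigma2 is None:                  # profile out sigma**2 via its MLE
            sigma2 = resid @ resid / n      # SSE(beta) / n
        return -0.5 * n * np.log(2 * np.pi * sigma2) - resid @ resid / (2 * sigma2)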

References

R1
  Greene, W. H. “Econometric Analysis,” 5th ed., Pearson, 2003.

predict(design=None)[source]

After a model has been fit, results are (assumed to be) stored in self.results, which itself should have a predict method.
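A hedged usage sketch, continuing the construction and fit sketches above: fitted values for the original (or a new) design matrix.

    yhat = model.predict(design)   # delegates to self.results.predict, per the description above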

rank()

Compute rank of design matrix

score(beta, Y, nuisance=None)

Gradient of the loglikelihood function at (beta, Y, nuisance).

The gradient of the loglikelihood function at (beta, Y, nuisance) is the score function.

See logL() for details.

Parameters

beta : ndarray

The parameter estimates. Must be of length df_model.

Y : ndarray

The dependent variable.

nuisance : dict, optional

A dict with key ‘sigma’, which is an optional estimate of sigma. If None, defaults to its maximum likelihood estimate (with beta fixed) as sum((Y - X*beta)**2) / n, where n=Y.shape[0], X=self.design.

Returns

The gradient of the loglikelihood function.
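For the same Gaussian log-likelihood under logL, the score with respect to \(\beta\) is the usual least-squares gradient (again a worked consequence of that formula, with the plugged-in \(\sigma^2\) as stated there):

\[\frac{\partial \ell}{\partial \beta} = \frac{X^T (Y - X\beta)}{\sigma^2}\]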

whiten(X)

Whitener for WLS model, multiplies by sqrt(self.weights)
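A sketch of the documented whitening step, scaling each observation (row) of X by the square root of its weight. The function name and the `weights` argument are illustrative; broadcasting details in the real method may differ.

    import numpy as np

    def whiten_sketch(X, weights):
        # Multiply each row by sqrt(weight), per the description above.
        return np.sqrt(weights)[:, None] * np.asarray(X)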