algorithms.optimize

Module: algorithms.optimize

nipy.algorithms.optimize.fmin_steepest(f, x0, fprime=None, xtol=0.0001, ftol=0.0001, maxiter=None, epsilon=1.4901161193847656e-08, callback=None, disp=True)

Minimize a function using a steepest gradient descent algorithm. This complements the collection of minimization routines provided in scipy.optimize. Steepest descent iterations are cheaper than conjugate gradient or Newton iterations, so convergence may sometimes turn out faster, although more iterations are typically needed.
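The scheme behind this routine is the classic one: at each iteration, step along the negative gradient, with the step length chosen by a scalar line search. The sketch below is illustrative only, not nipy's actual implementation; the stopping rules built from xtol and ftol are assumptions modelled on the parameters listed below.

    import numpy as np
    from scipy.optimize import line_search

    def steepest_descent_sketch(f, x0, fprime, xtol=1e-4, ftol=1e-4, maxiter=100):
        # Take repeated steps along the negative gradient; the step length
        # comes from a scalar line search along that direction.
        x = np.asarray(x0, dtype=float)
        fval = f(x)
        for _ in range(maxiter):
            direction = -fprime(x)             # steepest descent direction
            alpha = line_search(f, fprime, x, direction)[0]
            if alpha is None:                  # line search failed to make progress
                break
            x_new = x + alpha * direction
            f_new = f(x_new)
            # assumed stopping rules, modelled on the xtol/ftol parameters below
            small_step = np.linalg.norm(x_new - x) <= xtol * (np.linalg.norm(x) + xtol)
            small_gain = abs(fval - f_new) <= ftol * (abs(fval) + ftol)
            x, fval = x_new, f_new
            if small_step or small_gain:
                break
        return x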
Parameters
f : callable
Function to be minimized
x0 : array
Starting point
fprime : callable
Function that computes the gradient of f
xtol : float
Relative tolerance on step sizes in line searches
ftol : float
Relative tolerance on function variations
maxiter : int
Maximum number of iterations
epsilon : float or ndarray
If fprime is approximated, use this value for the step size (can be scalar or vector).
callback : callable
Optional function called after each iteration is complete
disp : bool
Print convergence message if True
Returns
x : array
Gradient descent fixed point, a local minimizer of f
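A minimal usage sketch based on the signature above: minimize a small convex quadratic whose exact minimizer can be checked against np.linalg.solve. The quadratic, the starting point, and disp=False are illustrative choices, not part of the nipy API.

    import numpy as np
    from nipy.algorithms.optimize import fmin_steepest

    # Convex quadratic f(x) = 0.5 * x'Ax - b'x with gradient Ax - b;
    # its unique minimizer solves A x = b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
    fprime = lambda x: A.dot(x) - b

    x_min = fmin_steepest(f, np.zeros(2), fprime=fprime, disp=False)
    # x_min should be close to np.linalg.solve(A, b), i.e. [0.2, 0.4]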