package sklearn

val get_py : string -> Py.Object.t

Get an attribute of this module as a Py.Object.t. This is useful to pass a Python function to another function.

val line_search_wolfe1 : ?gfk:[> `ArrayLike ] Np.Obj.t -> ?old_fval:float -> ?old_old_fval:float -> ?args:Py.Object.t -> ?c1:Py.Object.t -> ?c2:Py.Object.t -> ?amax:Py.Object.t -> ?amin:Py.Object.t -> ?xtol:Py.Object.t -> f:Py.Object.t -> fprime:Py.Object.t -> xk:[> `ArrayLike ] Np.Obj.t -> pk:[> `ArrayLike ] Np.Obj.t -> unit -> [> `ArrayLike ] Np.Obj.t

As `scalar_search_wolfe1`, but do a line search along direction `pk`.

Parameters
----------
f : callable
    Function `f(x)`
fprime : callable
    Gradient of `f`
xk : array_like
    Current point
pk : array_like
    Search direction
gfk : array_like, optional
    Gradient of `f` at point `xk`
old_fval : float, optional
    Value of `f` at point `xk`
old_old_fval : float, optional
    Value of `f` at point preceding `xk`

The rest of the parameters are the same as for `scalar_search_wolfe1`.

Returns
-------
stp, f_count, g_count, fval, old_fval
    As in `line_search_wolfe1`
gval : array
    Gradient of `f` at the final point
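As a minimal Python sketch of the underlying call (hedged: `line_search_wolfe1` is a private SciPy helper whose module path has moved between releases, so the import below tries both locations):

```python
import numpy as np

# line_search_wolfe1 is internal to SciPy; it moved from
# scipy.optimize.linesearch to scipy.optimize._linesearch in newer releases.
try:
    from scipy.optimize._linesearch import line_search_wolfe1
except ImportError:  # older SciPy
    from scipy.optimize.linesearch import line_search_wolfe1

def f(x):
    # Simple convex objective: f(x) = x0^2 + x1^2
    return float(np.dot(x, x))

def fprime(x):
    # Gradient of f
    return 2.0 * x

xk = np.array([1.8, 1.7])    # current point
pk = np.array([-1.0, -1.0])  # descent direction

# Returns (stp, f_count, g_count, fval, old_fval, gval)
stp, f_count, g_count, fval, old_fval, gval = line_search_wolfe1(f, fprime, xk, pk)
print(stp, fval, old_fval)
```

The returned `stp` is the accepted step length, `fval` the objective at `xk + stp*pk`, and `gval` the gradient at the final point.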

val line_search_wolfe2 : ?gfk:[> `ArrayLike ] Np.Obj.t -> ?old_fval:float -> ?old_old_fval:float -> ?args:Py.Object.t -> ?c1:float -> ?c2:float -> ?amax:float -> ?extra_condition:Py.Object.t -> ?maxiter:int -> f:Py.Object.t -> myfprime:Py.Object.t -> xk:[> `ArrayLike ] Np.Obj.t -> pk:[> `ArrayLike ] Np.Obj.t -> unit -> float option * int * int * float option * float * float option

Find alpha that satisfies strong Wolfe conditions.

Parameters
----------
f : callable f(x,*args)
    Objective function.
myfprime : callable f'(x,*args)
    Objective function gradient.
xk : ndarray
    Starting point.
pk : ndarray
    Search direction.
gfk : ndarray, optional
    Gradient value for x=xk (xk being the current parameter estimate). Will be recomputed if omitted.
old_fval : float, optional
    Function value for x=xk. Will be recomputed if omitted.
old_old_fval : float, optional
    Function value for the point preceding x=xk.
args : tuple, optional
    Additional arguments passed to objective function.
c1 : float, optional
    Parameter for Armijo condition rule.
c2 : float, optional
    Parameter for curvature condition rule.
amax : float, optional
    Maximum step size.
extra_condition : callable, optional
    A callable of the form ``extra_condition(alpha, x, f, g)`` returning a boolean. Arguments are the proposed step ``alpha`` and the corresponding ``x``, ``f`` and ``g`` values. The line search accepts the value of ``alpha`` only if this callable returns ``True``. If the callable returns ``False`` for the step length, the algorithm will continue with new iterates. The callable is only called for iterates satisfying the strong Wolfe conditions.
maxiter : int, optional
    Maximum number of iterations to perform.

Returns
-------
alpha : float or None
    Alpha for which ``x_new = x0 + alpha * pk``, or None if the line search algorithm did not converge.
fc : int
    Number of function evaluations made.
gc : int
    Number of gradient evaluations made.
new_fval : float or None
    New function value ``f(x_new)=f(x0+alpha*pk)``, or None if the line search algorithm did not converge.
old_fval : float
    Old function value ``f(x0)``.
new_slope : float or None
    The local slope along the search direction at the new value ``<myfprime(x_new), pk>``, or None if the line search algorithm did not converge.

Notes
-----
Uses the line search algorithm to enforce strong Wolfe conditions. See Wright and Nocedal, 'Numerical Optimization', 1999, pp. 59-61.

Examples
--------
>>> import numpy as np
>>> from scipy.optimize import line_search

An objective function and its gradient are defined.

>>> def obj_func(x):
...     return (x[0])**2 + (x[1])**2
>>> def obj_grad(x):
...     return [2*x[0], 2*x[1]]

We can find alpha that satisfies strong Wolfe conditions.

>>> start_point = np.array([1.8, 1.7])
>>> search_gradient = np.array([-1.0, -1.0])
>>> line_search(obj_func, obj_grad, start_point, search_gradient)
(1.0, 2, 1, 1.1300000000000001, 6.13, [1.6, 1.4])
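The `extra_condition` hook lets a caller veto steps that already satisfy the strong Wolfe conditions. A sketch using SciPy's public `scipy.optimize.line_search` (which wraps `line_search_wolfe2`); the ball-radius constraint here is an illustrative assumption, not part of the binding:

```python
import numpy as np
from scipy.optimize import line_search

def obj_func(x):
    # f(x) = x0^2 + x1^2
    return x[0]**2 + x[1]**2

def obj_grad(x):
    return np.array([2*x[0], 2*x[1]])

xk = np.array([1.8, 1.7])    # starting point
pk = np.array([-1.0, -1.0])  # search direction

# Accept a Wolfe step only if the new iterate stays within radius 2 of the origin.
def extra_condition(alpha, x, f, g):
    return np.linalg.norm(x) <= 2.0

alpha, fc, gc, new_fval, old_fval, new_slope = line_search(
    obj_func, obj_grad, xk, pk, extra_condition=extra_condition)
```

Here the unit step `alpha = 1.0` moves to `[0.8, 0.7]`, which passes the extra condition, so the search accepts it.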

val newton_cg : ?args:Py.Object.t -> ?tol:Py.Object.t -> ?maxiter:Py.Object.t -> ?maxinner:Py.Object.t -> ?line_search:Py.Object.t -> ?warn:Py.Object.t -> grad_hess:Py.Object.t -> func:Py.Object.t -> grad:Py.Object.t -> x0:Py.Object.t -> unit -> Py.Object.t

DEPRECATED: newton_cg is deprecated in version 0.22 and will be removed in version 0.24.
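Since this wrapper targets a deprecated internal helper, new code would typically go through SciPy's public interface instead. A hedged sketch (not part of this binding) using `scipy.optimize.minimize` with the `Newton-CG` method, which takes the objective, gradient, and Hessian separately:

```python
import numpy as np
from scipy.optimize import minimize

def func(x):
    # Convex quadratic with its minimum at the origin
    return x[0]**2 + 3.0*x[1]**2

def grad(x):
    return np.array([2.0*x[0], 6.0*x[1]])

def hess(x):
    return np.array([[2.0, 0.0],
                     [0.0, 6.0]])

res = minimize(func, x0=np.array([1.0, 1.0]),
               jac=grad, hess=hess, method='Newton-CG')
print(res.x)
```

`res.x` converges to the minimizer; `res.success` reports whether the Newton-CG iteration terminated successfully.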