package sklearn

val get_py : string -> Py.Object.t

Get an attribute of this module as a Py.Object.t. This is useful to pass a Python function to another function.

module LooseVersion : sig ... end
val lobpcg : ?b:[ `PyObject of Py.Object.t | `Spmatrix of [> `Spmatrix ] Np.Obj.t ] -> ?m:[ `PyObject of Py.Object.t | `Spmatrix of [> `Spmatrix ] Np.Obj.t ] -> ?y:[ `PyObject of Py.Object.t | `Arr of [> `ArrayLike ] Np.Obj.t ] -> ?tol:[ `S of string | `Bool of bool | `I of int | `F of float ] -> ?maxiter:int -> ?largest:bool -> ?verbosityLevel:int -> ?retLambdaHistory:bool -> ?retResidualNormsHistory:bool -> a:[ `Spmatrix of [> `Spmatrix ] Np.Obj.t | `PyObject of Py.Object.t ] -> x:[ `Arr of [> `ArrayLike ] Np.Obj.t | `PyObject of Py.Object.t ] -> unit -> [> `ArrayLike ] Np.Obj.t * [> `ArrayLike ] Np.Obj.t * Np.Numpy.Ndarray.List.t * Np.Numpy.Ndarray.List.t

Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)

LOBPCG is a preconditioned eigensolver for large symmetric positive definite (SPD) generalized eigenproblems.

Parameters
----------
A : sparse matrix, dense matrix, LinearOperator
    The symmetric linear operator of the problem, usually a sparse matrix. Often called the 'stiffness matrix'.
X : ndarray, float32 or float64
    Initial approximation to the ``k`` eigenvectors (non-sparse). If `A` has ``shape=(n,n)`` then `X` should have shape ``shape=(n,k)``.
B : dense matrix, sparse matrix, LinearOperator, optional
    The right hand side operator in a generalized eigenproblem. By default, ``B = Identity``. Often called the 'mass matrix'.
M : dense matrix, sparse matrix, LinearOperator, optional
    Preconditioner to `A`; by default ``M = Identity``. `M` should approximate the inverse of `A`.
Y : ndarray, float32 or float64, optional
    n-by-sizeY matrix of constraints (non-sparse), sizeY < n. The iterations will be performed in the B-orthogonal complement of the column-space of Y. Y must be full rank.
tol : scalar, optional
    Solver tolerance (stopping criterion). The default is ``tol=n*sqrt(eps)``.
maxiter : int, optional
    Maximum number of iterations. The default is ``maxiter = 20``.
largest : bool, optional
    When True, solve for the largest eigenvalues, otherwise the smallest.
verbosityLevel : int, optional
    Controls solver output. The default is ``verbosityLevel=0``.
retLambdaHistory : bool, optional
    Whether to return eigenvalue history. Default is False.
retResidualNormsHistory : bool, optional
    Whether to return history of residual norms. Default is False.

Returns
-------
w : ndarray
    Array of ``k`` eigenvalues.
v : ndarray
    An array of ``k`` eigenvectors. `v` has the same shape as `X`.
lambdas : list of ndarray, optional
    The eigenvalue history, if `retLambdaHistory` is True.
rnorms : list of ndarray, optional
    The history of residual norms, if `retResidualNormsHistory` is True.

Notes
-----
If both ``retLambdaHistory`` and ``retResidualNormsHistory`` are True, the return tuple has the following format: ``(lambda, V, lambda history, residual norms history)``.
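For instance, a minimal sketch of unpacking that four-element return (my own illustration, assuming the underlying SciPy call ``scipy.sparse.linalg.lobpcg``; the test matrix and initial block are made up here):

    import numpy as np
    from scipy.sparse import spdiags
    from scipy.sparse.linalg import lobpcg

    n, k = 100, 3
    A = spdiags(np.arange(1, n + 1, dtype=float), 0, n, n)   # simple SPD diagonal test matrix
    X = np.random.rand(n, k)                                  # initial approximation to k eigenvectors

    # With both history flags set, the return is (w, V, lambda history, residual norms history).
    w, V, lambda_history, rnorm_history = lobpcg(
        A, X, largest=False, tol=1e-8, maxiter=100,
        retLambdaHistory=True, retResidualNormsHistory=True)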

In the following ``n`` denotes the matrix size and ``m`` the number of required eigenvalues (smallest or largest).

The LOBPCG code internally solves eigenproblems of size ``3m`` on every iteration by calling the 'standard' dense eigensolver, so if ``m`` is not small enough compared to ``n``, it does not make sense to call the LOBPCG code; one should instead use the 'standard' eigensolver, e.g. a numpy or scipy function. If the LOBPCG algorithm is called with ``5m > n``, it will most likely break internally, so the code tries to call the standard function instead.

It is not that ``n`` should be large for LOBPCG to work, but rather the ratio ``n / m`` should be large. If you call LOBPCG with ``m=1`` and ``n=10``, it works even though ``n`` is small. The method is intended for extremely large ``n / m``; see, e.g., reference 28 in https://arxiv.org/abs/0705.2626
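As an aside (my own hedged illustration, not from the original docstring): for a small matrix it is simpler to read the desired eigenvalues off a dense eigensolver, while LOBPCG still runs when ``m`` is tiny relative to ``n``:

    import numpy as np
    from scipy.sparse.linalg import lobpcg

    n = 10
    A = np.diag(np.arange(1.0, n + 1))        # small dense SPD matrix

    # For n this small, the 'standard' dense eigensolver is the right tool.
    w_dense = np.linalg.eigh(A)[0][:3]        # three smallest eigenvalues: 1., 2., 3.

    # LOBPCG with m=1 still runs, because n / m = 10 is not too small.
    X = np.random.rand(n, 1)
    w_lobpcg, _ = lobpcg(A, X, largest=False, maxiter=100)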

The convergence speed depends basically on two factors:

1. How well the sought eigenvalues are separated from the rest of the eigenvalues. One can try to vary ``m`` to make this better.

2. How well conditioned the problem is. This can be changed by using proper preconditioning. For example, a rod vibration test problem (under tests directory) is ill-conditioned for large ``n``, so convergence will be slow, unless efficient preconditioning is used. For this specific problem, a good simple preconditioner function would be a linear solve for `A`, which is easy to code since A is tridiagonal.
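A minimal sketch of that kind of preconditioner (my own illustration, under the assumption that ``A`` is tridiagonal; the rod problem itself is not reproduced here): factor ``A`` once and use the resulting linear solve as ``M``:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import splu, LinearOperator, lobpcg

    n = 200
    # A tridiagonal SPD matrix (1-D Laplacian), standing in for the rod problem.
    A = diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format='csc')

    lu = splu(A)                                   # sparse LU factorization; cheap for tridiagonal A
    M = LinearOperator((n, n), matvec=lu.solve,    # M applies A^{-1}, i.e. a linear solve for A
                       matmat=lu.solve, dtype=float)

    X = np.random.rand(n, 4)
    w, v = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)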

References
----------
[1] A. V. Knyazev (2001), Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM Journal on Scientific Computing 23, no. 2, pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124

[2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov (2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc. https://arxiv.org/abs/0705.2626

[3] A. V. Knyazev's C and MATLAB implementations: https://bitbucket.org/joseroman/blopex

Examples
--------

Solve ``A x = lambda x`` with constraints and preconditioning.

>>> import numpy as np
>>> from scipy.sparse import spdiags, issparse
>>> from scipy.sparse.linalg import lobpcg, LinearOperator
>>> n = 100
>>> vals = np.arange(1, n + 1)
>>> A = spdiags(vals, 0, n, n)
>>> A.toarray()
array([[  1.,   0.,   0., ...,   0.,   0.,   0.],
       [  0.,   2.,   0., ...,   0.,   0.,   0.],
       [  0.,   0.,   3., ...,   0.,   0.,   0.],
       ...,
       [  0.,   0.,   0., ...,  98.,   0.,   0.],
       [  0.,   0.,   0., ...,   0.,  99.,   0.],
       [  0.,   0.,   0., ...,   0.,   0., 100.]])

Constraints:

>>> Y = np.eye(n, 3)

Initial guess for eigenvectors, should have linearly independent columns. Column dimension = number of requested eigenvalues.

>>> X = np.random.rand(n, 3)

The preconditioner is the inverse of A in this example:

>>> invA = spdiags(1./vals, 0, n, n)

The preconditioner must be defined by a function:

>>> def precond( x ):
...     return invA @ x

The argument x of the preconditioner function is a matrix inside `lobpcg`, thus the use of matrix-matrix product ``@``.

The preconditioner function is passed to lobpcg as a `LinearOperator`:

>>> M = LinearOperator(matvec=precond, matmat=precond,
...                    shape=(n, n), dtype=float)

Let us now solve the eigenvalue problem for the matrix A:

>>> eigenvalues, _ = lobpcg(A, X, Y=Y, M=M, largest=False)
>>> eigenvalues
array([4., 5., 6.])

Note that the vectors passed in Y are the eigenvectors of the 3 smallest eigenvalues. The results returned are orthogonal to those.
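A hedged follow-up check (not part of the original docstring): keeping the eigenvectors as well, they should come out orthogonal to ``Y`` (B-orthogonal with ``B = Identity`` here):

    eigenvalues, eigenvectors = lobpcg(A, X, Y=Y, M=M, largest=False)
    print(np.abs(Y.T @ eigenvectors).max())   # expected to be close to zero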

val loguniform : ?loc:Py.Object.t -> ?scale:Py.Object.t -> a:Py.Object.t -> b:Py.Object.t -> unit -> Py.Object.t

A loguniform or reciprocal continuous random variable.

As an instance of the `rv_continuous` class, the `Distribution` object inherits from it a collection of generic methods (see below for the full list) and completes them with details specific to this particular distribution.

Methods
-------
rvs(a, b, loc=0, scale=1, size=1, random_state=None)
    Random variates.
pdf(x, a, b, loc=0, scale=1)
    Probability density function.
logpdf(x, a, b, loc=0, scale=1)
    Log of the probability density function.
cdf(x, a, b, loc=0, scale=1)
    Cumulative distribution function.
logcdf(x, a, b, loc=0, scale=1)
    Log of the cumulative distribution function.
sf(x, a, b, loc=0, scale=1)
    Survival function (also defined as ``1 - cdf``, but `sf` is sometimes more accurate).
logsf(x, a, b, loc=0, scale=1)
    Log of the survival function.
ppf(q, a, b, loc=0, scale=1)
    Percent point function (inverse of ``cdf`` --- percentiles).
isf(q, a, b, loc=0, scale=1)
    Inverse survival function (inverse of ``sf``).
moment(n, a, b, loc=0, scale=1)
    Non-central moment of order n.
stats(a, b, loc=0, scale=1, moments='mv')
    Mean('m'), variance('v'), skew('s'), and/or kurtosis('k').
entropy(a, b, loc=0, scale=1)
    (Differential) entropy of the RV.
fit(data)
    Parameter estimates for generic data. See `scipy.stats.rv_continuous.fit <https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.fit.html#scipy.stats.rv_continuous.fit>`__ for detailed documentation of the keyword arguments.
expect(func, args=(a, b), loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds)
    Expected value of a function (of one argument) with respect to the distribution.
median(a, b, loc=0, scale=1)
    Median of the distribution.
mean(a, b, loc=0, scale=1)
    Mean of the distribution.
var(a, b, loc=0, scale=1)
    Variance of the distribution.
std(a, b, loc=0, scale=1)
    Standard deviation of the distribution.
interval(alpha, a, b, loc=0, scale=1)
    Endpoints of the range that contains alpha percent of the distribution.

Notes
-----
The probability density function for this class is:

.. math::

    f(x, a, b) = \frac{1}{x \log(b/a)}

for :math:`a \le x \le b`, :math:`b > a > 0`. This class takes :math:`a` and :math:`b` as shape parameters. The probability density above is defined in the 'standardized' form. To shift and/or scale the distribution use the ``loc`` and ``scale`` parameters. Specifically, ``Distribution.pdf(x, a, b, loc, scale)`` is identically equivalent to ``Distribution.pdf(y, a, b) / scale`` with ``y = (x - loc) / scale``.
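A quick numerical check of that shift/scale relation (my own sketch; it assumes the underlying SciPy distribution is ``scipy.stats.loguniform``, which the generic ``Distribution`` stands for here):

    import numpy as np
    from scipy.stats import loguniform

    a, b = 0.01, 1.0
    x = np.linspace(0.02, 0.9, 5)            # points inside [a, b]
    loc, scale = 2.0, 3.0

    lhs = loguniform.pdf(x * scale + loc, a, b, loc=loc, scale=scale)
    rhs = loguniform.pdf(x, a, b) / scale    # pdf(y, a, b) / scale with y = (x' - loc) / scale
    print(np.allclose(lhs, rhs))             # expected: True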

Examples
--------
>>> import numpy as np
>>> from scipy.stats import Distribution
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 1)

Calculate a few first moments:

>>> a, b =
>>> mean, var, skew, kurt = Distribution.stats(a, b, moments='mvsk')

Display the probability density function (``pdf``):

>>> x = np.linspace(Distribution.ppf(0.01, a, b),
...                 Distribution.ppf(0.99, a, b), 100)
>>> ax.plot(x, Distribution.pdf(x, a, b),
...         'r-', lw=5, alpha=0.6, label='Distribution pdf')

Alternatively, the distribution object can be called (as a function) to fix the shape, location and scale parameters. This returns a 'frozen' RV object holding the given parameters fixed.

Freeze the distribution and display the frozen ``pdf``:

>>> rv = Distribution(a, b)
>>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')

Check accuracy of ``cdf`` and ``ppf``:

>>> vals = Distribution.ppf([0.001, 0.5, 0.999], a, b)
>>> np.allclose([0.001, 0.5, 0.999], Distribution.cdf(vals, a, b))
True

Generate random numbers:

>>> r = Distribution.rvs(a, b, size=1000)

And compare the histogram:

>>> ax.hist(r, density=True, histtype='stepfilled', alpha=0.2)
>>> ax.legend(loc='best', frameon=False)
>>> plt.show()

This doesn't show the equal probability of ``0.01``, ``0.1`` and ``1``. This is best when the x-axis is log-scaled:

>>> import numpy as np
>>> fig, ax = plt.subplots(1, 1)
>>> ax.hist(np.log10(r))
>>> ax.set_ylabel('Frequency')
>>> ax.set_xlabel('Value of random variable')
>>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0]))
>>> ticks = ['$10^{{ {} }}$'.format(i) for i in [-2, -1, 0]]
>>> ax.set_xticklabels(ticks)  # doctest: +SKIP
>>> plt.show()

This random variable will be log-uniform regardless of the base chosen for ``a`` and ``b``. Let's specify with base ``2`` instead:

>>> rvs = Distribution(2**-2, 2**0).rvs(size=1000)

Values of ``1/4``, ``1/2`` and ``1`` are equally likely with this random variable. Here's the histogram:

>>> fig, ax = plt.subplots(1, 1)
>>> ax.hist(np.log2(rvs))
>>> ax.set_ylabel('Frequency')
>>> ax.set_xlabel('Value of random variable')
>>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0]))
>>> ticks = ['$2^{{ {} }}$'.format(i) for i in [-2, -1, 0]]
>>> ax.set_xticklabels(ticks)  # doctest: +SKIP
>>> plt.show()

val parse_version : Py.Object.t -> Py.Object.t

None

val sparse_lsqr : ?damp:float -> ?atol:Py.Object.t -> ?btol:Py.Object.t -> ?conlim:float -> ?iter_lim:int -> ?show:bool -> ?calc_var:bool -> ?x0:[> `ArrayLike ] Np.Obj.t -> a:[ `Arr of [> `ArrayLike ] Np.Obj.t | `LinearOperator of Py.Object.t ] -> b:[> `ArrayLike ] Np.Obj.t -> unit -> Py.Object.t * int * int * float * float * float * float * float * float * Py.Object.t

Find the least-squares solution to a large, sparse, linear system of equations.

The function solves ``Ax = b`` or ``min ||Ax - b||^2`` or ``min ||Ax - b||^2 + d^2 ||x||^2``.

The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.

::

1. Unsymmetric equations -- solve A*x = b

2. Linear least squares -- solve A*x = b in the least-squares sense

3. Damped least squares -- solve (    A   )*x = ( b )
                                 ( damp*I )     ( 0 )
   in the least-squares sense
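As a hedged sanity check of form 3 (my own sketch, not part of the docstring), the damped `lsqr` solution should agree with an ordinary least-squares solve of the stacked system ``[[A]; [damp*I]] x = [b; 0]``:

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import lsqr

    A = np.array([[1., 0.], [1., 1.], [0., 1.]])
    b = np.array([1., 0., -1.])
    d = 0.5                                     # damping coefficient

    x_lsqr = lsqr(csc_matrix(A), b, damp=d)[0]

    A_aug = np.vstack([A, d * np.eye(2)])       # stack damp*I below A
    b_aug = np.concatenate([b, np.zeros(2)])    # pad b with zeros
    x_direct = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

    print(np.allclose(x_lsqr, x_direct, atol=1e-6))   # expected: True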

Parameters
----------
A : sparse matrix, ndarray, LinearOperator
    Representation of an m-by-n matrix. Alternatively, ``A`` can be a linear operator which can produce ``Ax`` and ``A^T x`` using, e.g., ``scipy.sparse.linalg.LinearOperator``.
b : array_like, shape (m,)
    Right-hand side vector ``b``.
damp : float
    Damping coefficient.
atol, btol : float, optional
    Stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)
conlim : float, optional
    Another stopping tolerance. lsqr terminates if an estimate of ``cond(A)`` exceeds `conlim`. For compatible systems ``Ax = b``, `conlim` could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting ``atol = btol = conlim = zero``, but the number of iterations may then be excessive.
iter_lim : int, optional
    Explicit limitation on number of iterations (for safety).
show : bool, optional
    Display an iteration log.
calc_var : bool, optional
    Whether to estimate diagonals of ``(A'A + damp^2*I)^{-1}``.
x0 : array_like, shape (n,), optional
    Initial guess of x, if None zeros are used.

.. versionadded:: 1.0.0

Returns
-------
x : ndarray of float
    The final solution.
istop : int
    Gives the reason for termination. 1 means x is an approximate solution to Ax = b. 2 means x approximately solves the least-squares problem.
itn : int
    Iteration number upon termination.
r1norm : float
    ``norm(r)``, where ``r = b - Ax``.
r2norm : float
    ``sqrt( norm(r)^2 + damp^2 * norm(x)^2 )``. Equal to `r1norm` if ``damp == 0``.
anorm : float
    Estimate of Frobenius norm of ``Abar = [A]; [damp*I]``.
acond : float
    Estimate of ``cond(Abar)``.
arnorm : float
    Estimate of ``norm(A'*r - damp^2*x)``.
xnorm : float
    ``norm(x)``.
var : ndarray of float
    If ``calc_var`` is True, estimates all diagonals of ``(A'A)^{-1}`` (if ``damp == 0``) or more generally ``(A'A + damp^2*I)^{-1}``. This is well defined if A has full column rank or ``damp > 0``. (Not sure what var means if ``rank(A) < n`` and ``damp = 0.``)

Notes
-----
LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.

For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.

In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).
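A small sketch of that column-scaling advice (illustrative only, not from the docstring): scale the columns of ``A`` to unit Euclidean norm, solve, then undo the scaling:

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import lsqr

    A = np.array([[1., 0.], [1., 1.], [0., 1.]])
    b = np.array([1., 0., -1.])

    col_norms = np.linalg.norm(A, axis=0)            # Euclidean norm of each column
    y = lsqr(csc_matrix(A / col_norms), b)[0]        # solve the column-scaled problem
    x = y / col_norms                                # recover the solution of the original problem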

In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.

The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.

If some initial estimate ``x0`` is known and if ``damp == 0``, one could proceed as follows:

1. Compute a residual vector ``r0 = b - A*x0``.
2. Use LSQR to solve the system ``A*dx = r0``.
3. Add the correction dx to obtain a final solution ``x = x0 + dx``.

This requires that ``x0`` be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is 'good', norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
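A minimal sketch of that warm-start procedure (my own illustration; ``x0`` here is an arbitrary made-up estimate):

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import lsqr

    A = csc_matrix(np.array([[1., 0.], [1., 1.], [0., 1.]]))
    b = np.array([1., 0., -1.])
    x0 = np.array([0.9, -0.9])       # some initial estimate, assumed to be 'good'

    r0 = b - A @ x0                  # 1. residual of the initial estimate
    dx = lsqr(A, r0)[0]              # 2. solve A*dx = r0
    x = x0 + dx                      # 3. corrected solution x = x0 + dx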

Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system ``M*x = b`` efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system ``A*M(inverse)*z = b``, after which x can be recovered by solving M*x = z.

If A is symmetric, LSQR should not be used!

Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).
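A hedged sketch of that advice (note that SciPy ships `cg` and `minres` rather than SYMMLQ itself): for a symmetric positive definite ``A``, use a symmetric solver instead of `lsqr`:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    n = 50
    A = diags([-1., 2., -1.], [-1, 0, 1], shape=(n, n), format='csr')   # symmetric positive definite
    b = np.ones(n)

    x, info = cg(A, b, atol=1e-10)   # conjugate gradients for symmetric positive definite A
    print(info)                      # 0 signals successful convergence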

References
----------
[1] C. C. Paige and M. A. Saunders (1982a). 'LSQR: An algorithm for sparse linear equations and sparse least squares', ACM TOMS 8(1), 43-71.
[2] C. C. Paige and M. A. Saunders (1982b). 'Algorithm 583. LSQR: Sparse linear equations and least squares problems', ACM TOMS 8(2), 195-209.
[3] M. A. Saunders (1995). 'Solution of sparse rectangular systems using LSQR and CRAIG', BIT 35, 588-604.

Examples
--------
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)

The first example has the trivial solution `[0, 0]`

>>> b = np.array([0., 0., 0.], dtype=float)
>>> x, istop, itn, normr = lsqr(A, b)[:4]
The exact solution is  x = 0
>>> istop
0
>>> x
array([ 0.,  0.])

The stopping code `istop=0` returned indicates that a vector of zeros was found as a solution. The returned solution `x` indeed contains `[0., 0.]`. The next example has a non-trivial solution:

>>> b = np.array([1., 0., -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
1
>>> x
array([ 1., -1.])
>>> itn
1
>>> r1norm
4.440892098500627e-16

As indicated by `istop=1`, `lsqr` found a solution obeying the tolerance limits. The given solution `[1., -1.]` obviously solves the equation. The remaining return values include information about the number of iterations (`itn=1`) and the remaining difference of left and right side of the solved equation. The final example demonstrates the behavior in the case where there is no solution for the equation:

>>> b = np.array([1., 0.01, -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
2
>>> x
array([ 1.00333333, -0.99666667])
>>> A.dot(x)-b
array([ 0.00333333, -0.00333333,  0.00333333])
>>> r1norm
0.005773502691896255

`istop` indicates that the system is inconsistent and thus `x` is rather an approximate solution to the corresponding least-squares problem. `r1norm` contains the norm of the minimal residual that was found.
