package sklearn

module LooseVersion : sig ... end
module MaskedArray : sig ... end
val comb : ?exact:Py.Object.t -> ?repetition:Py.Object.t -> n:[ `Int of int | `Ndarray of Ndarray.t ] -> k:Py.Object.t -> unit -> Py.Object.t

The number of combinations of N things taken k at a time.

This is often expressed as "N choose k".

Parameters
----------
N : int, ndarray
    Number of things.
k : int, ndarray
    Number of elements taken.
exact : bool, optional
    If `exact` is False, then floating point precision is used, otherwise
    exact long integer is computed.
repetition : bool, optional
    If `repetition` is True, then the number of combinations with repetition
    is computed.

Returns
-------
val : int, float, ndarray
    The total number of combinations.

See Also
--------
binom : Binomial coefficient ufunc

Notes
-----

  • Array arguments are accepted only for the exact=False case.
  • If N < 0, or k < 0, then 0 is returned.
  • If k > N and repetition=False, then 0 is returned.

Examples
--------
>>> from scipy.special import comb
>>> k = np.array([3, 4])
>>> n = np.array([10, 10])
>>> comb(n, k, exact=False)
array([ 120.,  210.])
>>> comb(10, 3, exact=True)
120L
>>> comb(10, 3, exact=True, repetition=True)
220L

val lobpcg : ?b:Py.Object.t -> ?m:Py.Object.t -> ?y:Py.Object.t -> ?tol:Py.Object.t -> ?maxiter:Py.Object.t -> ?largest:Py.Object.t -> ?verbosityLevel:Py.Object.t -> ?retLambdaHistory:Py.Object.t -> ?retResidualNormsHistory:Py.Object.t -> a: [ `SparseMatrix of Csr_matrix.t | `LinearOperator of Py.Object.t | `PyObject of Py.Object.t ] -> x:Py.Object.t -> unit -> Ndarray.t

Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)

LOBPCG is a preconditioned eigensolver for large symmetric positive definite (SPD) generalized eigenproblems.

Parameters
----------
A : sparse matrix, dense matrix, LinearOperator
    The symmetric linear operator of the problem, usually a sparse matrix.
    Often called the "stiffness matrix".
X : ndarray, float32 or float64
    Initial approximation to the ``k`` eigenvectors (non-sparse). If `A` has
    ``shape=(n,n)`` then `X` should have shape ``shape=(n,k)``.
B : dense matrix, sparse matrix, LinearOperator, optional
    The right hand side operator in a generalized eigenproblem. By default,
    ``B = Identity``. Often called the "mass matrix".
M : dense matrix, sparse matrix, LinearOperator, optional
    Preconditioner to `A`; by default ``M = Identity``. `M` should
    approximate the inverse of `A`.
Y : ndarray, float32 or float64, optional
    n-by-sizeY matrix of constraints (non-sparse), sizeY < n. The iterations
    will be performed in the B-orthogonal complement of the column-space of
    Y. Y must be full rank.
tol : scalar, optional
    Solver tolerance (stopping criterion). The default is ``tol=n*sqrt(eps)``.
maxiter : int, optional
    Maximum number of iterations. The default is ``maxiter = 20``.
largest : bool, optional
    When True, solve for the largest eigenvalues, otherwise the smallest.
verbosityLevel : int, optional
    Controls solver output. The default is ``verbosityLevel=0``.
retLambdaHistory : bool, optional
    Whether to return eigenvalue history. Default is False.
retResidualNormsHistory : bool, optional
    Whether to return history of residual norms. Default is False.

Returns
-------
w : ndarray
    Array of ``k`` eigenvalues.
v : ndarray
    An array of ``k`` eigenvectors. `v` has the same shape as `X`.
lambdas : list of ndarray, optional
    The eigenvalue history, if `retLambdaHistory` is True.
rnorms : list of ndarray, optional
    The history of residual norms, if `retResidualNormsHistory` is True.

Notes
-----
If both ``retLambdaHistory`` and ``retResidualNormsHistory`` are True, the return tuple has the following format ``(lambda, V, lambda history, residual norms history)``.

In the following ``n`` denotes the matrix size and ``m`` the number of required eigenvalues (smallest or largest).

The LOBPCG code internally solves eigenproblems of the size ``3m`` on every iteration by calling the "standard" dense eigensolver, so if ``m`` is not small enough compared to ``n``, it does not make sense to call the LOBPCG code, but rather one should use the "standard" eigensolver, e.g. numpy or scipy function in this case. If one calls the LOBPCG algorithm for ``5m > n``, it will most likely break internally, so the code tries to call the standard function instead.

It is not that ``n`` should be large for the LOBPCG to work, but rather the ratio ``n / m`` should be large. If you call LOBPCG with ``m=1`` and ``n=10``, it works even though ``n`` is small. The method is intended for extremely large ``n / m``, see e.g., reference 28 in https://arxiv.org/abs/0705.2626
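To make this rule of thumb concrete, the sketch below (with arbitrary illustrative sizes) requests a few smallest eigenpairs with LOBPCG when ``n / m`` is large, and falls back to a dense "standard" eigensolver when ``5m > n``:

>>> import numpy as np
>>> from scipy.linalg import eigh
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import lobpcg
>>> n, m = 100, 5                          # n / m = 20, large enough for LOBPCG
>>> A = spdiags(np.arange(1, n + 1, dtype=float), 0, n, n)
>>> X = np.random.rand(n, m)               # random initial block of m vectors
>>> w, _ = lobpcg(A, X, largest=False, maxiter=100)
>>> if 5 * m > n:                          # m not small relative to n:
...     w = eigh(A.toarray(), eigvals_only=True)[:m]   # use a dense solver instead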

The convergence speed depends basically on two factors:

1. How well the sought eigenvalues are separated from the rest of the eigenvalues. One can try to vary ``m`` to make this better.

2. How well conditioned the problem is. This can be changed by using proper preconditioning. For example, a rod vibration test problem (under tests directory) is ill-conditioned for large ``n``, so convergence will be slow, unless efficient preconditioning is used. For this specific problem, a good simple preconditioner function would be a linear solve for `A`, which is easy to code since A is tridiagonal.

References
----------
[1] A. V. Knyazev (2001), Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM Journal on Scientific Computing 23, no. 2, pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124

[2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov (2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc. https://arxiv.org/abs/0705.2626

[3] A. V. Knyazev's C and MATLAB implementations: https://bitbucket.org/joseroman/blopex

Examples
--------

Solve ``A x = lambda x`` with constraints and preconditioning.

>>> import numpy as np
>>> from scipy.sparse import spdiags, issparse
>>> from scipy.sparse.linalg import lobpcg, LinearOperator
>>> n = 100
>>> vals = np.arange(1, n + 1)
>>> A = spdiags(vals, 0, n, n)
>>> A.toarray()
array([[  1.,   0.,   0., ...,   0.,   0.,   0.],
       [  0.,   2.,   0., ...,   0.,   0.,   0.],
       [  0.,   0.,   3., ...,   0.,   0.,   0.],
       ...,
       [  0.,   0.,   0., ...,  98.,   0.,   0.],
       [  0.,   0.,   0., ...,   0.,  99.,   0.],
       [  0.,   0.,   0., ...,   0.,   0., 100.]])

Constraints:

>>> Y = np.eye(n, 3)

Initial guess for eigenvectors, should have linearly independent columns. Column dimension = number of requested eigenvalues.

>>> X = np.random.rand(n, 3)

The preconditioner in this example is the inverse of A:

>>> invA = spdiags(1./vals, 0, n, n)

The preconditioner must be defined by a function:

>>> def precond(x):
...     return invA @ x

The argument x of the preconditioner function is a matrix inside `lobpcg`, thus the use of matrix-matrix product ``@``.

The preconditioner function is passed to lobpcg as a `LinearOperator`:

>>> M = LinearOperator(matvec=precond, matmat=precond,
...                    shape=(n, n), dtype=float)

Let us now solve the eigenvalue problem for the matrix A:

>>> eigenvalues, _ = lobpcg(A, X, Y=Y, M=M, largest=False)
>>> eigenvalues
array([4., 5., 6.])

Note that the vectors passed in Y are the eigenvectors of the 3 smallest eigenvalues. The results returned are orthogonal to those.

val logsumexp : ?axis:Py.Object.t -> ?b:Ndarray.t -> ?keepdims:bool -> ?return_sign:bool -> a:Ndarray.t -> unit -> Ndarray.t

Compute the log of the sum of exponentials of input elements.

Parameters
----------
a : array_like
    Input array.
axis : None or int or tuple of ints, optional
    Axis or axes over which the sum is taken. By default `axis` is None,
    and all elements are summed.

    .. versionadded:: 0.11.0
keepdims : bool, optional
    If this is set to True, the axes which are reduced are left in the
    result as dimensions with size one. With this option, the result will
    broadcast correctly against the original array.

    .. versionadded:: 0.15.0
b : array-like, optional
    Scaling factor for exp(`a`) must be of the same shape as `a` or
    broadcastable to `a`. These values may be negative in order to
    implement subtraction.

    .. versionadded:: 0.12.0
return_sign : bool, optional
    If this is set to True, the result will be a pair containing sign
    information; if False, results that are negative will be returned
    as NaN. Default is False (no sign information).

    .. versionadded:: 0.16.0

Returns
-------
res : ndarray
    The result, ``np.log(np.sum(np.exp(a)))`` calculated in a numerically
    more stable way. If `b` is given then ``np.log(np.sum(b*np.exp(a)))``
    is returned.
sgn : ndarray
    If return_sign is True, this will be an array of floating-point numbers
    matching res and +1, 0, or -1 depending on the sign of the result.
    If False, only one result is returned.

See Also
--------
numpy.logaddexp, numpy.logaddexp2

Notes
-----
NumPy has a logaddexp function which is very similar to `logsumexp`, but only handles two arguments. `logaddexp.reduce` is similar to this function, but may be less stable.
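As a small illustration of this note (values chosen arbitrarily), ``np.logaddexp`` combines only two arrays at a time, while ``np.logaddexp.reduce`` chains it across an array and agrees with `logsumexp` up to floating-point error:

>>> import numpy as np
>>> from scipy.special import logsumexp
>>> a = np.arange(10.0)
>>> np.allclose(np.logaddexp.reduce(a), logsumexp(a))
True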

Examples
--------
>>> from scipy.special import logsumexp
>>> a = np.arange(10)
>>> np.log(np.sum(np.exp(a)))
9.4586297444267107
>>> logsumexp(a)
9.4586297444267107

With weights

>>> a = np.arange(10)
>>> b = np.arange(10, 0, -1)
>>> logsumexp(a, b=b)
9.9170178533034665
>>> np.log(np.sum(b*np.exp(a)))
9.9170178533034647

Returning a sign flag

>>> logsumexp([1, 2], b=[1, -1], return_sign=True)
(1.5413248546129181, -1.0)

Notice that `logsumexp` does not directly support masked arrays. To use it on a masked array, convert the mask into zero weights:

>>> a = np.ma.array([np.log(2), 2, np.log(3)],
...                 mask=[False, True, False])
>>> b = (~a.mask).astype(int)
>>> logsumexp(a.data, b=b), np.log(5)
1.6094379124341005, 1.6094379124341005

val loguniform : ?kwds:(string * Py.Object.t) list -> Py.Object.t list -> Py.Object.t

A loguniform or reciprocal continuous random variable.

As an instance of the `rv_continuous` class, the `Distribution` object inherits from it a collection of generic methods (see below for the full list), and completes them with details specific to this particular distribution.

Methods
-------
rvs(a, b, loc=0, scale=1, size=1, random_state=None)
    Random variates.
pdf(x, a, b, loc=0, scale=1)
    Probability density function.
logpdf(x, a, b, loc=0, scale=1)
    Log of the probability density function.
cdf(x, a, b, loc=0, scale=1)
    Cumulative distribution function.
logcdf(x, a, b, loc=0, scale=1)
    Log of the cumulative distribution function.
sf(x, a, b, loc=0, scale=1)
    Survival function (also defined as ``1 - cdf``, but `sf` is sometimes
    more accurate).
logsf(x, a, b, loc=0, scale=1)
    Log of the survival function.
ppf(q, a, b, loc=0, scale=1)
    Percent point function (inverse of ``cdf`` --- percentiles).
isf(q, a, b, loc=0, scale=1)
    Inverse survival function (inverse of ``sf``).
moment(n, a, b, loc=0, scale=1)
    Non-central moment of order n.
stats(a, b, loc=0, scale=1, moments='mv')
    Mean('m'), variance('v'), skew('s'), and/or kurtosis('k').
entropy(a, b, loc=0, scale=1)
    (Differential) entropy of the RV.
fit(data, a, b, loc=0, scale=1)
    Parameter estimates for generic data.
expect(func, args=(a, b), loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds)
    Expected value of a function (of one argument) with respect to the
    distribution.
median(a, b, loc=0, scale=1)
    Median of the distribution.
mean(a, b, loc=0, scale=1)
    Mean of the distribution.
var(a, b, loc=0, scale=1)
    Variance of the distribution.
std(a, b, loc=0, scale=1)
    Standard deviation of the distribution.
interval(alpha, a, b, loc=0, scale=1)
    Endpoints of the range that contains alpha percent of the distribution.

Notes
-----
The probability density function for this class is:

.. math::

    f(x, a, b) = \frac{1}{x \log(b/a)}

for :math:`a \le x \le b`, :math:`b > a > 0`. This class takes :math:`a` and :math:`b` as shape parameters. The probability density above is defined in the "standardized" form. To shift and/or scale the distribution use the ``loc`` and ``scale`` parameters. Specifically, ``Distribution.pdf(x, a, b, loc, scale)`` is identically equivalent to ``Distribution.pdf(y, a, b) / scale`` with ``y = (x - loc) / scale``.
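For a concrete check of this equivalence, the sketch below uses `scipy.stats.loguniform` (the distribution wrapped here) with arbitrarily chosen values for ``x``, ``a``, ``b``, ``loc`` and ``scale``:

>>> import numpy as np
>>> from scipy.stats import loguniform
>>> x, a, b, loc, scale = 3.0, 1.0, 10.0, 1.0, 2.0
>>> y = (x - loc) / scale
>>> np.allclose(loguniform.pdf(x, a, b, loc=loc, scale=scale),
...             loguniform.pdf(y, a, b) / scale)
True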

Examples
--------
>>> from scipy.stats import Distribution
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 1)

Calculate a few first moments:

>>> a, b =
>>> mean, var, skew, kurt = Distribution.stats(a, b, moments='mvsk')

Display the probability density function (``pdf``):

>>> x = np.linspace(Distribution.ppf(0.01, a, b),
...                 Distribution.ppf(0.99, a, b), 100)
>>> ax.plot(x, Distribution.pdf(x, a, b),
...         'r-', lw=5, alpha=0.6, label='Distribution pdf')

Alternatively, the distribution object can be called (as a function) to fix the shape, location and scale parameters. This returns a "frozen" RV object holding the given parameters fixed.

Freeze the distribution and display the frozen ``pdf``:

>>> rv = Distribution(a, b)
>>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')

Check accuracy of ``cdf`` and ``ppf``:

>>> vals = Distribution.ppf([0.001, 0.5, 0.999], a, b)
>>> np.allclose([0.001, 0.5, 0.999], Distribution.cdf(vals, a, b))
True

Generate random numbers:

>>> r = Distribution.rvs(a, b, size=1000)

And compare the histogram:

>>> ax.hist(r, density=True, histtype='stepfilled', alpha=0.2)
>>> ax.legend(loc='best', frameon=False)
>>> plt.show()

This doesn't show the equal probability of ``0.01``, ``0.1`` and ``1``. This is best when the x-axis is log-scaled:

>>> import numpy as np
>>> fig, ax = plt.subplots(1, 1)
>>> ax.hist(np.log10(r))
>>> ax.set_ylabel("Frequency")
>>> ax.set_xlabel("Value of random variable")
>>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0]))
>>> ticks = ["$10^{{ {} }}$".format(i) for i in [-2, -1, 0]]
>>> ax.set_xticklabels(ticks)  # doctest: +SKIP
>>> plt.show()

This random variable will be log-uniform regardless of the base chosen for ``a`` and ``b``. Let's specify with base ``2`` instead:

>>> rvs = Distribution(2**-2, 2**0).rvs(size=1000)

Values of ``1/4``, ``1/2`` and ``1`` are equally likely with this random variable. Here's the histogram:

>>> fig, ax = plt.subplots(1, 1)
>>> ax.hist(np.log2(rvs))
>>> ax.set_ylabel("Frequency")
>>> ax.set_xlabel("Value of random variable")
>>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0]))
>>> ticks = ["$2^{{ {} }}$".format(i) for i in [-2, -1, 0]]
>>> ax.set_xticklabels(ticks)  # doctest: +SKIP
>>> plt.show()

val pinvh : ?cond:Py.Object.t -> ?rcond:Py.Object.t -> ?lower:bool -> ?return_rank:bool -> ?check_finite:bool -> a:Py.Object.t -> unit -> Py.Object.t

Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix.

Copied in from scipy==1.2.2 in order to preserve the default choice of the `cond` and `above_cutoff` values, which determine which values of the matrix inversion lie below the threshold and are therefore set to zero. Changes in scipy 1.3 resulted in a smaller default threshold and thus slower convergence of dependent algorithms in some cases (see sklearn GitHub issue #14055).

Calculate a generalized inverse of a Hermitian or real symmetric matrix using its eigenvalue decomposition and including all eigenvalues with 'large' absolute value.

Parameters
----------
a : (N, N) array_like
    Real symmetric or complex Hermitian matrix to be pseudo-inverted
cond, rcond : float or None
    Cutoff for 'small' eigenvalues. Singular values smaller than
    rcond * largest_eigenvalue are considered zero.

    If None or -1, suitable machine precision is used.
lower : bool, optional
    Whether the pertinent array data is taken from the lower or upper
    triangle of a. (Default: lower)
return_rank : bool, optional
    If True, return the effective rank of the matrix.
check_finite : bool, optional
    Whether to check that the input matrix contains only finite numbers.
    Disabling may give a performance gain, but may result in problems
    (crashes, non-termination) if the inputs do contain infinities or NaNs.

Returns
-------
B : (N, N) ndarray
    The pseudo-inverse of matrix `a`.
rank : int
    The effective rank of the matrix. Returned if return_rank == True

Raises
------
LinAlgError
    If eigenvalue does not converge

Examples
--------
>>> from scipy.linalg import pinvh
>>> a = np.random.randn(9, 6)
>>> a = np.dot(a, a.T)
>>> B = pinvh(a)
>>> np.allclose(a, np.dot(a, np.dot(B, a)))
True
>>> np.allclose(B, np.dot(B, np.dot(a, B)))
True

val sparse_lsqr : ?damp:Py.Object.t -> ?atol:Py.Object.t -> ?btol:Py.Object.t -> ?conlim:Py.Object.t -> ?iter_lim:Py.Object.t -> ?show:Py.Object.t -> ?calc_var:Py.Object.t -> ?x0:Py.Object.t -> a: [ `SparseMatrix of Csr_matrix.t | `Ndarray of Ndarray.t | `LinearOperator of Py.Object.t ] -> b:Py.Object.t -> unit -> Py.Object.t

Find the least-squares solution to a large, sparse, linear system of equations.

The function solves ``Ax = b`` or ``min ||b - Ax||^2`` or ``min ||Ax - b||^2 + d^2 ||x||^2``.

The matrix A may be square or rectangular (over-determined or under-determined), and may have any rank.

::

    1. Unsymmetric equations --    solve  A*x = b

    2. Linear least squares  --    solve  A*x = b
                                   in the least-squares sense

    3. Damped least squares  --    solve  (   A    )*x = ( b )
                                          ( damp*I )     ( 0 )
                                   in the least-squares sense
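A minimal sketch of the damped form (3), using the same small system as the Examples further below; the ``damp`` value is arbitrary and simply illustrates how the extra ``damp*I`` block shrinks the solution toward zero:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)
>>> b = np.array([1., 0., -1.], dtype=float)
>>> x_plain = lsqr(A, b)[0]              # ordinary least squares, ~[ 1., -1.]
>>> x_damped = lsqr(A, b, damp=1.0)[0]   # damped least squares, ~[ 0.5, -0.5]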

Parameters
----------
A : sparse matrix, ndarray, LinearOperator
    Representation of an m-by-n matrix. Alternatively, ``A`` can be a linear
    operator which can produce ``Ax`` and ``A^T x`` using, e.g.,
    ``scipy.sparse.linalg.LinearOperator``.
b : array_like, shape (m,)
    Right-hand side vector ``b``.
damp : float
    Damping coefficient.
atol, btol : float, optional
    Stopping tolerances. If both are 1.0e-9 (say), the final residual norm
    should be accurate to about 9 digits. (The final x will usually have
    fewer correct digits, depending on cond(A) and the size of damp.)
conlim : float, optional
    Another stopping tolerance. lsqr terminates if an estimate of ``cond(A)``
    exceeds `conlim`. For compatible systems ``Ax = b``, `conlim` could be as
    large as 1.0e+12 (say). For least-squares problems, conlim should be less
    than 1.0e+8. Maximum precision can be obtained by setting
    ``atol = btol = conlim = zero``, but the number of iterations may then be
    excessive.
iter_lim : int, optional
    Explicit limitation on number of iterations (for safety).
show : bool, optional
    Display an iteration log.
calc_var : bool, optional
    Whether to estimate diagonals of ``(A'A + damp^2*I)^{-1}``.
x0 : array_like, shape (n,), optional
    Initial guess of x, if None zeros are used.

    .. versionadded:: 1.0.0

Returns
-------
x : ndarray of float
    The final solution.
istop : int
    Gives the reason for termination. 1 means x is an approximate solution
    to Ax = b. 2 means x approximately solves the least-squares problem.
itn : int
    Iteration number upon termination.
r1norm : float
    ``norm(r)``, where ``r = b - Ax``.
r2norm : float
    ``sqrt( norm(r)^2 + damp^2 * norm(x)^2 )``. Equal to `r1norm`
    if ``damp == 0``.
anorm : float
    Estimate of Frobenius norm of ``Abar = [[A]; [damp*I]]``.
acond : float
    Estimate of ``cond(Abar)``.
arnorm : float
    Estimate of ``norm(A'*r - damp^2*x)``.
xnorm : float
    ``norm(x)``
var : ndarray of float
    If ``calc_var`` is True, estimates all diagonals of ``(A'A)^{-1}``
    (if ``damp == 0``) or more generally ``(A'A + damp^2*I)^{-1}``.
    This is well defined if A has full column rank or ``damp > 0``.
    (Not sure what var means if ``rank(A) < n`` and ``damp = 0.``)

Notes
-----
LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.

For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of ( A b ) should be scaled up or down.

In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0).

In problem 3, there is no freedom to re-scale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.

The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.

If some initial estimate ``x0`` is known and if ``damp == 0``, one could proceed as follows:

1. Compute a residual vector ``r0 = b - A*x0``.
2. Use LSQR to solve the system ``A*dx = r0``.
3. Add the correction dx to obtain a final solution ``x = x0 + dx``.

This requires that ``x0`` be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is "good", norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.
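A minimal sketch of this warm-start procedure, reusing the small system from the Examples below with an arbitrary initial estimate ``x0``:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)
>>> b = np.array([1., 0., -1.], dtype=float)
>>> x0 = np.array([0.9, -0.9])      # some initial estimate of the solution
>>> r0 = b - A @ x0                 # 1. residual of the initial guess
>>> dx = lsqr(A, r0)[0]             # 2. solve A*dx = r0 with LSQR
>>> x = x0 + dx                     # 3. corrected solution x = x0 + dx

Recent SciPy versions also expose this directly through the ``x0`` parameter documented above.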

Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system ``M*x = b`` efficiently, where M approximates A in some helpful way (e.g. M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system ``A*M(inverse)*z = b``, after which x can be recovered by solving M*x = z.

If A is symmetric, LSQR should not be used!

Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).
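As a sketch of those alternatives (with an arbitrary small SPD test matrix), `scipy.sparse.linalg` provides `cg` for the symmetric positive definite case and `minres`, a MINRES implementation closely related to SYMMLQ, for general symmetric matrices:

>>> import numpy as np
>>> from scipy.sparse import diags
>>> from scipy.sparse.linalg import cg, minres
>>> A_sym = diags([2., -1., -1.], [0, -1, 1], shape=(50, 50))  # SPD tridiagonal
>>> b = np.ones(50)
>>> x_cg, info_cg = cg(A_sym, b)        # symmetric positive definite A
>>> x_mr, info_mr = minres(A_sym, b)    # any symmetric (possibly indefinite) A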

References
----------
[1] C. C. Paige and M. A. Saunders (1982a). "LSQR: An algorithm for sparse linear equations and sparse least squares", ACM TOMS 8(1), 43-71.
[2] C. C. Paige and M. A. Saunders (1982b). "Algorithm 583. LSQR: Sparse linear equations and least squares problems", ACM TOMS 8(2), 195-209.
[3] M. A. Saunders (1995). "Solution of sparse rectangular systems using LSQR and CRAIG", BIT 35, 588-604.

Examples
--------
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)

The first example has the trivial solution `[0, 0]`

>>> b = np.array([0., 0., 0.], dtype=float)
>>> x, istop, itn, normr = lsqr(A, b)[:4]
The exact solution is  x = 0
>>> istop
0
>>> x
array([ 0.,  0.])

The stopping code `istop=0` returned indicates that a vector of zeros was found as a solution. The returned solution `x` indeed contains `[0., 0.]`. The next example has a non-trivial solution:

>>> b = np.array([1., 0., -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
1
>>> x
array([ 1., -1.])
>>> itn
1
>>> r1norm
4.440892098500627e-16

As indicated by `istop=1`, `lsqr` found a solution obeying the tolerance limits. The given solution `[1., -1.]` obviously solves the equation. The remaining return values include information about the number of iterations (`itn=1`) and the remaining difference of left and right side of the solved equation. The final example demonstrates the behavior in the case where there is no solution for the equation:

>>> b = np.array([1., 0.01, -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
2
>>> x
array([ 1.00333333, -0.99666667])
>>> A.dot(x)-b
array([ 0.00333333, -0.00333333,  0.00333333])
>>> r1norm
0.005773502691896255

`istop` indicates that the system is inconsistent and thus `x` is rather an approximate solution to the corresponding least-squares problem. `r1norm` contains the norm of the minimal residual that was found.
