package sklearn

Module Discriminant_analysis.LinearDiscriminantAnalysis

type tag = [ `LinearDiscriminantAnalysis ]
type t = [ `BaseEstimator | `ClassifierMixin | `LinearClassifierMixin | `LinearDiscriminantAnalysis | `Object | `TransformerMixin ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val as_classifier : t -> [ `ClassifierMixin ] Obj.t
val as_transformer : t -> [ `TransformerMixin ] Obj.t
val as_linear_classifier : t -> [ `LinearClassifierMixin ] Obj.t
val create : ?solver:[ `Svd | `Lsqr | `Eigen ] -> ?shrinkage:[ `Auto | `F of float ] -> ?priors:[> `ArrayLike ] Np.Obj.t -> ?n_components:int -> ?store_covariance:bool -> ?tol:float -> unit -> t

Linear Discriminant Analysis

A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes' rule.

The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix.

The fitted model can also be used to reduce the dimensionality of the input by projecting it to the most discriminative directions, using the `transform` method.

.. versionadded:: 0.17 *LinearDiscriminantAnalysis*.

Read more in the :ref:`User Guide <lda_qda>`.

Parameters
----------
solver : 'svd', 'lsqr', 'eigen', default='svd'
    Solver to use, possible values:

  • 'svd': Singular value decomposition (default). Does not compute the covariance matrix, therefore this solver is recommended for data with a large number of features.
  • 'lsqr': Least squares solution, can be combined with shrinkage.
  • 'eigen': Eigenvalue decomposition, can be combined with shrinkage.

shrinkage : 'auto' or float, default=None
    Shrinkage parameter, possible values:

  • None: no shrinkage (default).
  • 'auto': automatic shrinkage using the Ledoit-Wolf lemma.
  • float between 0 and 1: fixed shrinkage parameter.

Note that shrinkage works only with 'lsqr' and 'eigen' solvers.

priors : array-like of shape (n_classes,), default=None
    The class prior probabilities. By default, the class proportions are inferred from the training data.

n_components : int, default=None
    Number of components (<= min(n_classes - 1, n_features)) for dimensionality reduction. If None, will be set to min(n_classes - 1, n_features). This parameter only affects the `transform` method.

store_covariance : bool, default=False
    If True, explicitly compute the weighted within-class covariance matrix when solver is 'svd'. The matrix is always computed and stored for the other solvers.

.. versionadded:: 0.17

tol : float, default=1.0e-4
    Absolute threshold for a singular value of X to be considered significant, used to estimate the rank of X. Dimensions whose singular values are non-significant are discarded. Only used if solver is 'svd'.

.. versionadded:: 0.17

Attributes
----------
coef_ : ndarray of shape (n_features,) or (n_classes, n_features)
    Weight vector(s).

intercept_ : ndarray of shape (n_classes,)
    Intercept term.

covariance_ : array-like of shape (n_features, n_features)
    Weighted within-class covariance matrix. It corresponds to `sum_k prior_k * C_k` where `C_k` is the covariance matrix of the samples in class `k`. The `C_k` are estimated using the (potentially shrunk) biased estimator of covariance. If solver is 'svd', only exists when `store_covariance` is True.

explained_variance_ratio_ : ndarray of shape (n_components,)
    Percentage of variance explained by each of the selected components. If ``n_components`` is not set then all components are stored and the sum of explained variances is equal to 1.0. Only available when the 'eigen' or 'svd' solver is used.

means_ : array-like of shape (n_classes, n_features)
    Class-wise means.

priors_ : array-like of shape (n_classes,)
    Class priors (sum to 1).

scalings_ : array-like of shape (rank, n_classes - 1)
    Scaling of the features in the space spanned by the class centroids. Only available for 'svd' and 'eigen' solvers.

xbar_ : array-like of shape (n_features,)
    Overall mean. Only present if solver is 'svd'.

classes_ : array-like of shape (n_classes,)
    Unique class labels.

See also
--------
sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis : Quadratic Discriminant Analysis.

Examples
--------
>>> import numpy as np
>>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LinearDiscriminantAnalysis()
>>> clf.fit(X, y)
LinearDiscriminantAnalysis()
>>> print(clf.predict([[-0.8, -1]]))
[1]
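The example above is the upstream Python docstring. A rough OCaml counterpart, written only against the `create`, `fit`, `predict` and `pp` signatures on this page, might look as follows; it assumes the library's top-level module is `Sklearn` and that `Np.matrixf` / `Np.vectori` are the Np constructors for float matrices and integer vectors, so treat it as a sketch rather than verified output.

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

let () =
  (* Same toy data as the Python example; Np.matrixf / Np.vectori are assumed
     constructor names for Np arrays. *)
  let x = Np.matrixf [| [| -1.; -1. |]; [| -2.; -1. |]; [| -3.; -2. |];
                        [|  1.;  1. |]; [|  2.;  1. |]; [|  3.;  2. |] |] in
  let y = Np.vectori [| 1; 1; 1; 2; 2; 2 |] in
  let clf = Lda.fit (Lda.create ()) ~x ~y in
  (* pp is provided by this module, so the fitted estimator can be printed. *)
  Format.printf "%a@." Lda.pp clf;
  (* predict returns the predicted labels as an Np.Obj.t; here the single
     prediction for the point (-0.8, -1) is only bound, not printed. *)
  let _label = Lda.predict clf ~x:(Np.matrixf [| [| -0.8; -1. |] |]) in
  ()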

val decision_function : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Apply decision function to an array of samples.

The decision function is equal (up to a constant factor) to the log-posterior of the model, i.e. `log p(y = k | x)`. In a binary classification setting this instead corresponds to the difference `log p(y = 1 | x) - log p(y = 0 | x)`. See :ref:`lda_qda_math`.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Array of samples (test vectors).

Returns
-------
C : ndarray of shape (n_samples,) or (n_samples, n_classes)
    Decision function values related to each class, per sample. In the two-class case, the shape is (n_samples,), giving the log likelihood ratio of the positive class.
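For the OCaml binding, a minimal usage sketch (the helper name `lda_scores` is purely illustrative, and the module alias assumes the top-level `Sklearn` module as above):

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: log-posterior scores for an already-fitted model [clf]
   on a test matrix [x_test]; shape (n_samples,) for two classes, otherwise
   (n_samples, n_classes). *)
let lda_scores (clf : Lda.t) x_test =
  Lda.decision_function clf ~x:x_test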

val fit : x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Fit LinearDiscriminantAnalysis model according to the given training data and parameters.

.. versionchanged:: 0.19 *store_covariance* has been moved to main constructor.

.. versionchanged:: 0.19 *tol* has been moved to main constructor.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Training data.

y : array-like of shape (n_samples,)
    Target values.

val fit_transform : ?y:[> `ArrayLike ] Np.Obj.t -> ?fit_params:(string * Py.Object.t) list -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Fit to data, then transform it.

Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
----------
X : {array-like, sparse matrix, dataframe} of shape (n_samples, n_features)

y : ndarray of shape (n_samples,), default=None
    Target values.

**fit_params : dict
    Additional fit parameters.

Returns
-------
X_new : ndarray of shape (n_samples, n_features_new)
    Transformed array.
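Since `n_components` only affects the transform step, a hedged OCaml sketch of dimensionality reduction combines `create ~n_components` with `fit_transform` (helper name illustrative; module path assumed as in the earlier example):

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: fit LDA with one discriminative component and project
   the training data onto it in a single call. *)
let project_1d ~x ~y =
  let lda = Lda.create ~n_components:1 () in
  (* y is passed through the optional ?y argument; the result is an ndarray
     of shape (n_samples, 1). *)
  Lda.fit_transform lda ~x ~y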

val get_params : ?deep:bool -> [> tag ] Obj.t -> Dict.t

Get parameters for this estimator.

Parameters
----------
deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
-------
params : mapping of string to any
    Parameter names mapped to their values.

val predict : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Predict class labels for samples in X.

Parameters
----------
X : array-like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns
-------
C : array, shape (n_samples,)
    Predicted class label per sample.

val predict_log_proba : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Estimate log probability.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Input data.

Returns
-------
C : ndarray of shape (n_samples, n_classes)
    Estimated log probabilities.

val predict_proba : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Estimate probability.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Input data.

Returns
-------
C : ndarray of shape (n_samples, n_classes)
    Estimated probabilities.
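A short OCaml sketch covering both probability methods, under the same assumptions as the earlier examples (already-fitted `clf`, illustrative helper name):

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: class-membership probabilities and their logs for an
   already-fitted model; both results have shape (n_samples, n_classes). *)
let class_probabilities (clf : Lda.t) x_test =
  let proba = Lda.predict_proba clf ~x:x_test in
  let log_proba = Lda.predict_log_proba clf ~x:x_test in
  proba, log_proba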

val score : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> float

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True labels for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
-------
score : float
    Mean accuracy of self.predict(X) w.r.t. y.
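In the OCaml binding `sample_weight` is optional and can simply be omitted; a minimal sketch (helper name illustrative, module path assumed as before):

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: mean accuracy of an already-fitted model on held-out
   data; score returns a plain OCaml float. *)
let report_accuracy (clf : Lda.t) ~x_test ~y_test =
  let acc = Lda.score clf ~x:x_test ~y:y_test in
  Printf.printf "test accuracy: %.3f\n" acc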

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters
----------
**params : dict
    Estimator parameters.

Returns
-------
self : object
    Estimator instance.
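Both parameter methods go through Python values: `get_params` returns a `Dict.t` and `set_params` takes an association list of raw `Py.Object.t`. A hedged sketch, assuming the standard pyml conversions such as `Py.String.of_string`:

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: switch the estimator to the 'lsqr' solver with
   automatic shrinkage, then read the parameters back as a Dict.t. *)
let tweak_params (clf : Lda.t) =
  let clf =
    Lda.set_params clf
      ~params:[ "solver", Py.String.of_string "lsqr";
                "shrinkage", Py.String.of_string "auto" ]
  in
  let _params = Lda.get_params clf ~deep:true in
  clf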

val transform : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Project data to maximize class separation.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Input data.

Returns
-------
X_new : ndarray of shape (n_samples, n_components)
    Transformed data.

val coef_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute coef_: get value or raise Not_found if None.

val coef_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute coef_: get value as an option.
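Each attribute accessor in this section comes in two flavours: the plain accessor raises `Not_found` when the underlying attribute is missing or None, while the `_opt` variant returns an option. A small sketch of the option-based pattern (helper name illustrative, module path assumed as in the examples above):

module Lda = Sklearn.Discriminant_analysis.LinearDiscriminantAnalysis

(* Illustrative helper: read the learned weights without risking Not_found. *)
let weights_or_warn (clf : Lda.t) =
  match Lda.coef_opt clf with
  | Some coef -> Some coef
  | None ->
    prerr_endline "coef_ is not available (has the model been fitted?)";
    None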

val intercept_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute intercept_: get value or raise Not_found if None.

val intercept_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute intercept_: get value as an option.

val covariance_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute covariance_: get value or raise Not_found if None.

val covariance_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute covariance_: get value as an option.

val explained_variance_ratio_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute explained_variance_ratio_: get value or raise Not_found if None.

val explained_variance_ratio_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute explained_variance_ratio_: get value as an option.

val means_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute means_: get value or raise Not_found if None.

val means_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute means_: get value as an option.

val priors_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute priors_: get value or raise Not_found if None.

val priors_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute priors_: get value as an option.

val scalings_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute scalings_: get value or raise Not_found if None.

val scalings_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute scalings_: get value as an option.

val xbar_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute xbar_: get value or raise Not_found if None.

val xbar_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute xbar_: get value as an option.

val classes_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute classes_: get value or raise Not_found if None.

val classes_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute classes_: get value as an option.

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
