package sklearn

type tag = [ `NuSVC ]
type t = [ `BaseEstimator | `BaseLibSVM | `BaseSVC | `ClassifierMixin | `NuSVC | `Object ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_classifier : t -> [ `ClassifierMixin ] Obj.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val as_lib_svm : t -> [ `BaseLibSVM ] Obj.t
val as_svc : t -> [ `BaseSVC ] Obj.t
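
The `as_*` casts above only reinterpret the wrapper's type tag so a NuSVC value can be handed to code expecting a more general view; nothing is copied or converted. A minimal sketch, assuming this module is in scope:

    (* View the same underlying Python object as a classifier, an estimator,
       or the raw Py.Object.t; only the phantom tag changes. *)
    let use_generic_views (model : t) =
      let _clf = as_classifier model in
      let _est = as_estimator model in
      let _raw = to_pyobject model in
      ()
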
val create : ?nu:float -> ?kernel:[ `Linear | `Poly | `Rbf | `Sigmoid | `Precomputed ] -> ?degree:int -> ?gamma:[ `Scale | `Auto | `F of float ] -> ?coef0:float -> ?shrinking:bool -> ?probability:bool -> ?tol:float -> ?cache_size:float -> ?class_weight:[ `Balanced | `DictIntToFloat of (int * float) list ] -> ?verbose:int -> ?max_iter:int -> ?decision_function_shape:[ `Ovo | `Ovr ] -> ?break_ties:bool -> ?random_state:int -> unit -> t

Nu-Support Vector Classification.

Similar to SVC but uses a parameter to control the number of support vectors.

The implementation is based on libsvm.

Read more in the :ref:`User Guide <svm_classification>`.

Parameters ---------- nu : float, default=0.5 An upper bound on the fraction of margin errors (see :ref:`User Guide <nu_svc>`) and a lower bound of the fraction of support vectors. Should be in the interval (0, 1].

kernel : 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed', default='rbf' Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given it is used to precompute the kernel matrix.

degree : int, default=3 Degree of the polynomial kernel function ('poly'). Ignored by all other kernels.

gamma : 'scale', 'auto' or float, default='scale' Kernel coefficient for 'rbf', 'poly' and 'sigmoid'.

  • if ``gamma='scale'`` (default) is passed then it uses 1 / (n_features * X.var()) as value of gamma,
  • if 'auto', uses 1 / n_features.

.. versionchanged:: 0.22 The default value of ``gamma`` changed from 'auto' to 'scale'.

coef0 : float, default=0.0 Independent term in kernel function. It is only significant in 'poly' and 'sigmoid'.

shrinking : bool, default=True Whether to use the shrinking heuristic. See the :ref:`User Guide <shrinking_svm>`.

probability : bool, default=False Whether to enable probability estimates. This must be enabled prior to calling `fit`, will slow down that method as it internally uses 5-fold cross-validation, and `predict_proba` may be inconsistent with `predict`. Read more in the :ref:`User Guide <scores_probabilities>`.

tol : float, default=1e-3 Tolerance for stopping criterion.

cache_size : float, default=200 Specify the size of the kernel cache (in MB).

class_weight : dict, 'balanced', default=None Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The 'balanced' mode uses the values of y to automatically adjust weights inversely proportional to class frequencies as ``n_samples / (n_classes * np.bincount(y))``.

verbose : bool, default=False Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.

max_iter : int, default=-1 Hard limit on iterations within solver, or -1 for no limit.

decision_function_shape : 'ovo', 'ovr', default='ovr' Whether to return a one-vs-rest ('ovr') decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one ('ovo') decision function of libsvm which has shape (n_samples, n_classes * (n_classes - 1) / 2). However, one-vs-one ('ovo') is always used as multi-class strategy. The parameter is ignored for binary classification.

.. versionchanged:: 0.19 decision_function_shape is 'ovr' by default.

.. versionadded:: 0.17 *decision_function_shape='ovr'* is recommended.

.. versionchanged:: 0.17 Deprecated *decision_function_shape='ovo' and None*.

break_ties : bool, default=False If true, ``decision_function_shape='ovr'``, and number of classes > 2, :term:`predict` will break ties according to the confidence values of :term:`decision_function`; otherwise the first class among the tied classes is returned. Please note that breaking ties comes at a relatively high computational cost compared to a simple predict.

.. versionadded:: 0.22

random_state : int or RandomState instance, default=None Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when `probability` is False. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`.
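
For reference, this is what a `create` call looks like from OCaml using the polymorphic variants in the signature above; a minimal sketch setting only a few of the optional parameters:

    (* Build an estimator with nu = 0.3, an RBF kernel with an explicit gamma,
       balanced class weights and tie-breaking enabled. *)
    let clf =
      create
        ~nu:0.3
        ~kernel:`Rbf
        ~gamma:(`F 0.1)
        ~class_weight:`Balanced
        ~break_ties:true
        ~random_state:0
        ()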

Attributes ---------- support_ : ndarray of shape (n_SV,) Indices of support vectors.

support_vectors_ : ndarray of shape (n_SV, n_features) Support vectors.

n_support_ : ndarray of shape (n_class,), dtype=int32 Number of support vectors for each class.

dual_coef_ : ndarray of shape (n_class-1, n_SV) Dual coefficients of the support vector in the decision function (see :ref:`sgd_mathematical_formulation`), multiplied by their targets. For multiclass, coefficient for all 1-vs-1 classifiers. The layout of the coefficients in the multiclass case is somewhat non-trivial. See the :ref:`multi-class section of the User Guide <svm_multi_class>` for details.

coef_ : ndarray of shape (n_class * (n_class-1) / 2, n_features) Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel.

`coef_` is a readonly property derived from `dual_coef_` and `support_vectors_`.

intercept_ : ndarray of shape (n_class * (n_class-1) / 2,) Constants in decision function.

classes_ : ndarray of shape (n_classes,) The unique class labels.

fit_status_ : int 0 if correctly fitted, 1 if the algorithm did not converge.

probA_ : ndarray of shape (n_class * (n_class-1) / 2,) probB_ : ndarray of shape (n_class * (n_class-1) / 2,) If `probability=True`, it corresponds to the parameters learned in Platt scaling to produce probability estimates from decision values. If `probability=False`, it's an empty array. Platt scaling uses the logistic function ``1 / (1 + exp(decision_value * probA_ + probB_))`` where ``probA_`` and ``probB_`` are learned from the dataset [2]. For more information on the multiclass case and training procedure see section 8 of [1].

class_weight_ : ndarray of shape (n_class,) Multipliers of parameter C of each class. Computed based on the ``class_weight`` parameter.

shape_fit_ : tuple of int of shape (n_dimensions_of_X,) Array dimensions of training vector ``X``.

Examples --------

>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import NuSVC
>>> clf = make_pipeline(StandardScaler(), NuSVC())
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()), ('nusvc', NuSVC())])
>>> print(clf.predict([[-0.8, -1]]))
[1]
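
A rough OCaml counterpart of the doctest above, without the pipeline and scaler. The ndarray constructors `Np.matrixf` and `Np.vectori` are assumed helpers from the Np bindings and may be named differently in your version:

    let () =
      (* Four 2-D training points with labels 1 and 2
         (Np.matrixf / Np.vectori are assumed ndarray constructors). *)
      let x = Np.matrixf [| [| -1.; -1. |]; [| -2.; -1. |]; [| 1.; 1. |]; [| 2.; 1. |] |] in
      let y = Np.vectori [| 1; 1; 2; 2 |] in
      let clf = fit ~x ~y (create ()) in
      (* Predict a new sample; printing the returned ndarray needs a printer
         from the Np bindings, so only the fitted model is printed here. *)
      let _pred = predict ~x:(Np.matrixf [| [| -0.8; -1. |] |]) clf in
      print_endline (show clf)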

See also -------- SVC Support Vector Machine for classification using libsvm.

LinearSVC Scalable linear Support Vector Machine for classification using liblinear.

References ---------- .. [1] `LIBSVM: A Library for Support Vector Machines <http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>`_

.. [2] `Platt, John (1999). 'Probabilistic outputs for support vector machines and comparison to regularized likelihood methods.' <http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639>`_

val decision_function : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Evaluates the decision function for the samples in X.

Parameters ---------- X : array-like of shape (n_samples, n_features)

Returns ------- X : ndarray of shape (n_samples, n_classes * (n_classes-1) / 2) Returns the decision function of the sample for each class in the model. If decision_function_shape='ovr', the shape is (n_samples, n_classes).

Notes ----- If decision_function_shape='ovo', the function values are proportional to the distance of the samples X to the separating hyperplane. If the exact distances are required, divide the function values by the norm of the weight vector (``coef_``). See also `this question <https://stats.stackexchange.com/questions/14876/interpreting-distance-from-hyperplane-in-svm>`_ for further details. If decision_function_shape='ovr', the decision function is a monotonic transformation of the ovo decision function.
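
A minimal sketch of calling it from OCaml, assuming `clf` is a fitted NuSVC and `x_test` an array-like of test samples:

    (* The result has shape (n_samples, n_classes) with the default
       decision_function_shape:`Ovr, or (n_samples, n_classes * (n_classes - 1) / 2)
       with `Ovo. *)
    let margins clf x_test = decision_function ~x:x_test clf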

val fit : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Fit the SVM model according to the given training data.

Parameters ---------- X : array-like, sparse matrix of shape (n_samples, n_features) or (n_samples, n_samples) Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel='precomputed', the expected shape of X is (n_samples, n_samples).

y : array-like of shape (n_samples,) Target values (class labels in classification, real numbers in regression)

sample_weight : array-like of shape (n_samples,), default=None Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points.

Returns ------- self : object

Notes ----- If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied.

If X is a dense array, then the other methods will not support sparse matrices as input.
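
A sketch of a weighted fit; `Np.vectorf` is an assumed helper for building a float ndarray of weights and may be named differently in your Np bindings:

    (* Give the last two samples twice the weight of the first two. *)
    let weighted_fit clf x y =
      let sample_weight = Np.vectorf [| 1.; 1.; 2.; 2. |] in
      fit ~sample_weight ~x ~y clf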

val get_params : ?deep:bool -> [> tag ] Obj.t -> Dict.t

Get parameters for this estimator.

Parameters ---------- deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns ------- params : mapping of string to any Parameter names mapped to their values.

val predict : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Perform classification on samples in X.

For a one-class model, +1 or -1 is returned.

Parameters ---------- X : array-like, sparse matrix of shape (n_samples, n_features) or (n_samples_test, n_samples_train) For kernel='precomputed', the expected shape of X is (n_samples_test, n_samples_train).

Returns ------- y_pred : ndarray of shape (n_samples,) Class labels for samples in X.

val score : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> float

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters ---------- X : array-like of shape (n_samples, n_features) Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X.

sample_weight : array-like of shape (n_samples,), default=None Sample weights.

Returns ------- score : float Mean accuracy of self.predict(X) wrt. y.
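
Because `score` returns a plain float, it is convenient for quick accuracy checks; a minimal sketch assuming a fitted model and held-out `x_test`/`y_test` arrays:

    (* Mean accuracy of predictions on x_test against y_test. *)
    let report_accuracy clf x_test y_test =
      Printf.printf "test accuracy: %.3f\n" (score ~x:x_test ~y:y_test clf)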

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters ---------- **params : dict Estimator parameters.

Returns ------- self : object Estimator instance.
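
From OCaml, parameters are supplied to `set_params` as (name, Python value) pairs; the names mirror the scikit-learn keyword arguments and the values are built with pyml converters. A minimal sketch:

    (* Lower nu and enable probability estimates on an existing estimator. *)
    let retuned clf =
      set_params
        ~params:[ ("nu", Py.Float.of_float 0.25);
                  ("probability", Py.Bool.of_bool true) ]
        clf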

val support_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute support_: get value or raise Not_found if None.

val support_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute support_: get value as an option.
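
Each fitted attribute below comes in two flavours: the bare accessor raises Not_found when the attribute is unset (e.g. before `fit`), while the `_opt` variant returns an option. A minimal sketch:

    (* support_ would raise Not_found on an unfitted model;
       support_opt makes the absence explicit instead. *)
    let support_indices clf =
      match support_opt clf with
      | Some idx -> idx
      | None -> failwith "NuSVC model is not fitted yet"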

val support_vectors_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute support_vectors_: get value or raise Not_found if None.

val support_vectors_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute support_vectors_: get value as an option.

val n_support_ : t -> Py.Object.t

Attribute n_support_: get value or raise Not_found if None.

val n_support_opt : t -> Py.Object.t option

Attribute n_support_: get value as an option.

val dual_coef_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute dual_coef_: get value or raise Not_found if None.

val dual_coef_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute dual_coef_: get value as an option.

val coef_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute coef_: get value or raise Not_found if None.

val coef_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute coef_: get value as an option.

val intercept_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute intercept_: get value or raise Not_found if None.

val intercept_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute intercept_: get value as an option.

val classes_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute classes_: get value or raise Not_found if None.

val classes_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute classes_: get value as an option.

val fit_status_ : t -> int

Attribute fit_status_: get value or raise Not_found if None.

val fit_status_opt : t -> int option

Attribute fit_status_: get value as an option.

val probA_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute probA_: get value or raise Not_found if None.

val probA_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute probA_: get value as an option.

val class_weight_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute class_weight_: get value or raise Not_found if None.

val class_weight_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute class_weight_: get value as an option.

val shape_fit_ : t -> Py.Object.t

Attribute shape_fit_: get value or raise Not_found if None.

val shape_fit_opt : t -> Py.Object.t option

Attribute shape_fit_: get value as an option.

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
