package sklearn

type tag = [ `KFold ]
type t = [ `BaseCrossValidator | `KFold | `Object ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_cross_validator : t -> [ `BaseCrossValidator ] Obj.t
val create : ?n_splits:int -> ?shuffle:bool -> ?random_state:int -> unit -> t

K-Folds cross-validator

Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds (without shuffling by default).

Each fold is then used once as a validation set while the k - 1 remaining folds form the training set.

Read more in the :ref:`User Guide <cross_validation>`.

Parameters
----------
n_splits : int, default=5
    Number of folds. Must be at least 2.

    .. versionchanged:: 0.22
        ``n_splits`` default value changed from 3 to 5.

shuffle : bool, default=False
    Whether to shuffle the data before splitting into batches. Note that the samples within each split will not be shuffled.

random_state : int or RandomState instance, default=None
    When `shuffle` is True, `random_state` affects the ordering of the indices, which controls the randomness of each fold. Otherwise, this parameter has no effect. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`.

Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4])
>>> kf = KFold(n_splits=2)
>>> kf.get_n_splits(X)
2
>>> print(kf)
KFold(n_splits=2, random_state=None, shuffle=False)
>>> for train_index, test_index in kf.split(X):
...     print('TRAIN:', train_index, 'TEST:', test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [0 1] TEST: [2 3]

Notes
-----
The first ``n_samples % n_splits`` folds have size ``n_samples // n_splits + 1``, other folds have size ``n_samples // n_splits``, where ``n_samples`` is the number of samples.
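The fold-size rule above can be sketched in a few lines of plain Python. This is an illustrative computation of the documented formula, not scikit-learn's actual implementation:

```python
# Sketch of the documented fold-size rule: the first
# n_samples % n_splits folds get one extra sample.
def fold_sizes(n_samples, n_splits):
    base, extra = divmod(n_samples, n_splits)
    return [base + 1 if i < extra else base for i in range(n_splits)]

print(fold_sizes(10, 3))  # -> [4, 3, 3]
```

Note that the sizes always sum to ``n_samples``, so every sample lands in exactly one test fold.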

Randomized CV splitters may return different results for each call of split. You can make the results identical by setting `random_state` to an integer.
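Why an integer `random_state` makes results identical can be shown with a seeded shuffle. This is a stdlib-only sketch of the principle (a fixed seed fixes the index ordering), not the library's internals; `shuffled_fold_order` is a hypothetical helper:

```python
import random

# Hypothetical helper: seeding the RNG fixes the shuffled index order,
# which is what an integer random_state buys you.
def shuffled_fold_order(n_samples, random_state):
    indices = list(range(n_samples))
    random.Random(random_state).shuffle(indices)  # same seed -> same order
    return indices

first = shuffled_fold_order(8, random_state=42)
second = shuffled_fold_order(8, random_state=42)
assert first == second  # identical across calls, as the docstring promises
```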

See also
--------
StratifiedKFold : Takes group information into account to avoid building folds with imbalanced class distributions (for binary or multiclass classification tasks).

GroupKFold : K-fold iterator variant with non-overlapping groups.

RepeatedKFold : Repeats K-Fold n times.

val get_n_splits : ?x:Py.Object.t -> ?y:Py.Object.t -> ?groups:Py.Object.t -> [> tag ] Obj.t -> int

Returns the number of splitting iterations in the cross-validator

Parameters
----------
X : object
    Always ignored, exists for compatibility.

y : object
    Always ignored, exists for compatibility.

groups : object
    Always ignored, exists for compatibility.

Returns
-------
n_splits : int
    Returns the number of splitting iterations in the cross-validator.

val split : ?y:[> `ArrayLike ] Np.Obj.t -> ?groups:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> ([> `ArrayLike ] Np.Obj.t * [> `ArrayLike ] Np.Obj.t) Stdlib.Seq.t

Generate indices to split data into training and test set.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and n_features is the number of features.

y : array-like of shape (n_samples,), default=None
    The target variable for supervised learning problems.

groups : array-like of shape (n_samples,), default=None
    Group labels for the samples used while splitting the dataset into train/test set.

Yields
------
train : ndarray
    The training set indices for that split.

test : ndarray
    The testing set indices for that split.
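The documented split semantics (consecutive folds, no shuffling, each index appearing in exactly one test fold) can be sketched with a small stdlib-only generator. This is an illustration of the behavior described above, not scikit-learn's code; it yields plain lists where the library yields ndarrays:

```python
# Sketch of the documented split semantics: k consecutive folds,
# the first n % k folds one sample larger; train is the complement.
def kfold_split(n_samples, n_splits):
    indices = list(range(n_samples))
    base, extra = divmod(n_samples, n_splits)
    start = 0
    for i in range(n_splits):
        stop = start + base + (1 if i < extra else 0)
        test = indices[start:stop]            # one consecutive fold
        train = indices[:start] + indices[stop:]  # everything else
        yield train, test
        start = stop

for train, test in kfold_split(4, 2):
    print("TRAIN:", train, "TEST:", test)
# TRAIN: [2, 3] TEST: [0, 1]
# TRAIN: [0, 1] TEST: [2, 3]
```

This reproduces the index pattern shown in the Examples section above for ``n_splits=2`` on four samples.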

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.