package sklearn

type tag = [
  1. | `LeavePGroupsOut
]
type t = [ `BaseCrossValidator | `LeavePGroupsOut | `Object ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_cross_validator : t -> [ `BaseCrossValidator ] Obj.t
val create : int -> t

Leave P Group(s) Out cross-validator

Provides train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain-specific stratifications of the samples as integers.

For instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits.

The difference between LeavePGroupsOut and LeaveOneGroupOut is that the former builds the test sets with all the samples assigned to ``p`` different values of the groups, while the latter builds each test set from the samples assigned to a single group.

Read more in the scikit-learn User Guide on cross-validation.

Parameters
----------
n_groups : int
    Number of groups (``p``) to leave out in the test split.

Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import LeavePGroupsOut
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> y = np.array([1, 2, 1])
>>> groups = np.array([1, 2, 3])
>>> lpgo = LeavePGroupsOut(n_groups=2)
>>> lpgo.get_n_splits(X, y, groups)
3
>>> lpgo.get_n_splits(groups=groups)  # 'groups' is always required
3
>>> print(lpgo)
LeavePGroupsOut(n_groups=2)
>>> for train_index, test_index in lpgo.split(X, y, groups):
...     print('TRAIN:', train_index, 'TEST:', test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [2] TEST: [0 1]
[[5 6]] [[1 2]
 [3 4]] [1] [1 2]
TRAIN: [1] TEST: [0 2]
[[3 4]] [[1 2]
 [5 6]] [2] [1 1]
TRAIN: [0] TEST: [1 2]
[[1 2]] [[3 4]
 [5 6]] [1] [2 1]

See also
--------
GroupKFold : K-fold iterator variant with non-overlapping groups.
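
The Python doctest above can be approximated through this binding. The sketch below is not taken from this page: the Sklearn.Model_selection.LeavePGroupsOut module path and the Np.matrixi / Np.vectori array constructors are assumptions about the surrounding packages, so adjust them to whatever your installed versions expose. get_n_splits and split are exercised after their own signatures below.

  (* A rough OCaml counterpart of the Python setup above.
     Assumed helpers (not documented on this page): the
     Sklearn.Model_selection.LeavePGroupsOut module path and the
     Np.matrixi / Np.vectori integer-array constructors. *)
  module LPGO = Sklearn.Model_selection.LeavePGroupsOut

  let () =
    let x = Np.matrixi [| [|1; 2|]; [|3; 4|]; [|5; 6|] |] in
    let y = Np.vectori [| 1; 2; 1 |] in
    let groups = Np.vectori [| 1; 2; 3 |] in
    let lpgo = LPGO.create 2 in          (* n_groups = 2 *)
    (* should echo something close to Python's repr:
       LeavePGroupsOut(n_groups=2) *)
    print_endline (LPGO.show lpgo);
    ignore (x, y, groups)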

val get_n_splits : ?x:Py.Object.t -> ?y:Py.Object.t -> ?groups:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> int

Returns the number of splitting iterations in the cross-validator

Parameters
----------
X : object
    Always ignored, exists for compatibility.

y : object
    Always ignored, exists for compatibility.

groups : array-like of shape (n_samples,)
    Group labels for the samples used while splitting the dataset into
    train/test set. This 'groups' parameter must always be specified to
    calculate the number of splits, though the other parameters can be
    omitted.

Returns
-------
n_splits : int
    Returns the number of splitting iterations in the cross-validator.
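
A minimal sketch of calling get_n_splits from OCaml, reflecting the note above that only the groups argument matters. The Sklearn.Model_selection path and the Np.vectori constructor are assumptions, not part of this page.

  (* Sketch: 'groups' alone determines the count; x and y may be omitted.
     Np.vectori is an assumed integer-vector constructor. *)
  let () =
    let groups = Np.vectori [| 1; 2; 3 |] in
    let lpgo = Sklearn.Model_selection.LeavePGroupsOut.create 2 in
    (* 3 distinct groups, leave 2 out each time: C(3, 2) = 3 splits *)
    Printf.printf "n_splits = %d\n"
      (Sklearn.Model_selection.LeavePGroupsOut.get_n_splits ~groups lpgo)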

val split : ?y:[> `ArrayLike ] Np.Obj.t -> ?groups:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> ([> `ArrayLike ] Np.Obj.t * [> `ArrayLike ] Np.Obj.t) Stdlib.Seq.t

Generate indices to split data into training and test set.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and
    n_features is the number of features.

y : array-like of shape (n_samples,), default=None
    The target variable for supervised learning problems.

groups : array-like of shape (n_samples,)
    Group labels for the samples used while splitting the dataset into
    train/test set.

Yields
------
train : ndarray
    The training set indices for that split.

test : ndarray
    The testing set indices for that split.
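
A hedged sketch of consuming the Stdlib.Seq.t of (train, test) index pairs returned by split. The Np.matrixi / Np.vectori constructors and Np.Obj.to_pyobject are assumed helpers that may be named differently in your Np version; Py.Object.to_string is the standard pyml conversion.

  (* Sketch: iterate over the lazy sequence of (train, test) index arrays,
     printing each pair roughly like the Python doctest does.
     Assumed helpers: Np.matrixi, Np.vectori, Np.Obj.to_pyobject. *)
  module LPGO = Sklearn.Model_selection.LeavePGroupsOut

  let () =
    let x = Np.matrixi [| [|1; 2|]; [|3; 4|]; [|5; 6|] |] in
    let y = Np.vectori [| 1; 2; 1 |] in
    let groups = Np.vectori [| 1; 2; 3 |] in
    let lpgo = LPGO.create 2 in
    LPGO.split ~x ~y ~groups lpgo
    |> Stdlib.Seq.iter (fun (train, test) ->
      Printf.printf "TRAIN: %s TEST: %s\n"
        (Py.Object.to_string (Np.Obj.to_pyobject train))
        (Py.Object.to_string (Np.Obj.to_pyobject test)))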

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.