package sklearn

type tag = [
  1. | `StratifiedShuffleSplit
]
type t = [ `BaseShuffleSplit | `Object | `StratifiedShuffleSplit ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_shuffle_split : t -> [ `BaseShuffleSplit ] Obj.t
val create : ?n_splits:int -> ?test_size:[ `I of int | `F of float ] -> ?train_size:[ `I of int | `F of float ] -> ?random_state:int -> unit -> t

Stratified ShuffleSplit cross-validator

Provides train/test indices to split data into train/test sets.

This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class.

Note: like the ShuffleSplit strategy, stratified random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.

Read more in the :ref:`User Guide <cross_validation>`.

Parameters
----------
n_splits : int, default=10
    Number of re-shuffling & splitting iterations.

test_size : float or int, default=None
    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If ``train_size`` is also None, it will be set to 0.1.

train_size : float or int, default=None
    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.

random_state : int or RandomState instance, default=None
    Controls the randomness of the training and testing indices produced. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`.

Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 0, 1, 1, 1])
>>> sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
>>> sss.get_n_splits(X, y)
5
>>> print(sss)
StratifiedShuffleSplit(n_splits=5, random_state=0, ...)
>>> for train_index, test_index in sss.split(X, y):
...     print('TRAIN:', train_index, 'TEST:', test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
TRAIN: [5 2 3] TEST: [4 1 0]
TRAIN: [5 1 4] TEST: [0 2 3]
TRAIN: [5 0 2] TEST: [4 3 1]
TRAIN: [4 1 0] TEST: [2 3 5]
TRAIN: [0 5 1] TEST: [3 4 2]
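The same construction through these OCaml bindings is sketched below. The module path used for the alias (Sklearn.Model_selection.StratifiedShuffleSplit) is an assumption about how the package is laid out and may need adjusting to your build.

(* Sketch: build the same splitter as the Python example above; the module
   path in the alias is an assumption. *)
module SSS = Sklearn.Model_selection.StratifiedShuffleSplit

let sss = SSS.create ~n_splits:5 ~test_size:(`F 0.5) ~random_state:0 ()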

val get_n_splits : ?x:Py.Object.t -> ?y:Py.Object.t -> ?groups:Py.Object.t -> [> tag ] Obj.t -> int

Returns the number of splitting iterations in the cross-validator

Parameters
----------
X : object
    Always ignored, exists for compatibility.

y : object
    Always ignored, exists for compatibility.

groups : object
    Always ignored, exists for compatibility.

Returns
-------
n_splits : int
    Returns the number of splitting iterations in the cross-validator.
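Because X, y and groups are all ignored, the binding can be called with the splitter alone. A minimal sketch, reusing the SSS alias and sss value from the sketch above:

(* Sketch: get_n_splits ignores ?x, ?y and ?groups and simply reports the
   configured number of iterations (5 for the splitter built above). *)
let () = assert (SSS.get_n_splits sss = 5)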

val split : ?groups:Py.Object.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> ([> `ArrayLike ] Np.Obj.t * [> `ArrayLike ] Np.Obj.t) Stdlib.Seq.t

Generate indices to split data into training and test set.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and n_features is the number of features.

    Note that providing ``y`` is sufficient to generate the splits and hence ``np.zeros(n_samples)`` may be used as a placeholder for ``X`` instead of actual training data.

y : array-like of shape (n_samples,) or (n_samples, n_labels)
    The target variable for supervised learning problems. Stratification is done based on the y labels.

groups : object
    Always ignored, exists for compatibility.

Yields
------
train : ndarray
    The training set indices for that split.

test : ndarray
    The testing set indices for that split.

Notes
-----
Randomized CV splitters may return different results for each call of split. You can make the results identical by setting `random_state` to an integer.
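A sketch of driving split from OCaml follows, reusing the SSS alias and sss value from the earlier sketches. The Np.matrixi and Np.vectori constructors are assumptions about the companion Np bindings used to build the `ArrayLike inputs; substitute whatever array builders your setup provides.

(* Sketch: enumerate the (train, test) index pairs produced by split.
   Np.matrixi / Np.vectori are assumed helpers for building integer arrays. *)
let x = Np.matrixi [| [|1; 2|]; [|3; 4|]; [|1; 2|]; [|3; 4|]; [|1; 2|]; [|3; 4|] |]
let y = Np.vectori [| 0; 0; 0; 1; 1; 1 |]

let folds = SSS.split ~x ~y sss |> List.of_seq
(* folds is a list of (train_indices, test_indices) pairs; with n_splits = 5
   it contains five randomized stratified splits. *)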

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.