package sklearn
Sources
md5=9069738571ec0c95f5a4b805b5e4f528
sha512=fbd3f6cc249bbb8e7620f3731471f794f1eae0b1b34152c3b7f3a9a3245e34cdcf15501f09c799c98b085472eb0d7a24151b3178940295fd9a2f7063ee8b5499
Module Model_selection.LeaveOneOut
Leave-One-Out cross-validator
Provides train/test indices to split data into train/test sets. Each sample is used once as a test set (singleton) while the remaining samples form the training set.
Note: ``LeaveOneOut()`` is equivalent to ``KFold(n_splits=n)`` and ``LeavePOut(p=1)`` where ``n`` is the number of samples.
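To make the equivalence concrete, here is a minimal plain-Python sketch of the leave-one-out splitting rule (an illustration of the semantics, not the library's implementation): each sample becomes the singleton test set exactly once, which is what ``KFold(n_splits=n)`` and ``LeavePOut(p=1)`` also produce.

```python
# Hypothetical helper sketching leave-one-out semantics (not the
# scikit-learn implementation, which yields NumPy index arrays).
def leave_one_out_indices(n_samples):
    """Yield (train_indices, test_indices) pairs, one per sample."""
    for i in range(n_samples):
        train = [j for j in range(n_samples) if j != i]
        yield train, [i]

# With 3 samples we get 3 splits; each index is the test set once.
splits = list(leave_one_out_indices(3))
print(splits)
# → [([1, 2], [0]), ([0, 2], [1]), ([0, 1], [2])]
```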
Due to the high number of test sets (which is the same as the number of samples) this cross-validation method can be very costly. For large datasets one should favor :class:`KFold`, :class:`ShuffleSplit` or :class:`StratifiedKFold`.
Read more in the :ref:`User Guide <cross_validation>`.
Examples
--------

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
See also
--------

LeaveOneGroupOut
    For splitting the data according to explicit, domain-specific stratification of the dataset.
GroupKFold
    K-fold iterator variant with non-overlapping groups.
Returns the number of splitting iterations in the cross-validator.
Parameters
----------

X : array-like, shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and n_features is the number of features.

y : object
    Always ignored, exists for compatibility.

groups : object
    Always ignored, exists for compatibility.

Returns
-------

n_splits : int
    Returns the number of splitting iterations in the cross-validator.
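As a sketch of this contract (a hypothetical plain-Python stand-in, not the OCaml bindings or scikit-learn itself): for leave-one-out, the number of splitting iterations is simply the number of samples, and both ``y`` and ``groups`` are accepted but ignored.

```python
# Hypothetical helper mirroring the documented contract: one split
# per sample; y and groups exist only for interface compatibility.
def get_n_splits(X, y=None, groups=None):
    return len(X)  # n_samples

X = [[1, 2], [3, 4], [5, 6]]
print(get_n_splits(X))                                  # → 3
print(get_n_splits(X, y=[0, 1, 0], groups=[0, 0, 1]))   # → 3 (unchanged)
```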
val split :
?y:Ndarray.t ->
?groups:[ `Ndarray of Ndarray.t | `PyObject of Py.Object.t ] ->
x:Ndarray.t ->
t ->
Py.Object.t

Generate indices to split data into training and test sets.
Parameters
----------

X : array-like, shape (n_samples, n_features)
    Training data, where n_samples is the number of samples and n_features is the number of features.

y : array-like, of length n_samples
    The target variable for supervised learning problems.

groups : array-like, with shape (n_samples,), optional
    Group labels for the samples used while splitting the dataset into train/test set.

Yields
------

train : ndarray
    The training set indices for that split.

test : ndarray
    The testing set indices for that split.
Pretty-print the object to a formatter.