Leave-One-Out cross-validator
Provides train/test indices to split data into train/test sets. Each sample is used once as the test set (a singleton) while the remaining samples form the training set.
Note: ``LeaveOneOut()`` is equivalent to ``KFold(n_splits=n)`` and ``LeavePOut(p=1)`` where ``n`` is the number of samples.
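The equivalence with ``KFold(n_splits=n)`` can be checked directly: both yield identical splits when ``n_splits`` equals the number of samples. A minimal sketch (the 4-sample array is illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut

X = np.arange(8).reshape(4, 2)  # 4 samples, 2 features

# Collect (train, test) index pairs from both strategies.
loo_splits = [(list(tr), list(te)) for tr, te in LeaveOneOut().split(X)]
kf_splits = [(list(tr), list(te))
             for tr, te in KFold(n_splits=len(X)).split(X)]

# With n_splits equal to the sample count, KFold degenerates to LOO.
assert loo_splits == kf_splits
```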
Due to the high number of test sets (which is the same as the number of samples) this cross-validation method can be very costly. For large datasets one should favor :class:`KFold`, :class:`ShuffleSplit` or :class:`StratifiedKFold`.
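The cost argument can be made concrete: LOO requires one model fit per sample, whereas alternatives such as :class:`ShuffleSplit` fix the number of splits independently of dataset size. A small sketch (the 1000-sample array is illustrative):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, ShuffleSplit

X = np.arange(2000).reshape(1000, 2)  # 1000 samples

# LeaveOneOut yields as many splits (and model fits) as samples.
n_loo = LeaveOneOut().get_n_splits(X)

# ShuffleSplit caps the number of splits regardless of sample count.
ss = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
n_ss = ss.get_n_splits(X)
```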
Read more in the :ref:`User Guide <cross_validation>`.
Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
See also
--------
LeaveOneGroupOut : For splitting the data according to explicit,
    domain-specific stratification of the dataset.
GroupKFold : K-fold iterator variant with non-overlapping groups.