Leave-P-Out cross-validator
Provides train/test indices to split data into train/test sets. This results in testing on all distinct subsets of p samples, while the remaining n - p samples form the training set in each iteration.
Note: ``LeavePOut(p)`` is NOT equivalent to ``KFold(n_splits=n_samples // p)``, which creates non-overlapping test sets.
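For instance, with 4 samples, ``LeavePOut(2)`` tests on all 6 possible pairs of samples, whereas ``KFold(n_splits=2)`` yields only 2 disjoint test sets. A minimal illustration using ``get_n_splits``:

>>> from sklearn.model_selection import KFold, LeavePOut
>>> X = [[1], [2], [3], [4]]
>>> LeavePOut(2).get_n_splits(X)
6
>>> KFold(n_splits=2).get_n_splits(X)
2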
Due to the high number of iterations, which grows combinatorially with the number of samples, this cross-validation method can be very costly. For large datasets one should favor :class:`KFold`, :class:`StratifiedKFold` or :class:`ShuffleSplit`.
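The number of splits equals the binomial coefficient "n_samples choose p", so it grows very quickly. A quick back-of-the-envelope check with the standard library's ``math.comb``:

>>> from math import comb
>>> comb(100, 2)  # LeavePOut(2) on 100 samples already yields 4950 splits
4950
>>> comb(100, 5)  # p=5 pushes this past 75 million splits
75287520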
Read more in the :ref:`User Guide <cross_validation>`.
Parameters
----------
p : int
    Size of the test sets. Must be strictly less than the number of
    samples.
Examples
--------
>>> import numpy as np
>>> from sklearn.model_selection import LeavePOut
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([1, 2, 3, 4])
>>> lpo = LeavePOut(2)
>>> lpo.get_n_splits(X)
6
>>> print(lpo)
LeavePOut(p=2)
>>> for train_index, test_index in lpo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [1 3] TEST: [0 2]
TRAIN: [1 2] TEST: [0 3]
TRAIN: [0 3] TEST: [1 2]
TRAIN: [0 2] TEST: [1 3]
TRAIN: [0 1] TEST: [2 3]