Pipeline of transforms with a final estimator.
Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument.
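For illustration only (the class below is a hypothetical sketch, not part of scikit-learn), a minimal intermediate step satisfying the fit/transform contract can be written with ``BaseEstimator`` and ``TransformerMixin``:

>>> from sklearn.base import BaseEstimator, TransformerMixin
>>> class IdentityTransformer(BaseEstimator, TransformerMixin):
...     # fit learns nothing here; returning self is the required contract
...     def fit(self, X, y=None):
...         return self
...     # transform must return the transformed data; here it is passed
...     # through unchanged
...     def transform(self, X):
...         return X

Such a class can serve as any intermediate step, since ``TransformerMixin`` derives ``fit_transform`` from ``fit`` and ``transform``.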
The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.
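As a minimal sketch of step replacement (the step name ``anova`` refers to the pipeline built in the Examples section below):

>>> # disable the feature-selection step without rebuilding the pipeline
>>> anova_svm.set_params(anova='passthrough')  # doctest: +SKIP

Setting the step back to a transformer instance re-enables it.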
Read more in the :ref:`User Guide <pipeline>`.
.. versionadded:: 0.5
Parameters
----------
steps : list
    List of (name, transform) tuples (implementing fit/transform) that are
    chained, in the order in which they are chained, with the last object
    an estimator.
memory : None, str or object with the joblib.Memory interface, optional
    Used to cache the fitted transformers of the pipeline. By default,
    no caching is performed. If a string is given, it is the path to
    the caching directory. Enabling caching triggers a clone of the
    transformers before fitting. Therefore, the transformer instance
    given to the pipeline cannot be inspected directly. Use the
    attribute ``named_steps`` or ``steps`` to inspect estimators within
    the pipeline. Caching the transformers is advantageous when fitting
    is time consuming. A short caching sketch follows the parameter
    descriptions below.
verbose : bool, default=False
    If True, the time elapsed while fitting each step will be printed as
    it is completed.
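As a sketch of the ``memory`` option (the cache directory here is a throwaway temporary directory, and ``anova_filter`` and ``clf`` are the objects defined in the Examples section below):

>>> from tempfile import mkdtemp
>>> from shutil import rmtree
>>> cachedir = mkdtemp()  # directory holding the fitted-transformer cache
>>> cached_pipe = Pipeline([('anova', anova_filter), ('svc', clf)],
...                        memory=cachedir)  # doctest: +SKIP
>>> # remove the cache directory once it is no longer needed
>>> rmtree(cachedir)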
Attributes
----------
named_steps : bunch object, a dictionary with attribute access
    Read-only attribute to access any step by user given name.
    Keys are step names and values are the step estimators.
See Also
--------
sklearn.pipeline.make_pipeline : Convenience function for simplified
    pipeline construction.
Examples
--------
>>> from sklearn import svm
>>> from sklearn.datasets import make_classification
>>> from sklearn.feature_selection import SelectKBest
>>> from sklearn.feature_selection import f_regression
>>> from sklearn.pipeline import Pipeline
>>> # generate some data to play with
>>> X, y = make_classification(
...     n_informative=5, n_redundant=0, random_state=42)
>>> # ANOVA SVM-C
>>> anova_filter = SelectKBest(f_regression, k=5)
>>> clf = svm.SVC(kernel='linear')
>>> anova_svm = Pipeline([('anova', anova_filter), ('svc', clf)])
>>> # You can set the parameters using the names issued
>>> # For instance, fit using a k of 10 in the SelectKBest
>>> # and a parameter 'C' of the svm
>>> anova_svm.set_params(anova__k=10, svc__C=.1).fit(X, y)
Pipeline(steps=[('anova', SelectKBest(...)), ('svc', SVC(...))])
>>> prediction = anova_svm.predict(X)
>>> anova_svm.score(X, y)
0.83
>>> # getting the selected features chosen by anova_filter
>>> anova_svm['anova'].get_support()
array([False, False,  True,  True, False, False,  True,  True, False,
        True, False,  True,  True, False,  True, False,  True,  True,
       False, False])
>>> # Another way to get selected features chosen by anova_filter
>>> anova_svm.named_steps.anova.get_support()
array([False, False,  True,  True, False, False,  True,  True, False,
        True, False,  True,  True, False,  True, False,  True,  True,
       False, False])
>>> # Indexing can also be used to extract a sub-pipeline.
>>> sub_pipeline = anova_svm[:1]
>>> sub_pipeline
Pipeline(steps=[('anova', SelectKBest(...))])
>>> coef = anova_svm[-1].coef_
>>> anova_svm['svc'] is anova_svm[-1]
True
>>> coef.shape
(1, 10)
>>> sub_pipeline.inverse_transform(coef).shape
(1, 20)