package sklearn

type tag = [
  | `VotingRegressor
]
type t = [ `BaseEstimator | `MetaEstimatorMixin | `Object | `RegressorMixin | `TransformerMixin | `VotingRegressor ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_transformer : t -> [ `TransformerMixin ] Obj.t
val as_meta_estimator : t -> [ `MetaEstimatorMixin ] Obj.t
val as_regressor : t -> [ `RegressorMixin ] Obj.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val create : ?weights:[> `ArrayLike ] Np.Obj.t -> ?n_jobs:int -> ?verbose:int -> estimators:(string * [> `BaseEstimator ] Np.Obj.t) list -> unit -> t

Prediction voting regressor for unfitted estimators.

.. versionadded:: 0.21

A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction.

Read more in the :ref:`User Guide <voting_regressor>`.

Parameters
----------
estimators : list of (str, estimator) tuples
    Invoking the ``fit`` method on the ``VotingRegressor`` will fit clones of those original estimators that will be stored in the class attribute ``self.estimators_``. An estimator can be set to ``'drop'`` using ``set_params``.

.. versionchanged:: 0.21 ``'drop'`` is accepted.

.. deprecated:: 0.22 Using ``None`` to drop an estimator is deprecated in 0.22 and support will be dropped in 0.24. Use the string ``'drop'`` instead.

weights : array-like of shape (n_regressors,), default=None
    Sequence of weights (`float` or `int`) to weight the occurrences of predicted values before averaging. Uses uniform weights if `None`.

n_jobs : int, default=None
    The number of jobs to run in parallel for ``fit``. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.

verbose : bool, default=False
    If True, the time elapsed while fitting will be printed as it is completed.

Attributes
----------
estimators_ : list of regressors
    The collection of fitted sub-estimators as defined in ``estimators`` that are not 'drop'.

named_estimators_ : Bunch
    Attribute to access any fitted sub-estimators by name.

.. versionadded:: 0.20

See Also
--------
VotingClassifier : Soft Voting/Majority Rule classifier.

Examples
--------
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3  5.7 11.8 19.7 28.  40.3]
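The doctest above is Python, taken from the wrapped library. A rough OCaml equivalent using this module's ``create``, ``fit`` and ``predict`` is sketched below; the module paths of the member estimators, their ``as_estimator`` helpers, and the ``Np.matrixf``/``Np.vectorf``/``Np.Obj.to_pyobject`` helpers are assumptions about the surrounding packages, not verified signatures:

    (* Sketch only: assumes Sklearn.Linear_model.LinearRegression and
       Sklearn.Ensemble.RandomForestRegressor expose [create] and [as_estimator]
       like this module does, and that Np provides matrixf/vectorf constructors
       and Np.Obj.to_pyobject. *)
    open Sklearn.Ensemble

    let () =
      let r1 = Sklearn.Linear_model.LinearRegression.create () in
      let r2 = RandomForestRegressor.create ~n_estimators:10 () in
      let x =
        Np.matrixf
          [| [| 1.; 1. |]; [| 2.; 4. |]; [| 3.; 9. |];
             [| 4.; 16. |]; [| 5.; 25. |]; [| 6.; 36. |] |]
      in
      let y = Np.vectorf [| 2.; 6.; 12.; 20.; 30.; 42. |] in
      let er =
        VotingRegressor.create
          ~estimators:
            [ "lr", Sklearn.Linear_model.LinearRegression.as_estimator r1;
              "rf", RandomForestRegressor.as_estimator r2 ]
          ()
      in
      (* fit returns the fitted estimator, so calls chain with |> *)
      let predictions = er |> VotingRegressor.fit ~x ~y |> VotingRegressor.predict ~x in
      print_endline (Py.Object.to_string (Np.Obj.to_pyobject predictions))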

val fit : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Fit the estimators.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like of shape (n_samples,)
    Target values.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns
-------
self : object
    Fitted estimator.
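A minimal OCaml sketch of the optional ``sample_weight`` argument, continuing the construction example above (``Np.vectorf`` is again an assumed constructor):

    (* Sketch: give the last training point twice the weight of the others;
       [er], [x] and [y] are the values built in the construction example. *)
    let fitted =
      er
      |> VotingRegressor.fit
           ~sample_weight:(Np.vectorf [| 1.; 1.; 1.; 1.; 1.; 2. |])
           ~x ~y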

val fit_transform : ?y:[> `ArrayLike ] Np.Obj.t -> ?fit_params:(string * Py.Object.t) list -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
----------
X : {array-like, sparse matrix, dataframe} of shape (n_samples, n_features)

y : ndarray of shape (n_samples,), default=None
    Target values.

**fit_params : dict
    Additional fit parameters.

Returns
-------
X_new : ndarray of shape (n_samples, n_features_new)
    Transformed array.

val get_params : ?deep:bool -> [> tag ] Obj.t -> Py.Object.t

Get the parameters of an estimator from the ensemble.

Parameters
----------
deep : bool, default=True
    Setting it to True gets the various estimators and the parameters of the estimators as well.

val predict : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Predict regression target for X.

The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples.

Returns
-------
y : ndarray of shape (n_samples,)
    The predicted values.

val score : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> float

Return the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, shape = (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
-------
score : float
    R^2 of self.predict(X) wrt. y.

Notes
-----
The R^2 score used when calling ``score`` on a regressor uses ``multioutput='uniform_average'`` from version 0.23 to keep consistent with the default value of :func:`~sklearn.metrics.r2_score`. This influences the ``score`` method of all the multioutput regressors (except for :class:`~sklearn.multioutput.MultiOutputRegressor`).
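A minimal OCaml sketch of ``score``, reusing the fitted ensemble and training data from the examples above:

    (* Sketch: R^2 of the ensemble on its own training data; [fitted], [x]
       and [y] come from the fit example above. *)
    let () =
      let r2 = fitted |> VotingRegressor.score ~x ~y in
      Printf.printf "R^2 = %f\n" r2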

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of an estimator from the ensemble.

Valid parameter keys can be listed with `get_params()`.

Parameters
----------
**params : keyword arguments
    Specific parameters using e.g. `set_params(parameter_name=new_value)`. In addition to setting the parameters of the ensemble estimator, the individual estimators of the ensemble can also be set, or can be removed by setting them to 'drop'.
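Because ``set_params`` takes raw Python values on the OCaml side, dropping a member estimator means passing the Python string ``'drop'``; a hedged sketch:

    (* Sketch: remove the random-forest member by name; parameter values are
       plain Py.Object.t, so 'drop' is passed as a Python string.
       [er] is the instance from the construction example above. *)
    let er = VotingRegressor.set_params ~params:[ "rf", Py.String.of_string "drop" ] er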

val transform : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Return predictions for X for each estimator.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples.

Returns
-------
predictions : ndarray of shape (n_samples, n_regressors)
    Values predicted by each regressor.
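An OCaml sketch of ``transform``, which returns the members' predictions side by side instead of averaging them:

    (* Sketch: one column per (non-dropped) regressor, shape (n_samples, n_regressors);
       [fitted] and [x] are from the fit example above. *)
    let per_estimator_predictions = fitted |> VotingRegressor.transform ~x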

val estimators_ : t -> [ `Object | `RegressorMixin ] Np.Obj.t list

Attribute estimators_: get value or raise Not_found if None.

val estimators_opt : t -> [ `Object | `RegressorMixin ] Np.Obj.t list option

Attribute estimators_: get value as an option.

val named_estimators_ : t -> Dict.t

Attribute named_estimators_: get value or raise Not_found if None.

val named_estimators_opt : t -> Dict.t option

Attribute named_estimators_: get value as an option.
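The two accessor flavours differ only in how an unset attribute is reported; a sketch, assuming a fitted instance as in the examples above:

    (* Sketch: estimators_ raises Not_found when the attribute is absent
       (e.g. before fit), while estimators_opt returns None instead. *)
    let () =
      match VotingRegressor.estimators_opt fitted with
      | None -> print_endline "not fitted yet"
      | Some subs -> Printf.printf "fitted sub-estimators: %d\n" (List.length subs)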

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.
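Usage sketch for the printers: ``show``/``to_string`` return the repr-style string, while ``pp`` plugs into ``Format``:

    (* Sketch: both forms render the estimator's configuration, much like Python's repr;
       [er] is the instance built in the construction example. *)
    let () =
      print_endline (VotingRegressor.show er);
      Format.printf "%a@." VotingRegressor.pp er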