package sklearn

type tag = [
  1. | `RandomTreesEmbedding
]
type t = [ `BaseEnsemble | `BaseEstimator | `BaseForest | `MetaEstimatorMixin | `MultiOutputMixin | `Object | `RandomTreesEmbedding ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_multi_output : t -> [ `MultiOutputMixin ] Obj.t
val as_meta_estimator : t -> [ `MetaEstimatorMixin ] Obj.t
val as_forest : t -> [ `BaseForest ] Obj.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val as_ensemble : t -> [ `BaseEnsemble ] Obj.t
val create : ?n_estimators:int -> ?max_depth:int -> ?min_samples_split:[ `I of int | `F of float ] -> ?min_samples_leaf:[ `I of int | `F of float ] -> ?min_weight_fraction_leaf:float -> ?max_leaf_nodes:int -> ?min_impurity_decrease:float -> ?min_impurity_split:float -> ?sparse_output:bool -> ?n_jobs:int -> ?random_state:int -> ?verbose:int -> ?warm_start:bool -> unit -> t

An ensemble of totally random trees.

An unsupervised transformation of a dataset to a high-dimensional sparse representation. A datapoint is coded according to which leaf of each tree it is sorted into. Using a one-hot encoding of the leaves, this leads to a binary coding with as many ones as there are trees in the forest.

The dimensionality of the resulting representation is ``n_out <= n_estimators * max_leaf_nodes``. If ``max_leaf_nodes == None``, the number of leaf nodes is at most ``n_estimators * 2 ** max_depth``.
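Concretely, with ``n_estimators = 5`` and ``max_depth = 1`` (the settings used in the example below), each tree has at most ``2 ** 1 = 2`` leaves, so the embedding has at most ``5 * 2 = 10`` dimensions, matching the 10 columns in the doctest output.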

Read more in the :ref:`User Guide <random_trees_embedding>`.

Parameters
----------
n_estimators : int, default=100
    Number of trees in the forest.

    .. versionchanged:: 0.22
       The default value of ``n_estimators`` changed from 10 to 100 in 0.22.

max_depth : int, default=5
    The maximum depth of each tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.

min_samples_split : int or float, default=2
    The minimum number of samples required to split an internal node:

    • If int, then consider `min_samples_split` as the minimum number.
    • If float, then `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split.

    .. versionchanged:: 0.18
       Added float values for fractions.

min_samples_leaf : int or float, default=1
    The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least ``min_samples_leaf`` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

    • If int, then consider `min_samples_leaf` as the minimum number.
    • If float, then `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node.

    .. versionchanged:: 0.18
       Added float values for fractions.

min_weight_fraction_leaf : float, default=0.0
    The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

max_leaf_nodes : int, default=None
    Grow trees with ``max_leaf_nodes`` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, the number of leaf nodes is unlimited.

min_impurity_decrease : float, default=0.0
    A node will be split if this split induces a decrease of the impurity greater than or equal to this value.

    The weighted impurity decrease equation is the following::

        N_t / N * (impurity - N_t_R / N_t * right_impurity
                            - N_t_L / N_t * left_impurity)

    where ``N`` is the total number of samples, ``N_t`` is the number of samples at the current node, ``N_t_L`` is the number of samples in the left child, and ``N_t_R`` is the number of samples in the right child.

    ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum, if ``sample_weight`` is passed.

    .. versionadded:: 0.19
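    As a worked instance with illustrative numbers (not taken from scikit-learn's docs): a node holding ``N_t = 40`` of ``N = 100`` samples with ``impurity = 0.5``, sending ``N_t_L = 30`` samples to a left child of impurity 0.4 and ``N_t_R = 10`` to a right child of impurity 0.2, gives a weighted decrease of ``40/100 * (0.5 - 10/40 * 0.2 - 30/40 * 0.4) = 0.4 * 0.15 = 0.06``, so the split survives whenever ``min_impurity_decrease <= 0.06``.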

min_impurity_split : float, default=None
    Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.

    .. deprecated:: 0.19
       ``min_impurity_split`` has been deprecated in favor of ``min_impurity_decrease`` in 0.19. The default value of ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use ``min_impurity_decrease`` instead.

sparse_output : bool, default=True
    Whether or not to return a sparse CSR matrix, as default behavior, or to return a dense array compatible with dense pipeline operators.

n_jobs : int, default=None
    The number of jobs to run in parallel. :meth:`fit`, :meth:`transform`, :meth:`decision_path` and :meth:`apply` are all parallelized over the trees. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.

random_state : int or RandomState, default=None
    Controls the generation of the random `y` used to fit the trees and the draw of the splits for each feature at the trees' nodes. See :term:`Glossary <random_state>` for details.

verbose : int, default=0
    Controls the verbosity when fitting and predicting.

warm_start : bool, default=False
    When set to ``True``, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See :term:`the Glossary <warm_start>`.

Attributes
----------
estimators_ : list of DecisionTreeClassifier
    The collection of fitted sub-estimators.

References
----------
.. [1] P. Geurts, D. Ernst, and L. Wehenkel, 'Extremely randomized trees', Machine Learning, 63(1), 3-42, 2006.
.. [2] Moosmann, F., Triggs, B., and Jurie, F., 'Fast discriminative visual codebooks using randomized clustering forests', NIPS 2007.

Examples
--------
>>> from sklearn.ensemble import RandomTreesEmbedding
>>> X = [[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]]
>>> random_trees = RandomTreesEmbedding(
...     n_estimators=5, random_state=0, max_depth=1).fit(X)
>>> X_sparse_embedding = random_trees.transform(X)
>>> X_sparse_embedding.toarray()
array([[0., 1., 1., 0., 1., 0., 0., 1., 1., 0.],
       [0., 1., 1., 0., 1., 0., 0., 1., 1., 0.],
       [0., 1., 0., 1., 0., 1., 0., 1., 0., 1.],
       [1., 0., 1., 0., 1., 0., 1., 0., 1., 0.],
       [0., 1., 1., 0., 1., 0., 0., 1., 1., 0.]])
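The same flow through this OCaml binding might look as follows. This is a minimal sketch, not a verified doctest: the module path ``Sklearn.Ensemble.RandomTreesEmbedding`` and the ``Np.matrixf`` matrix constructor are assumptions about the surrounding library, while ``create``, ``fit``, ``transform`` and ``to_string`` are the vals declared on this page::

    open Sklearn.Ensemble

    let () =
      (* Five 2-D samples, mirroring the Python doctest above. *)
      let x = Np.matrixf [| [| 0.;  0.|]; [| 1.;  0.|]; [| 0.;  1.|];
                            [|-1.;  0.|]; [| 0.; -1.|] |] in
      (* Build and fit the estimator, then embed the same data. *)
      let rte =
        RandomTreesEmbedding.(
          create ~n_estimators:5 ~random_state:0 ~max_depth:1 ()
          |> fit ~x)
      in
      (* At most 5 trees * 2 leaves = 10 sparse output columns. *)
      let _x_sparse_embedding = RandomTreesEmbedding.transform ~x rte in
      print_endline (RandomTreesEmbedding.to_string rte)

The later sketches on this page reuse ``x`` and the fitted ``rte`` from this one.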

val get_item : index:Py.Object.t -> [> tag ] Obj.t -> Py.Object.t

Return the index'th estimator in the ensemble.

val iter : [> tag ] Obj.t -> Dict.t Stdlib.Seq.t

Return iterator over estimators in the ensemble.

val apply : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Apply trees in the forest to X, return leaf indices.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``.

Returns
-------
X_leaves : ndarray of shape (n_samples, n_estimators)
    For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
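A hedged continuation of the ``create`` sketch above, reusing its assumed ``x`` and fitted ``rte``::

    (* One row per sample, one column per tree; entry (i, j) is the index
       of the leaf that sample i lands in within tree j. *)
    let leaves = RandomTreesEmbedding.apply ~x rte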

val decision_path : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [ `ArrayLike | `Object | `Spmatrix ] Np.Obj.t * [> `ArrayLike ] Np.Obj.t

Return the decision path in the forest.

.. versionadded:: 0.18

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted into a sparse ``csr_matrix``.

Returns
-------
indicator : sparse matrix of shape (n_samples, n_nodes)
    A node indicator matrix where non-zero elements indicate that the sample goes through the node. The matrix is in CSR format.

n_nodes_ptr : ndarray of shape (n_estimators + 1,)
    The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator values for the i-th estimator.
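Continuing the ``create`` sketch, the OCaml binding returns the pair directly::

    (* indicator is the CSR node-indicator matrix; n_nodes_ptr holds the
       per-tree column offsets described above, i.e. columns
       n_nodes_ptr[i] .. n_nodes_ptr[i+1] - 1 belong to the i-th tree. *)
    let indicator, n_nodes_ptr = RandomTreesEmbedding.decision_path ~x rte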

val fit : ?y:Py.Object.t -> ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Fit estimator.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The input samples. Use ``dtype=np.float32`` for maximum efficiency. Sparse matrices are also supported; use sparse ``csc_matrix`` for maximum efficiency.

y : Ignored
    Not used, present for API consistency by convention.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.

Returns
-------
self : object

val fit_transform : ?y:Py.Object.t -> ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Fit estimator and transform dataset.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Input data used to build forests. Use ``dtype=np.float32`` for maximum efficiency.

y : Ignored
    Not used, present for API consistency by convention.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.

Returns
-------
X_transformed : sparse matrix of shape (n_samples, n_out)
    Transformed dataset.
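As a sketch under the same assumptions as the ``create`` example, this collapses the fit-then-transform round trip into one call::

    (* Equivalent to fit ~x followed by transform ~x on the same data. *)
    let x_transformed =
      RandomTreesEmbedding.(
        create ~n_estimators:5 ~random_state:0 ~max_depth:1 ()
        |> fit_transform ~x)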

val get_params : ?deep:bool -> [> tag ] Obj.t -> Dict.t

Get parameters for this estimator.

Parameters
----------
deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
-------
params : mapping of string to any
    Parameter names mapped to their values.

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters
----------
**params : dict
    Estimator parameters.

Returns
-------
self : object
    Estimator instance.
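Since parameter values cross into Python untyped, the binding's ``?params`` takes them as raw ``Py.Object.t``. A hedged continuation of the ``create`` sketch, building a value with pyml's ``Py.Int.of_int``::

    (* Grow the ensemble setting; the key matches the Python keyword. *)
    let rte =
      RandomTreesEmbedding.set_params
        ~params:[ "n_estimators", Py.Int.of_int 200 ] rte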

val transform : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Transform dataset.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Input data to be transformed. Use ``dtype=np.float32`` for maximum efficiency. Sparse matrices are also supported; use sparse ``csr_matrix`` for maximum efficiency.

Returns
-------
X_transformed : sparse matrix of shape (n_samples, n_out)
    Transformed dataset.

val estimators_ : t -> Py.Object.t

Attribute estimators_: get value or raise Not_found if None.

val estimators_opt : t -> Py.Object.t option

Attribute estimators_: get value as an option.

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.
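Closing out the ``create`` sketch, these vals give two equivalent ways to render a fitted estimator (``show`` mirrors ``to_string`` per the docstrings above)::

    (* As a plain string ... *)
    let () = print_endline (RandomTreesEmbedding.show rte)
    (* ... or through a Format formatter, using pp with "%a". *)
    let () = Format.printf "%a@." RandomTreesEmbedding.pp rte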