Encode categorical features as an integer array.
The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are converted to ordinal integers. This results in a single column of integers (0 to n_categories - 1) per feature.
Read more in the :ref:`User Guide <preprocessing_categorical_features>`.
.. versionadded:: 0.20
Parameters
----------
categories : 'auto' or a list of array-like, default='auto'
    Categories (unique values) per feature:
    - 'auto' : Determine categories automatically from the training data.
    - list : ``categories[i]`` holds the categories expected in the ith
      column. The passed categories should not mix strings and numeric
      values, and should be sorted in case of numeric values.

    The used categories can be found in the ``categories_`` attribute.
dtype : number type, default np.float64
    Desired dtype of output.
Attributes
----------
categories_ : list of arrays
    The categories of each feature determined during fitting (in order of
    the features in X and corresponding with the output of ``transform``).
See Also
--------
sklearn.preprocessing.OneHotEncoder : Performs a one-hot encoding of
    categorical features.
sklearn.preprocessing.LabelEncoder : Encodes target labels with values
    between 0 and n_classes-1.
Examples
--------
Given a dataset with two features, we let the encoder find the unique
values per feature and transform the data to an ordinal encoding.
>>> from sklearn.preprocessing import OrdinalEncoder
>>> enc = OrdinalEncoder()
>>> X = [['Male', 1], ['Female', 3], ['Female', 2]]
>>> enc.fit(X)
OrdinalEncoder()
>>> enc.categories_
[array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)]
>>> enc.transform([['Female', 3], ['Male', 1]])
array([[0., 2.],
       [1., 0.]])

>>> enc.inverse_transform([[1, 0], [0, 1]])
array([['Male', 1],
       ['Female', 2]], dtype=object)
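
As a minimal sketch, the expected categories can also be passed explicitly
per feature rather than inferred via ``'auto'``; the categories and sample
values below are illustrative only.

>>> # Provide the per-feature categories up front (illustrative values)
>>> enc = OrdinalEncoder(categories=[['Female', 'Male'], [1, 2, 3]])
>>> _ = enc.fit([['Male', 1], ['Female', 3]])
>>> enc.transform([['Female', 2]])
array([[0., 1.]])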