class feature_engine.encoding.DecisionTreeEncoder(encoding_method='arbitrary', cv=3, scoring='neg_mean_squared_error', param_grid=None, regression=True, random_state=None, variables=None, ignore_format=False)[source]#

The DecisionTreeEncoder() encodes categorical variables with predictions of a decision tree.

The encoder first fits a decision tree using a single feature and the target (fit), and then replaces the values of the original feature with the predictions of the tree (transform). The transformer trains one decision tree per feature to encode.

The DecisionTreeEncoder() will encode only categorical variables by default (type ‘object’ or ‘categorical’). You can pass a list of variables to encode; otherwise, the encoder will find and encode all categorical variables.

With ignore_format=True you have the option to encode numerical variables as well. In this case, you can either enter the list of variables to encode, or the transformer will automatically select all variables.

More details in the User Guide.

Parameters

encoding_method: str, default=’arbitrary’

The method used to encode the categories to numerical values before fitting the decision tree.

‘ordered’: the categories are numbered in ascending order according to the target mean value per category.

‘arbitrary’: categories are numbered arbitrarily.

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default 5-fold cross validation;

  • int, to specify the number of folds in a (Stratified)KFold;

  • a CV splitter;

  • an iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False, so the splits will be the same across calls. For more details check Scikit-learn’s cross_validate documentation.

scoring: str, default=’neg_mean_squared_error’

The performance metric used to select the best decision tree during the grid search. It must be a metric supported by sklearn.metrics. See the DecisionTreeRegressor or DecisionTreeClassifier model evaluation documentation for more options.

param_grid: dictionary, default=None

The hyperparameters for the decision tree to test with a grid search. The param_grid can contain any of the permitted hyperparameters for Scikit-learn’s DecisionTreeRegressor() or DecisionTreeClassifier(). If None, then param_grid will optimise the ‘max_depth’ over [1, 2, 3, 4].

regression: boolean, default=True

Indicates whether the encoder should train a regression or a classification decision tree.

random_state: int, default=None

The random_state to initialise the training of the decision tree. It is one of the parameters of Scikit-learn’s DecisionTreeRegressor() or DecisionTreeClassifier(). For reproducibility, it is recommended to set random_state to an integer.

variables: list, default=None

The list of categorical variables that will be encoded. If None, the encoder will find and transform all variables of type object or categorical by default. You can also make the transformer accept numerical variables, see the next parameter.

ignore_format: bool, default=False

Whether the format in which the categorical variables are cast should be ignored. If False, the encoder will automatically select variables of type object or categorical, or check that the variables entered by the user are of type object or categorical. If True, the encoder will select all variables or accept all variables entered by the user, including those cast as numeric.


Attributes

encoder_:

sklearn Pipeline containing the ordinal encoder and the decision tree.

variables_:

The group of variables that will be transformed.

feature_names_in_:

List with the names of features seen during fit.

n_features_in_:

The number of features in the train set used in fit.

See also

feature_engine.discretisation.DecisionTreeDiscretiser

feature_engine.encoding.RareLabelEncoder

Notes

The authors designed this method originally to work with numerical variables. We can replace numerical variables by the predictions of a decision tree utilising the DecisionTreeDiscretiser(). Here we extend this functionality to work also with categorical variables.

NaN values are introduced when encoding categories that were not present in the training dataset. If this happens, try grouping infrequent categories with the RareLabelEncoder().



References

Niculescu-Mizil, et al. “Winning the KDD Cup Orange Challenge with Ensemble Selection”. JMLR: Workshop and Conference Proceedings 7: 23-34. KDD 2009.


Examples

>>> import pandas as pd
>>> from feature_engine.encoding import DecisionTreeEncoder
>>> X = pd.DataFrame(dict(x1 = [1,2,3,4,5], x2 = ["b", "b", "b", "a", "a"]))
>>> y = pd.Series([2.2,4, 1.5, 3.2, 1.1])
>>> dte = DecisionTreeEncoder(cv=2)
>>>, y)
>>> dte.transform(X)
   x1        x2
0   1  2.566667
1   2  2.566667
2   3  2.566667
3   4  2.150000
4   5  2.150000

You can also use it for classification by setting regression=False.

>>> y = pd.Series([0,1,1,1,0])
>>> dte = DecisionTreeEncoder(regression=False, cv=2)
>>>, y)
>>> dte.transform(X)
   x1        x2
0   1  0.666667
1   2  0.666667
2   3  0.666667
3   4  0.500000
4   5  0.500000



Methods

fit:

Fit a decision tree per variable.

fit_transform:

Fit to data, then transform it.

get_feature_names_out:

Get output feature names for transformation.

get_params:

Get parameters for this estimator.

set_params:

Set the parameters of this estimator.

transform:

Replace categorical variables by the predictions of the decision tree.

fit(X, y)[source]#

Fit a decision tree per variable.

X: pandas dataframe of shape = [n_samples, n_features]

The training input samples. Can be the entire dataframe, not just the categorical variables.

y: pandas series

The target variable. Required to train the decision tree and for ordered ordinal encoding.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params: dict

Additional fit parameters.

X_new: ndarray array of shape (n_samples, n_features_new)

Transformed array.


get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation. In other words, returns the variable names of the transformed dataframe.

input_features: array or list, default=None

This parameter exists only for compatibility with the Scikit-learn pipeline.

  • If None, then feature_names_in_ is used as feature names in.

  • If an array or list, then input_features must match feature_names_in_.

feature_names_out: list

Transformed feature names.


Return type: List[Union[str, int]]


get_params(deep=True)[source]#

Get parameters for this estimator.

deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.


params: dict

Parameter names mapped to their values.


inverse_transform(X)[source]#

inverse_transform is not implemented for this transformer.


set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.


**params: dict

Estimator parameters.

self: estimator instance

Estimator instance.


transform(X)[source]#

Replace categorical variables by the predictions of the decision tree.

X: pandas dataframe of shape = [n_samples, n_features]

The input samples.

X_new: pandas dataframe of shape = [n_samples, n_features]

Dataframe with variables encoded with decision tree predictions.


Return type: DataFrame