API Reference

class feature_engine.selection.SelectBySingleFeaturePerformance(estimator, scoring='roc_auc', cv=3, threshold=None, variables=None)[source]

SelectBySingleFeaturePerformance() selects features based on the performance of machine learning models trained using individual features. In other words, it trains one machine learning model per feature, using only that feature, and then determines each model's performance. If the performance of a single-feature model is greater than a user-specified threshold, the feature is retained; otherwise it is removed.

The models are trained on the individual features using cross-validation. The performance metric to evaluate and the machine learning model to train are specified by the user.
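Conceptually, the procedure above can be sketched with scikit-learn directly; this is an illustrative sketch, not the transformer's implementation, and the threshold value is arbitrary:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate

# load the data as a dataframe/series pair
X, y = load_diabetes(return_X_y=True, as_frame=True)

threshold = 0.1  # arbitrary cut-off, for illustration only

# train one model per feature, with 3-fold cross-validation,
# and record the mean r2 of each single-feature model
performance = {}
for feature in X.columns:
    cv_results = cross_validate(
        LinearRegression(), X[[feature]], y, cv=3, scoring="r2"
    )
    performance[feature] = cv_results["test_score"].mean()

# retain only the features whose single-feature model beats the threshold
selected = [f for f, score in performance.items() if score > threshold]
print(selected)
```

SelectBySingleFeaturePerformance() wraps this loop, stores the scores, and exposes the selection through its transform() method.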


Parameters

estimator: object

A Scikit-learn estimator for regression or classification.

variables: str or list, default=None

The list of variable(s) to be evaluated. If None, the transformer will evaluate all numerical variables in the dataset.

scoring: str, default='roc_auc'

Desired metric to optimise the performance of the estimator. Comes from sklearn.metrics. See Scikit-learn's model evaluation documentation for more options.

threshold: float, int, default=None

The value that defines whether a feature will be kept or removed.

The r2 varies between 0 and 1 (and can be negative for features with no predictive power), so the threshold needs to be set within these boundaries.

The roc-auc varies between 0.5 and 1, so the threshold needs to be set within these boundaries.

For error metrics like the mean_squared_error and the root_mean_squared_error, the threshold will be a larger number, on the scale of the error itself.

The threshold can be specified by the user. If None, it will be automatically set to the mean performance value of all features.
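For illustration, a small sketch of the default cut-off using hypothetical performance scores (the variable names and values below are made up):

```python
# with threshold=None, the cut-off is the mean of all
# single-feature performances (hypothetical scores)
performance = {"var_a": 0.30, "var_b": 0.05, "var_c": 0.22, "var_d": 0.11}

threshold = sum(performance.values()) / len(performance)  # ~0.17
selected = [f for f, s in performance.items() if s > threshold]
print(selected)  # ['var_a', 'var_c']
```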

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

- None, to use scikit-learn's default cross-validation;
- an integer, to specify the number of folds;
- a cross-validation splitter object;
- an iterable yielding (train, test) index splits.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

For more details check Scikit-learn's cross_validate documentation.



Attributes

features_to_drop_:

List with the features to remove from the dataset.


feature_performance_:

Dictionary with the single feature model performance per feature.


variables_:

The variables to consider for the feature selection.


n_features_in_:

The number of features in the train set used in fit.



Methods

fit:

Find the important features.


transform:

Reduce X to the selected features.


fit_transform:

Fit to data, then transform it.

fit(X, y)[source]

Select features.

X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

y: array-like of shape (n_samples)

Target variable. Required to train the estimator.


transform(X)[source]

Return dataframe with selected features.

X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

Returns

X_transformed: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.




Example

SelectBySingleFeaturePerformance() selects features based on the performance of machine learning models trained using individual features. In other words, it selects features based on their individual predictive performance, as returned by estimators trained on each feature alone.

import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from feature_engine.selection import SelectBySingleFeaturePerformance

# load dataset
diabetes_X, diabetes_y = load_diabetes(return_X_y=True)
X = pd.DataFrame(diabetes_X)
y = pd.DataFrame(diabetes_y)

# initialize feature selector
sel = SelectBySingleFeaturePerformance(
        estimator=LinearRegression(), scoring="r2", cv=3, threshold=0.01)

# fit transformer
sel.fit(X, y)

sel.feature_performance_

{0: 0.029231969375784466,
 1: -0.003738551760264386,
 2: 0.336620809987693,
 3: 0.19219056680145055,
 4: 0.037115559827549806,
 5: 0.017854228256932614,
 6: 0.15153886177526896,
 7: 0.17721609966501747,
 8: 0.3149462084418813,
 9: 0.13876602125792703}