SelectByShuffling

API Reference

class feature_engine.selection.SelectByShuffling(estimator=RandomForestClassifier(), scoring='roc_auc', cv=3, threshold=None, variables=None, random_state=None)[source]

SelectByShuffling() selects features by determining the drop in machine learning model performance when each feature’s values are randomly shuffled.

If a feature is important, a random permutation of its values will dramatically decrease the machine learning model's performance. Conversely, if a feature is not important, permuting its values should have little to no effect on the performance metric being assessed.

SelectByShuffling() first trains a machine learning model using all features. Next, it shuffles the values of one feature, obtains predictions with the pre-trained model, and determines the performance drop (if any). If the drop in performance is bigger than a threshold, the feature is retained; otherwise it is removed. This continues until every feature has been shuffled and its performance drop evaluated.

The user determines the model whose performance drop after feature shuffling should be assessed, the performance metric to evaluate, and the threshold of performance drop below which a feature will be removed.

Model training and performance calculation are done with cross-validation.
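
The logic can be sketched with plain scikit-learn code. The snippet below is an illustrative approximation only, not the library's implementation; in particular, the actual transformer trains the model and computes performance with cross-validation, as noted above.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# train a model on all features and record the baseline performance
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)
baseline = r2_score(y, model.predict(X))

# shuffle one feature at a time and measure the drop in performance
# (drift = baseline performance minus performance on the shuffled data)
rng = np.random.default_rng(0)
performance_drifts = {}
for feature in X.columns:
    X_shuffled = X.copy()
    X_shuffled[feature] = rng.permutation(X_shuffled[feature].values)
    performance_drifts[feature] = baseline - r2_score(y, model.predict(X_shuffled))

# features whose drift exceeds the chosen threshold would be retained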

Parameters
variables: str or list, default=None

The list of variable(s) to be shuffled from the dataframe. If None, the transformer will shuffle all numerical variables in the dataset.

estimator: object, default=RandomForestClassifier()

A Scikit-learn estimator for regression or classification.

scoring: str, default='roc_auc'

Metric used to evaluate the performance of the estimator. It comes from sklearn.metrics. See the model evaluation documentation for more options: https://scikit-learn.org/stable/modules/model_evaluation.html

threshold: float, int, default=None

The value that defines whether a feature is kept or removed. Note that for metrics like roc-auc, r2_score and accuracy, the threshold will be a float between 0 and 1. For metrics like mean_square_error and root_mean_square_error, the threshold will typically be a much bigger number. The threshold can be defined by the user. If None, the mean performance drift across all features is used as the threshold, so features whose performance drift is bigger than this mean are retained (see the sketch after this parameter list).

cv: int, default=3

Desired number of cross-validation folds used to fit the estimator.

random_state: int, default=None

Controls the randomness when shuffling features.
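
To make the default behaviour of threshold concrete, the cut-off used when threshold=None can be sketched as follows. This is a hypothetical illustration of the selection rule described above, not the library's code, and the drift values are made up.

import numpy as np

# hypothetical performance drifts per feature (illustration only)
performance_drifts = {"age": 0.01, "bmi": 0.19, "bp": 0.08, "sex": -0.02}

# with threshold=None, the mean drift acts as the cut-off
threshold = np.mean(list(performance_drifts.values()))

# features whose drift does not exceed the cut-off are flagged for removal
features_to_drop = [f for f, drift in performance_drifts.items() if drift < threshold]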

Attributes

initial_model_performance_:

Performance of the model trained using the original dataset.

performance_drifts_:

Dictionary with the performance drift per shuffled feature.

features_to_drop_:

List with the features to remove from the dataset.

Methods

fit:

Find the important features.

transform:

Reduce X to the selected features.

fit_transform:

Fit to data, then transform it.
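
For orientation, the methods are typically used as follows. This is a sketch that assumes a pandas dataframe X and a target y are already loaded; the full, reproducible example is shown at the end of this page.

from sklearn.linear_model import LinearRegression
from feature_engine.selection import SelectByShuffling

sel = SelectByShuffling(estimator=LinearRegression(), scoring="r2", cv=3)

# find the important features, then reduce X to them
sel.fit(X, y)
X_reduced = sel.transform(X)

# or fit and transform in a single step
X_reduced = sel.fit_transform(X, y)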

fit(X, y)[source]

Find the important features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

y: array-like of shape (n_samples)

Target variable. Required to train the estimator.

Returns
self
transform(X)[source]

Return dataframe with selected features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features].

The input dataframe.

Returns
X_transformed: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.

Return type: DataFrame

Example

SelectByShuffling() selects a feature as important if randomly permuting its values produces a decrease in the initial model performance.

import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from feature_engine.selection import SelectByShuffling

# load dataset
diabetes_X, diabetes_y = load_diabetes(return_X_y=True)
X = pd.DataFrame(diabetes_X)
y = pd.DataFrame(diabetes_y)

# initialize the linear regression estimator
linear_model = LinearRegression()

# initialize feature selector
tr = SelectByShuffling(estimator=linear_model, scoring="r2", cv=3)

# fit transformer
Xt = tr.fit_transform(X, y)

tr.initial_model_performance_
0.488702767247119
tr.performance_drifts_
{0: -0.02368121940502793,
 1: 0.017909161264480666,
 2: 0.18565460365508413,
 3: 0.07655405817715671,
 4: 0.4327180164470878,
 5: 0.16394693824418372,
 6: -0.012876023845921625,
 7: 0.01048781540981647,
 8: 0.3921465005640224,
 9: -0.01427065640301245}
tr.features_to_drop_
[0, 6, 9]
print(Xt.head())
          1         2         3         4         5         7         8
0  0.050680  0.061696  0.021872 -0.044223 -0.034821 -0.002592  0.019908
1 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163 -0.039493 -0.068330
2  0.050680  0.044451 -0.005671 -0.045599 -0.034194 -0.002592  0.002864
3 -0.044642 -0.011595 -0.036656  0.012191  0.024991  0.034309  0.022692
4 -0.044642 -0.036385  0.021872  0.003935  0.015596 -0.002592 -0.031991