SelectByShuffling

class feature_engine.selection.SelectByShuffling(estimator, scoring='roc_auc', cv=3, threshold=None, variables=None, random_state=None)[source]

SelectByShuffling() selects features by determining the drop in machine learning model performance when each feature’s values are randomly shuffled.

If a variable is important, randomly permuting its values will dramatically decrease the machine learning model's performance. Conversely, if the feature is not predictive, permuting its values should have little to no effect on the performance metric being assessed.

SelectByShuffling() first trains a machine learning model using all the features. Next, it shuffles the values of one feature, obtains predictions with the pre-trained model, and determines the performance drop (if any). If the drop in performance is bigger than a threshold, the feature is retained; otherwise, it is removed. This continues until all features have been shuffled and examined.

The user determines the model for which the performance drop after feature shuffling is assessed, the performance metric to evaluate, and the threshold in performance below which a feature will be removed.

Model training and performance calculation are done with cross-validation.

More details in the User Guide.
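A minimal usage sketch is shown below. The dataset and the estimator are illustrative choices, not requirements; any Scikit-learn classifier or regressor and any dataframe of numerical features will do:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    from feature_engine.selection import SelectByShuffling

    # illustrative data: a pandas dataframe of numerical features and a binary target
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    selector = SelectByShuffling(
        estimator=RandomForestClassifier(random_state=0),
        scoring="roc_auc",
        cv=3,
        threshold=None,  # None: keep features whose drift exceeds the mean drift
        random_state=0,
    )

    X_reduced = selector.fit_transform(X, y)

    print(selector.initial_model_performance_)  # cross-validated performance with all features
    print(selector.features_to_drop_)           # features that will be removed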

Parameters
estimator: object

A Scikit-learn estimator for regression or classification.

variables: str or list, default=None

The list of variable(s) to be shuffled from the dataframe. If None, the transformer will shuffle all numerical variables in the dataset.

scoring: str, default=’roc_auc’

Desired metric to optimise the performance of the estimator. Comes from sklearn.metrics. See the model evaluation documentation for more options: https://scikit-learn.org/stable/modules/model_evaluation.html

threshold: float, int, default=None

The value that defines whether a feature will be kept or removed. Note that for metrics like roc-auc, r2_score and accuracy, the threshold will be a float between 0 and 1. For metrics like the mean squared error and the root mean squared error, the threshold may be a much bigger number. The threshold can be defined by the user. If None, the selector will keep the features whose performance drift is bigger than the mean performance drift across all features (see the sketch after this parameter list).

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

- None, to use the default cross-validation scheme of cross_validate,
- int, to specify the number of folds in a (Stratified)KFold,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False, so the splits will be the same across calls. For more details check Scikit-learn's cross_validate documentation.

random_state: int, default=None

Controls the randomness when shuffling features.
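The sketch below illustrates the parameters above: a custom cross-validation splitter passed to cv, and the mean-drift rule that applies when threshold=None. The re-computation of the threshold is an illustration of the documented behaviour, not the library's internal code; data and estimator are again illustrative:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold

    from feature_engine.selection import SelectByShuffling

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    selector = SelectByShuffling(
        estimator=RandomForestClassifier(random_state=0),
        scoring="roc_auc",
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),  # custom splitter
        threshold=None,
        random_state=0,
    )
    selector.fit(X, y)

    # with threshold=None, a feature is kept when its performance drift is
    # at least as big as the mean drift across all features
    drifts = selector.performance_drifts_
    mean_drift = sum(drifts.values()) / len(drifts)
    kept = [feature for feature, drift in drifts.items() if drift >= mean_drift]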

Attributes
initial_model_performance_:

Performance of the model trained using the original dataset.

performance_drifts_:

Dictionary with the performance drift per shuffled feature.

features_to_drop_:

List with the features to remove from the dataset.

variables_:

The variables that will be considered for the feature selection.

n_features_in_:

The number of features in the train set used in fit.

See also

sklearn.inspection.permutation_importance

Notes

This transformer is similar in concept to permutation_importance from Scikit-learn. However, the Scikit-learn function is used to evaluate feature importance rather than to select features.
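For comparison, a minimal sketch of Scikit-learn's permutation_importance, which scores the features but leaves the selection decision and the removal of columns to the user (data and estimator are illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(
        model, X, y, scoring="roc_auc", n_repeats=5, random_state=0
    )

    # importances_mean holds the average drop in performance per shuffled feature;
    # unlike SelectByShuffling, there is no transform step that drops columns
    print(dict(zip(X.columns, result.importances_mean)))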

Methods

fit:

Find the important features.

transform:

Reduce X to the selected features.

fit_transform:

Fit to data, then transform it.

fit(X, y)[source]

Find the important features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

y: array-like of shape (n_samples)

Target variable. Required to train the estimator.
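A short sketch of calling fit on its own and inspecting the fitted attributes; selector, X and y are the illustrative objects from the sketches above:

    selector.fit(X, y)                   # trains the model and shuffles each feature in turn
    print(selector.initial_model_performance_)
    print(selector.performance_drifts_)  # performance drift per shuffled feature
    print(selector.features_to_drop_)    # features whose drift fell below the threshold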

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params: dict

Additional fit parameters.

Returns
X_new: ndarray of shape (n_samples, n_features_new)

Transformed array.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict

Parameter names mapped to their values.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params: dict

Estimator parameters.

Returns
self: estimator instance

Estimator instance.
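A brief sketch of the nested-parameter syntax; the parameter values are illustrative:

    from sklearn.ensemble import RandomForestClassifier

    from feature_engine.selection import SelectByShuffling

    selector = SelectByShuffling(estimator=RandomForestClassifier(random_state=0))

    # update a parameter of the wrapped estimator with the <component>__<parameter> form
    selector.set_params(estimator__n_estimators=200, threshold=0.01)
    print(selector.get_params()["estimator__n_estimators"])  # 200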

transform(X)[source]

Return dataframe with selected features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

Returns
X_new: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.
