SmartCorrelatedSelection

API Reference

class feature_engine.selection.SmartCorrelatedSelection(variables=None, method='pearson', threshold=0.8, missing_values='ignore', selection_method='missing_values', estimator=None, scoring='roc_auc', cv=3)[source]

SmartCorrelatedSelection() finds groups of correlated features and then selects, from each group, a single feature following one of these criteria:

  • Feature with least missing values

  • Feature with most unique values

  • Feature with highest variance

  • Best performing feature according to an estimator entered by the user

SmartCorrelatedSelection() returns a dataframe containing, from each group of correlated features, the selected variable, plus all the original features that were not correlated to any other.

Correlation is calculated with pandas.corr().

SmartCorrelatedSelection() works only with numerical variables. Categorical variables will need to be encoded to numerical or will be excluded from the analysis.
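As an illustrative sketch (the dataframe and column names below are hypothetical, not part of feature_engine), one simple way to encode a categorical column to numbers with plain pandas so it can take part in the correlation analysis:

```python
import pandas as pd

# hypothetical dataframe mixing numerical and categorical columns
df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "income": [30000, 45000, 80000, 90000],
    "city": ["London", "Paris", "London", "Madrid"],
})

# encode the categorical column to integer codes; categories are
# sorted alphabetically, so London=0, Madrid=1, Paris=2
df["city"] = df["city"].astype("category").cat.codes

print(df.dtypes)
```

Feature-engine also provides dedicated encoders for this purpose; the pandas approach above is just a minimal stand-in.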

Parameters
variables: list, default=None

The list of variables to evaluate. If None, the transformer will evaluate all numerical variables in the dataset.

method: string or callable, default=’pearson’

Can take ‘pearson’, ‘spearman’, ‘kendall’ or callable. It refers to the correlation method to be used to identify the correlated features.

  • pearson : standard correlation coefficient

  • kendall : Kendall Tau correlation coefficient

  • spearman : Spearman rank correlation

  • callable: a callable taking two 1d ndarrays as input and returning a float.

For more details on this parameter visit the pandas.corr() documentation.
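The sketch below (illustrative only, with made-up data) shows how pandas.corr() behaves with a built-in method and with a custom callable:

```python
import numpy as np
import pandas as pd

# small illustrative dataframe: var_b is a linear function of var_a,
# so their Pearson correlation is exactly 1
df = pd.DataFrame({
    "var_a": [1.0, 2.0, 3.0, 4.0],
    "var_b": [2.0, 4.0, 6.0, 8.0],
    "var_c": [4.0, 1.0, 3.0, 2.0],
})

print(df.corr(method="pearson"))

# a custom callable: it receives two 1d ndarrays and must return a float;
# here, the absolute Pearson correlation
def abs_pearson(x, y):
    return float(abs(np.corrcoef(x, y)[0, 1]))

print(df.corr(method=abs_pearson))
```

Note that with a callable, pandas still places 1 along the diagonal of the returned matrix.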

threshold: float, default=0.8

The correlation threshold above which a feature will be deemed correlated with another one and removed from the dataset.

missing_values: str, default=’ignore’

Takes values ‘raise’ and ‘ignore’. Whether missing values should raise an error or be ignored when determining the correlation.

selection_method: str, default= “missing_values”

Takes the values “missing_values”, “cardinality”, “variance” and “model_performance”.

“missing_values”: keeps the feature from the correlated group with the least missing observations.

“cardinality”: keeps the feature from the correlated group with the highest cardinality.

“variance”: keeps the feature from the correlated group with the highest variance.

“model_performance”: trains a machine learning model using the correlated feature group and retains the feature with the highest importance.
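As a rough sketch of the first three criteria (the dataframe and feature names are hypothetical), each could be computed with plain pandas on a group of correlated features:

```python
import pandas as pd

# hypothetical group of correlated features to illustrate the criteria
X = pd.DataFrame({
    "var_a": [1.0, 2.0, None, 4.0],   # one missing value, 3 unique values
    "var_b": [1.0, 2.0, 2.0, 2.0],    # no missing values, 2 unique values
    "var_c": [10.0, 2.0, 30.0, 4.0],  # no missing values, highest variance
})
group = ["var_a", "var_b", "var_c"]

# "missing_values": keep the feature with the fewest missing observations
print(X[group].isnull().sum().idxmin())  # -> var_b (ties broken by column order)

# "cardinality": keep the feature with the most unique values
print(X[group].nunique().idxmax())       # -> var_c

# "variance": keep the feature with the highest variance
print(X[group].var().idxmax())           # -> var_c
```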

estimator: object, default=None

A Scikit-learn estimator for regression or classification.

scoring: str, default=’roc_auc’

Desired metric to optimise the performance of the estimator. Comes from sklearn.metrics. See the model evaluation documentation for more options: https://scikit-learn.org/stable/modules/model_evaluation.html

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • an integer, to specify the number of folds.

  • a cross-validation generator (CV splitter).

  • an iterable yielding (train, test) splits as arrays of indices.

For int inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

For more details check Scikit-learn’s cross_validate documentation
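A small sketch of the shuffle=False behaviour mentioned above, using scikit-learn's KFold directly on toy data: repeated calls to split() yield identical folds.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(12).reshape(6, 2)  # 6 toy samples, 2 features

# KFold defaults to shuffle=False, so the splits are deterministic
cv = KFold(n_splits=3)
first = [test.tolist() for _, test in cv.split(X)]
second = [test.tolist() for _, test in cv.split(X)]

print(first == second)  # True: same splits across calls
```

A CV splitter instantiated this way can be passed directly as the cv argument.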

Attributes

correlated_feature_sets_:

Groups of correlated features. Each element of the list is a set of correlated features.

features_to_drop_:

The correlated features to remove from the dataset.

variables_:

The variables to consider for the feature selection.

n_features_in_:

The number of features in the train set used in fit.

Methods

fit:

Find the best feature from each group of correlated features.

transform:

Return selected features.

fit_transform:

Fit to the data. Then transform it.

fit(X, y=None)[source]

Find the correlated feature groups. Determine which feature should be selected from each group.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The training dataset.

y: pandas series. Default = None

y is needed if selection_method == ‘model_performance’.

Returns
self
transform(X)[source]

Return dataframe with selected features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features].

The input dataframe.

Returns
X_transformed: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.


Example

import pandas as pd
from sklearn.datasets import make_classification
from feature_engine.selection import SmartCorrelatedSelection

# make dataframe with some correlated variables
def make_data():
    X, y = make_classification(n_samples=1000,
                               n_features=12,
                               n_redundant=4,
                               n_clusters_per_class=1,
                               weights=[0.50],
                               class_sep=2,
                               random_state=1)

    # transform the array into a pandas dataframe
    colnames = ['var_'+str(i) for i in range(12)]
    X = pd.DataFrame(X, columns=colnames)
    return X


X = make_data()


# set up the selector
tr = SmartCorrelatedSelection(
    variables=None,
    method="pearson",
    threshold=0.8,
    missing_values="raise",
    selection_method="variance",
    estimator=None,
)

Xt = tr.fit_transform(X)

tr.correlated_feature_sets_
[{'var_0', 'var_8'}, {'var_4', 'var_6', 'var_7', 'var_9'}]
tr.features_to_drop_
['var_0', 'var_4', 'var_6', 'var_9']
print(Xt.head())
      var_1     var_2     var_3     var_5    var_10    var_11     var_8  \
0 -2.376400 -0.247208  1.210290  0.091527  2.070526 -1.989335  2.070483
1  1.969326 -0.126894  0.034598 -0.186802  1.184820 -1.309524  2.421477
2  1.499174  0.334123 -2.233844 -0.313881 -0.066448 -0.852703  2.263546
3  0.075341  1.627132  0.943132 -0.468041  0.713558  0.484649  2.792500
4  0.372213  0.338141  0.951526  0.729005  0.398790 -0.186530  2.186741

      var_7
0 -2.230170
1 -1.447490
2 -2.240741
3 -3.534861
4 -2.053965