Winsorizer

class feature_engine.outliers.Winsorizer(capping_method='gaussian', tail='right', fold=3, add_indicators=False, variables=None, missing_values='raise')[source]

The Winsorizer() caps maximum and/or minimum values of a variable at automatically determined values, and optionally adds indicators.

The values at which to cap the variables are determined using:

  • a Gaussian approximation

  • the inter-quartile range proximity rule (IQR)

  • percentiles

Gaussian limits:

  • right tail: mean + 3 * std

  • left tail: mean - 3 * std

IQR limits:

  • right tail: 75th quantile + 3 * IQR

  • left tail: 25th quantile - 3 * IQR

where IQR is the inter-quartile range: 75th quantile - 25th quantile.

Percentile limits:

  • right tail: 95th percentile

  • left tail: 5th percentile

You can select how far out to cap the maximum or minimum values with the parameter 'fold'.

If capping_method='gaussian', fold is the value that multiplies the std.

If capping_method='iqr', fold is the value that multiplies the IQR.

If capping_method='quantiles', fold is the percentile on each tail that should be censored. For example, if fold=0.05, the limits will be the 5th and 95th percentiles. If fold=0.1, the limits will be the 10th and 90th percentiles.
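To make the formulas concrete, the following sketch reproduces the right-tail limits with plain pandas (the series values and fold are made up for the example; the library computes these limits internally during fit, and its exact estimates may differ slightly):

import pandas as pd

# Hypothetical numerical variable; in practice, a column of the train set.
s = pd.Series([1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 6.0])
fold = 3

# Gaussian approximation: mean + fold * std.
gaussian_upper = s.mean() + fold * s.std()

# IQR proximity rule: 75th quantile + fold * IQR, where IQR = q75 - q25.
q25, q75 = s.quantile(0.25), s.quantile(0.75)
iqr_upper = q75 + fold * (q75 - q25)

# Percentiles: with fold=0.05, the right-tail limit is the 95th percentile.
percentile_upper = s.quantile(1 - 0.05)

print(gaussian_upper, iqr_upper, percentile_upper)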

The Winsorizer() works only with numerical variables. A list of variables can be indicated. Alternatively, the Winsorizer() will select and cap all numerical variables in the train set.

The transformer first finds the values at one or both tails of the distributions (fit). The transformer then caps the variables (transform).

More details in the User Guide.
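A minimal usage sketch (the dataframe and the column name 'var' are made up for the example):

import pandas as pd
from feature_engine.outliers import Winsorizer

# Toy data: one numerical variable with a large right-tail value.
X = pd.DataFrame({"var": [1.2, 0.8, 1.1, 0.9, 1.0, 15.0]})

# Learn the right-tail cap with the IQR proximity rule and fold=1.5.
capper = Winsorizer(capping_method="iqr", tail="right", fold=1.5)
capper.fit(X)

print(capper.right_tail_caps_)  # dictionary with the learned upper limit for 'var'
print(capper.transform(X))      # values above the limit are replaced by it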

Parameters
capping_method: str, default=’gaussian’

Desired capping method. Can take ‘gaussian’, ‘iqr’ or ‘quantiles’.

‘gaussian’: the transformer will find the maximum and/or minimum values to cap the variables using the Gaussian approximation.

‘iqr’: the transformer will find the boundaries using the IQR proximity rule.

‘quantiles’: the limits are given by the percentiles.

tail: str, default=’right’

Whether to cap outliers on the right, left or both tails of the distribution. Can take ‘left’, ‘right’ or ‘both’.

fold: int or float, default=3

How far out to place the capping values. This is the number that multiplies the std or the IQR to calculate the capping values. Recommended values are 2 or 3 for the Gaussian approximation, and 1.5 or 3 for the IQR proximity rule.

If capping_method='quantiles', then 'fold' indicates the percentile. So if fold=0.05, the limits will be the 95th and 5th percentiles.

Note: Outliers are capped up to a maximum of the 20th percentile on each side. Thus, when capping_method='quantiles', 'fold' takes values between 0 and 0.20.

add_indicators: bool, default=False

Whether to add indicator variables to flag the capped outliers. If ‘True’, binary variables will be added to flag outliers on the left and right tails of the distribution. One binary variable per tail, per variable.
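For example (a minimal sketch; the data and column name are made up, and the exact names of the indicator columns should be checked in the library's output):

import pandas as pd
from feature_engine.outliers import Winsorizer

X = pd.DataFrame({"var": [1.0, 1.2, 0.9, 25.0, -20.0]})

capper = Winsorizer(capping_method="quantiles", tail="both", fold=0.1,
                    add_indicators=True)
X_new = capper.fit_transform(X)

# With tail='both' and add_indicators=True, one capped column plus two
# binary flag columns (left and right tail) are returned per variable.
print(X_new.columns.tolist())
print(X_new.shape)  # (5, 3) for a single input variable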

variables: list, default=None

The list of variables for which the outliers will be capped. If None, the transformer will select and cap all numerical variables.

missing_values: string, default=’raise’

Indicates whether missing values should be ignored or whether they should raise an error. Sometimes we want to cap outliers in the raw, original data; at other times, we may want to cap outliers in data that has already been pre-transformed and contains NaN. If missing_values='ignore', the transformer will ignore missing data when learning the capping parameters or transforming the data. If missing_values='raise', the transformer will raise an error if the training set or the datasets to transform contain missing values.
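For instance (a sketch with made-up data), setting missing_values='ignore' lets the transformer learn and apply the caps while skipping the NaN:

import numpy as np
import pandas as pd
from feature_engine.outliers import Winsorizer

X = pd.DataFrame({"var": [1.0, 2.0, np.nan, 3.0, 50.0]})

# With the default missing_values='raise', fit() would error out on the NaN.
capper = Winsorizer(capping_method="iqr", tail="right", missing_values="ignore")
print(capper.fit_transform(X))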

Attributes
right_tail_caps_:

Dictionary with the maximum values at which variables will be capped.

left_tail_caps_:

Dictionary with the minimum values at which variables will be capped.

variables_:

The group of variables that will be transformed.

n_features_in_:

The number of features in the train set used in fit.

Methods

fit:

Learn the values that should be used to replace outliers.

transform:

Cap the variables.

fit_transform:

Fit to the data. Then transform it.

fit(X, y=None)[source]

Learn the values that should be used to replace outliers.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The training input samples.

y: pandas Series, default=None

y is not needed in this transformer. You can pass y or None.

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params: dict

Additional fit parameters.

Returns
X_new: ndarray of shape (n_samples, n_features_new)

Transformed array.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict

Parameter names mapped to their values.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params: dict

Estimator parameters.

Returns
self: estimator instance

Estimator instance.
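For example, a sketch of nested parameter updates inside a scikit-learn Pipeline (the step names 'capper' and 'model' are arbitrary):

from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from feature_engine.outliers import Winsorizer

pipe = Pipeline([
    ("capper", Winsorizer(capping_method="gaussian", tail="both", fold=3)),
    ("model", Lasso()),
])

# Update the nested Winsorizer through the <component>__<parameter> syntax.
pipe.set_params(capper__fold=2, capper__capping_method="iqr")
print(pipe.get_params()["capper__fold"])  # 2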

transform(X)[source]

Cap the variable values. Optionally, add outlier indicators.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The data to be transformed.

Returns
X_new: pandas dataframe of shape = [n_samples, n_features + n_ind]

The dataframe with the capped variables and, optionally, the outlier indicators. The number of output variables depends on the values of ‘tail’ and ‘add_indicators’: if add_indicators=False, it equals ‘n_features’; otherwise, there is one additional indicator column per processed feature for each capped tail.

Return type:

DataFrame