# Winsorizer

Censors variables at predefined minimum and maximum values. The minimum and maximum values can be calculated in one of three ways:

Gaussian limits:

- right tail: mean + 3 * std
- left tail: mean - 3 * std

IQR limits:

- right tail: 75th quantile + 3 * IQR
- left tail: 25th quantile - 3 * IQR

where IQR is the inter-quartile range: 75th quantile - 25th quantile.

percentiles or quantiles:

- right tail: 95th percentile
- left tail: 5th percentile
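The three rules above can be sketched directly with NumPy on toy data (this is an illustration of the formulas, not feature_engine's own code; `fold = 3` mirrors the default):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=1000)
fold = 3

# Gaussian limits: mean +/- fold * std
gauss_right = x.mean() + fold * x.std()
gauss_left = x.mean() - fold * x.std()

# IQR limits: quartiles +/- fold * IQR
q25, q75 = np.percentile(x, [25, 75])
iqr = q75 - q25
iqr_right = q75 + fold * iqr
iqr_left = q25 - fold * iqr

# percentile limits: e.g. the 5th and 95th percentiles
pct_left, pct_right = np.percentile(x, [5, 95])
```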

See the API Reference for more details.

```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

from feature_engine import outlier_removers as outr


def load_titanic():
    # load and clean the Titanic dataset (OpenML source)
    data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
    data = data.replace('?', np.nan)
    data['cabin'] = data['cabin'].astype(str).str[0]
    data['pclass'] = data['pclass'].astype('O')
    data['embarked'].fillna('C', inplace=True)
    data['fare'] = data['fare'].astype('float')
    data['fare'].fillna(data['fare'].median(), inplace=True)
    data['age'] = data['age'].astype('float')
    data['age'].fillna(data['age'].median(), inplace=True)
    return data


data = load_titanic()

# separate into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['survived', 'name', 'ticket'], axis=1),
    data['survived'], test_size=0.3, random_state=0)

# set up the capper
capper = outr.Winsorizer(
    distribution='gaussian', tail='right', fold=3, variables=['age', 'fare'])

# fit the capper to learn the capping values
capper.fit(X_train)

# transform the data
train_t = capper.transform(X_train)
test_t = capper.transform(X_test)

capper.right_tail_caps_
```
```
{'age': 72.03416424092518, 'fare': 174.78162171790427}
```
```
train_t[['fare', 'age']].max()
```
```
fare    174.781622
age      67.490484
dtype: float64
```
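The capping step itself amounts to clipping at the learned limits. The effect of `transform()` can be sketched with a plain pandas `clip` on toy data (the cap values below are hypothetical, standing in for `capper.right_tail_caps_`):

```python
import pandas as pd

df = pd.DataFrame({'age': [5.0, 30.0, 90.0], 'fare': [10.0, 200.0, 50.0]})

# hypothetical right-tail caps, analogous to capper.right_tail_caps_
right_tail_caps = {'age': 72.0, 'fare': 174.8}

# censor values above each cap; values below are left untouched
capped = df.copy()
for var, cap in right_tail_caps.items():
    capped[var] = capped[var].clip(upper=cap)
```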

## API Reference

class `feature_engine.outlier_removers.Winsorizer(distribution='gaussian', tail='right', fold=3, variables=None, missing_values='raise')`

The Winsorizer() caps maximum and / or minimum values of a variable.

The Winsorizer() works only with numerical variables. A list of variables can be indicated. Alternatively, the Winsorizer() will select all numerical variables in the train set.

The Winsorizer() first calculates the capping values at the ends of the distribution. The values are determined using 1) a Gaussian approximation, 2) the inter-quartile range proximity rule or 3) percentiles.

Gaussian limits:

- right tail: mean + 3 * std
- left tail: mean - 3 * std

IQR limits:

- right tail: 75th quantile + 3 * IQR
- left tail: 25th quantile - 3 * IQR

where IQR is the inter-quartile range: 75th quantile - 25th quantile.

percentiles or quantiles:

- right tail: 95th percentile
- left tail: 5th percentile

You can select how far out to cap the maximum or minimum values with the parameter ‘fold’.

If distribution=’gaussian’, fold gives the factor that multiplies the std.

If distribution=’skewed’, fold gives the factor that multiplies the IQR.

If distribution=’quantiles’, fold is the percentile on each tail that should be censored. For example, if fold=0.05, the limits will be the 5th and 95th percentiles. If fold=0.1, the limits will be the 10th and 90th percentiles.
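As a sketch of the quantile rule in plain NumPy (not feature_engine's own code): with fold=0.1, the limits are simply the 10th and 90th percentiles, and capping is a clip between them.

```python
import numpy as np

x = np.arange(1, 101, dtype=float)  # values 1.0 .. 100.0

fold = 0.1
left = np.quantile(x, fold)       # 10th percentile
right = np.quantile(x, 1 - fold)  # 90th percentile

# censor both tails at the percentile limits
capped = np.clip(x, left, right)
```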

The transformer first finds the values at one or both tails of the distributions (fit).

The transformer then caps the variables (transform).

Parameters
• distribution (str, default=gaussian) –

Desired distribution. Can take ‘gaussian’, ‘skewed’ or ‘quantiles’.

gaussian: the transformer will find the maximum and / or minimum values to cap the variables using the Gaussian approximation.

skewed: the transformer will find the boundaries using the IQR proximity rule.

quantiles: the limits are given by the percentiles.

• tail (str, default=right) – Whether to cap outliers on the right, left or both tails of the distribution. Can take ‘left’, ‘right’ or ‘both’.

• fold (int or float, default=3) –

How far out to place the capping values. The number that will multiply the std or the IQR to calculate the capping values. Recommended values are 2 or 3 for the Gaussian approximation, or 1.5 or 3 for the IQR proximity rule.

If distribution=’quantiles’, then ‘fold’ indicates the percentile. So if fold=0.05, the limits will be the 95th and 5th percentiles. Note: outliers will be removed up to a maximum of the 20th percentile on each side. Thus, when distribution=’quantiles’, ‘fold’ takes values between 0 and 0.20.

• variables (list, default=None) – The list of variables for which the outliers will be capped. If None, the transformer will find and select all numerical variables.

• missing_values (string, default='raise') – Indicates whether missing values should be ignored or should raise an error. Sometimes we want to remove outliers from the raw, original data; other times we want to remove outliers from data that was already pre-processed. If missing_values=’ignore’, the transformer will ignore missing data when learning the capping parameters or transforming the data. If missing_values=’raise’, the transformer will return an error if the training set or the datasets to transform contain missing values.
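What ‘ignore’ means can be sketched with plain pandas, which skips NaN in its statistics by default (an illustration of the behaviour, not feature_engine's internals):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, np.nan, 100.0])

# with missing_values='ignore', limits are learned from the
# non-missing values only; pandas skips NaN automatically here
right_cap = s.mean() + 3 * s.std()

# with missing_values='raise', fit/transform would instead error
# whenever the data contains NaN
has_na = s.isna().any()
```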

`fit`(X, y=None)

Learns the values that should be used to replace outliers.

Parameters
• X (pandas dataframe of shape = [n_samples, n_features]) – The training input samples.

• y (None) – y is not needed in this transformer. You can pass y or None.

`right_tail_caps_`

The dictionary containing the maximum values at which variables will be capped.

Type: dictionary

`left_tail_caps_`

The dictionary containing the minimum values at which variables will be capped.

Type: dictionary

`transform`(X)

Caps the variable values, that is, censors outliers.

Parameters

X (pandas dataframe of shape = [n_samples, n_features]) – The data to be transformed.

Returns

X_transformed – The dataframe with the capped variables.

Return type: pandas dataframe of shape = [n_samples, n_features]