EqualFrequencyDiscretiser

API Reference

class feature_engine.discretisation.EqualFrequencyDiscretiser(variables=None, q=10, return_object=False, return_boundaries=False)[source]

The EqualFrequencyDiscretiser() divides continuous numerical variables into contiguous equal frequency intervals, that is, intervals that contain approximately the same proportion of observations.

The interval limits are determined using pandas.qcut(), that is, the limits correspond to the variable's quantiles. The number of intervals, i.e., the number of quantiles into which the variable should be divided, is determined by the user.

The EqualFrequencyDiscretiser() works only with numerical variables. A list of variables can be passed as an argument. Alternatively, the discretiser will automatically select and transform all numerical variables.

The EqualFrequencyDiscretiser() first finds the boundaries for the intervals or quantiles for each variable.

Then it transforms the variables, that is, it sorts the values into the intervals.
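
For illustration, the sketch below (made-up data; the variable name is arbitrary) shows that the limits learned by the discretiser correspond to the quantiles computed by pandas.qcut(), with the outer edges replaced by -inf and +inf:

import numpy as np
import pandas as pd

from feature_engine.discretisation import EqualFrequencyDiscretiser

# made-up, skewed numerical variable
rng = np.random.default_rng(0)
X = pd.DataFrame({'var': rng.exponential(scale=10, size=1000)})

disc = EqualFrequencyDiscretiser(q=5, variables=['var'])
disc.fit(X)

# interval limits learned by the discretiser
print(disc.binner_dict_['var'])

# quantile-based edges returned by pandas.qcut, for comparison
_, edges = pd.qcut(X['var'], q=5, retbins=True)
print(edges)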

Parameters
variables: list, default=None

The list of numerical variables that will be discretised. If None, the EqualFrequencyDiscretiser() will select all numerical variables.

q: int, default=10

Desired number of equal frequency intervals / bins. In other words, the number of quantiles into which the variables should be divided.

return_object: bool, default=False

Whether the discrete variable should be returned cast as numeric or as object. If you would like to proceed with the engineering of the variable as if it were categorical, set this to True. Alternatively, keep the default, False.

Categorical encoders in Feature-engine work only with variables of type object; thus, if you wish to encode the returned bins, set return_object to True, as shown in the sketch after this parameter list.

return_boundaries: bool, default=False

Whether the output should be the interval boundaries. If True, it returns the interval boundaries. If False, it returns integers.
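
The sketch below, on made-up data, contrasts the default integer output with return_boundaries=True, and shows how return_object=True lets the bins be passed to a categorical encoder (the MeanEncoder from feature_engine.encoding is used purely for illustration):

import numpy as np
import pandas as pd

from feature_engine.discretisation import EqualFrequencyDiscretiser
from feature_engine.encoding import MeanEncoder

# made-up data: one numerical variable and a target
rng = np.random.default_rng(42)
X = pd.DataFrame({'age': rng.integers(18, 90, size=500)})
y = pd.Series(rng.normal(size=500))

# default: integer bin codes 0, 1, ..., q-1
disc = EqualFrequencyDiscretiser(q=5, variables=['age'])
print(disc.fit_transform(X)['age'].unique())

# return_boundaries=True: the interval limits instead of integers
disc_bounds = EqualFrequencyDiscretiser(
    q=5, variables=['age'], return_boundaries=True)
print(disc_bounds.fit_transform(X)['age'].unique())

# return_object=True: bins cast to object, ready for a categorical encoder
disc_obj = EqualFrequencyDiscretiser(q=5, variables=['age'], return_object=True)
X_binned = disc_obj.fit_transform(X)
X_encoded = MeanEncoder(variables=['age']).fit_transform(X_binned, y)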

Attributes

binner_dict_:

Dictionary with the interval limits per variable.

variables_:

The variables to discretise.

n_features_in_:

The number of features in the train set used in fit.
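
A brief sketch, on made-up data, of how these attributes can be inspected after fitting:

import numpy as np
import pandas as pd

from feature_engine.discretisation import EqualFrequencyDiscretiser

# made-up data with two numerical variables
rng = np.random.default_rng(2)
X = pd.DataFrame({'a': rng.normal(size=100), 'b': rng.uniform(size=100)})

# variables=None: all numerical variables are selected automatically
disc = EqualFrequencyDiscretiser(q=5)
disc.fit(X)

print(disc.variables_)      # the variables that will be discretised
print(disc.n_features_in_)  # number of features in the train set used in fit
print(disc.binner_dict_)    # interval limits learned per variable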

References

[1] Kotsiantis and Pintelas, “Data preprocessing for supervised leaning,” International Journal of Computer Science, vol. 1, pp. 111-117, 2006.

[2] Dong. “Beating Kaggle the easy way”. Master Thesis. https://www.ke.tu-darmstadt.de/lehre/arbeiten/studien/2015/Dong_Ying.pdf

Methods

fit:

Find the interval limits.

transform:

Sort continuous variable values into the intervals.

fit_transform:

Fit to the data, then transform it.

fit(X, y=None)[source]

Learn the limits of the equal frequency intervals.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The training dataset. Can be the entire dataframe, not just the variables to be transformed.

y: None

y is not needed in this transformer. You can pass y or None.

Returns
self
Raises
TypeError
  • If the input is not a Pandas DataFrame

  • If any of the user provided variables are not numerical

ValueError
  • If there are no numerical variables in the dataframe or the dataframe is empty

  • If the variable(s) contain null values
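
A minimal sketch, on made-up data, of fitting the transformer, including the error expected when a variable contains missing values:

import numpy as np
import pandas as pd

from feature_engine.discretisation import EqualFrequencyDiscretiser

X = pd.DataFrame({'var': [1.0, 2.0, np.nan, 4.0, 5.0]})

disc = EqualFrequencyDiscretiser(q=2, variables=['var'])
try:
    disc.fit(X)  # y is not needed and can be omitted
except ValueError as err:
    # the variable contains null values, so fit is expected to fail
    print(err)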

transform(X)[source]

Sort the variable values into the intervals.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The data to transform.

Returns
X: pandas dataframe of shape = [n_samples, n_features]

The transformed data with the discrete variables.

Raises
TypeError
  • If the input is not a Pandas DataFrame

ValueError
  • If the variable(s) contain null values

  • If the dataframe is not of the same size as the one used in fit()
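
As a sketch on made-up data, values not seen during fit, including values outside the training range, are sorted into the learned intervals (the outer limits are -inf and +inf):

import numpy as np
import pandas as pd

from feature_engine.discretisation import EqualFrequencyDiscretiser

# made-up training data
rng = np.random.default_rng(3)
X_train = pd.DataFrame({'var': rng.normal(size=1000)})

disc = EqualFrequencyDiscretiser(q=5, variables=['var'])
disc.fit(X_train)

# new values are sorted into the intervals; the output is the integer
# bin code of each observation, from 0 to q-1
X_new = pd.DataFrame({'var': [-100.0, 0.0, 100.0]})
print(disc.transform(X_new)['var'].tolist())  # e.g. [0, 2, 4]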

Example

The EqualFrequencyDiscretiser() sorts the variable values into contiguous intervals of equal proportion of observations. The limits of the intervals are calculated according to the quantiles. The number of intervals or quantiles should be determined by the user. The transformer can return the variable as numeric or object (default = numeric).

The EqualFrequencyDiscretiser() works only with numerical variables. A list of variables can be indicated, or the discretiser will automatically select all numerical variables in the train set.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

from feature_engine.discretisation import EqualFrequencyDiscretiser

# Load dataset
data = pd.read_csv('houseprice.csv')

# Separate into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['Id', 'SalePrice'], axis=1),
    data['SalePrice'], test_size=0.3, random_state=0)

# set up the discretisation transformer
disc = EqualFrequencyDiscretiser(q=10, variables=['LotArea', 'GrLivArea'])

# fit the transformer
disc.fit(X_train)

# transform the data
train_t = disc.transform(X_train)
test_t = disc.transform(X_test)

disc.binner_dict_
{'LotArea': [-inf,
  5007.1,
  7164.6,
  8165.700000000001,
  8882.0,
  9536.0,
  10200.0,
  11046.300000000001,
  12166.400000000001,
  14373.9,
  inf],
 'GrLivArea': [-inf,
  912.0,
  1069.6000000000001,
  1211.3000000000002,
  1344.0,
  1479.0,
  1603.2000000000003,
  1716.0,
  1893.0000000000005,
  2166.3999999999996,
  inf]}
# with equal frequency discretisation, each bin contains approximately
# the same number of observations.
train_t.groupby('GrLivArea')['GrLivArea'].count().plot.bar()
plt.ylabel('Number of houses')
[Figure: bar plot of the number of houses per GrLivArea interval, showing approximately equal counts across the bins.]
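
As a quick check, building on the example above, the proportion of observations per bin can also be inspected directly; each bin should hold roughly 1 / q of the training observations:

# proportion of training observations per GrLivArea bin (roughly 0.1 each)
train_t['GrLivArea'].value_counts(normalize=True).sort_index()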