A library of sklearn compatible categorical variable encoders

Overview

Categorical Encoding Methods


A set of scikit-learn-style transformers for encoding categorical variables into numeric form by means of different techniques.

Important Links

Documentation: http://contrib.scikit-learn.org/category_encoders/

Encoding Methods

Unsupervised:

  • Backward Difference Contrast [2][3]
  • BaseN [6]
  • Binary [5]
  • Count [10]
  • Hashing [1]
  • Helmert Contrast [2][3]
  • Ordinal [2][3]
  • One-Hot [2][3]
  • Polynomial Contrast [2][3]
  • Sum Contrast [2][3]

Supervised:

  • CatBoost [11]
  • Generalized Linear Mixed Model [12]
  • James-Stein Estimator [9]
  • LeaveOneOut [4]
  • M-estimator [7]
  • Target Encoding [7]
  • Weight of Evidence [8]

Installation

The package requires: numpy, statsmodels, and scipy.

To install the package, execute:

python setup.py install

or

pip install category_encoders

or

conda install -c conda-forge category_encoders

To install the development version, you may use:

pip install --upgrade git+https://github.com/scikit-learn-contrib/category_encoders

Usage

All of the encoders are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. Supported input formats include numpy arrays and pandas dataframes. If the cols parameter isn't passed, all columns with an object or pandas categorical dtype will be encoded. Please see the docs for transformer-specific configuration options.
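
For instance, a minimal sketch of an encoder inside a scikit-learn Pipeline (the data and column names here are hypothetical):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from category_encoders import OneHotEncoder

# hypothetical toy data: one string-typed feature and a binary target
X = pd.DataFrame({'color': ['red', 'blue', 'red', 'green']})
y = [1, 0, 1, 0]

pipe = Pipeline([
    ('encode', OneHotEncoder(use_cat_names=True)),  # cols=None: encode all object columns
    ('model', LogisticRegression()),
])
pipe.fit(X, y)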

Examples

There are two types of encoders: unsupervised and supervised. An unsupervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

# use binary encoding to encode two categorical features
enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X)

# transform the dataset
numeric_dataset = enc.transform(X)

And a supervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y_train = bunch.target[0:250]
y_test = bunch.target[250:506]
X_train = pd.DataFrame(bunch.data[0:250], columns=bunch.feature_names)
X_test = pd.DataFrame(bunch.data[250:506], columns=bunch.feature_names)

# use target encoding to encode two categorical features
enc = TargetEncoder(cols=['CHAS', 'RAD'])

# transform the datasets
training_numeric_dataset = enc.fit_transform(X_train, y_train)
testing_numeric_dataset = enc.transform(X_test)

When transforming the training data with supervised methods, you should use the fit_transform() method rather than fit().transform(), because the two are not guaranteed to produce the same result. The difference is easiest to see with the LeaveOneOut encoder, which performs a nested cross-validation on the training data in fit_transform() (to decrease over-fitting of the downstream model) but uses all of the training data for scoring in transform() (to get estimates that are as accurate as possible).
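
A minimal sketch of the difference on hypothetical toy data:

import pandas as pd
from category_encoders import LeaveOneOutEncoder

X = pd.DataFrame({'cat': ['a', 'a', 'b', 'b']})
y = pd.Series([1, 0, 1, 1])

enc = LeaveOneOutEncoder(cols=['cat'])

# fit_transform excludes each row's own target from its encoding ...
train_scores = enc.fit_transform(X, y)

# ... while transform on the same rows uses all of the training data,
# so the two results generally differ on the training set
all_data_scores = enc.transform(X)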

Furthermore, you may benefit from the following wrappers (a usage sketch follows the list):

  • PolynomialWrapper, which extends supervised encoders to support polynomial targets
  • NestedCVWrapper, which helps to prevent overfitting
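
For example, a hedged sketch of NestedCVWrapper around the target encoder from the example above (cv=5 is an arbitrary choice):

from category_encoders import TargetEncoder
from category_encoders.wrapper import NestedCVWrapper

# training-data encodings come from out-of-fold estimates, reducing target leakage
enc = NestedCVWrapper(TargetEncoder(cols=['CHAS', 'RAD']), cv=5)
training_numeric_dataset = enc.fit_transform(X_train, y_train)
testing_numeric_dataset = enc.transform(X_test)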

Additional examples and benchmarks can be found in the examples directory.

Contributing

Category encoders is under active development; if you'd like to be involved, we'd love to have you. Check out the CONTRIBUTING.md file or open an issue on the GitHub project to get started.

References

  1. Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.
  2. Contrast Coding Systems for categorical variables. UCLA: Statistical Consulting Group. From https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/.
  3. Gregory Carey (2003). Coding Categorical Variables. From http://psych.colorado.edu/~carey/Courses/PSYC5741/handouts/Coding%20Categorical%20Variables%202006-03-03.pdf
  4. Strategies to encode categorical variables with many categories. From https://www.kaggle.com/c/caterpillar-tube-pricing/discussion/15748#143154.
  5. Beyond One-Hot: an exploration of categorical variables. From http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/
  6. BaseN Encoding and Grid Search in categorical variables. From http://www.willmcginnis.com/2016/12/18/basen-encoding-grid-search-category_encoders/
  7. Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. SIGKDD Explor. Newsl. 3, 1. From http://dx.doi.org/10.1145/507533.507538
  8. Weight of Evidence (WOE) and Information Value Explained. From https://www.listendata.com/2015/03/weight-of-evidence-woe-and-information.html
  9. Empirical Bayes for multiple sample sizes. From http://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/
  10. Simple Count or Frequency Encoding. From https://www.datacamp.com/community/tutorials/encoding-methodologies
  11. Transforming categorical features to numerical features. From https://tech.yandex.com/catboost/doc/dg/concepts/algorithm-main-stages_cat-to-numberic-docpage/
  12. Andrew Gelman and Jennifer Hill (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. From https://faculty.psau.edu.sa/filedownload/doc-12-pdf-a1997d0d31f84d13c1cdc44ac39a8f2c-original.pdf
Comments
  • Add Multi-Process Supported HashingEncoder

    Add a multiprocessing-enabled HashingEncoder called NHashingEncoder. By using multiple processes, it is several times faster than HashingEncoder. On an i5-8259U, encoding 1,000,000 samples with HashingEncoder takes 720+ seconds, while NHashingEncoder with max_process=4 takes only 230+ seconds. On a 16x3.2GHz-CPU, 64 GB memory Linux machine, encoding 20 million samples with HashingEncoder takes over 4 hours, while NHashingEncoder with max_process=8 takes only 20 minutes.

    opened by liushulun 30
  • Ordinal encoder support new handle unknown handle missing

    Here is a first pass at making OrdinalEncoder support the handle_unknown and handle_missing fields, as described in https://github.com/scikit-learn-contrib/categorical-encoding/issues/92.

    Let's go through the fields and their logic.

    handle_unknown

    1. value
      • unknown values map to -1 at transform time
    2. error
      • a ValueError is thrown if new categories are encountered at transform time
    3. return_nan
      • unknown values return NaN at transform time

    Ok, now handle_missing has a configuration for each setting, depending on whether NaN is present at fit time (a code sketch follows the list).

    handle_missing

    1. value
      • NaN present at fit time -> NaN is treated as a category
      • NaN not present at fit time -> transform returns -2
    2. return_nan
      • fit adds a -2 mapping, and transform returns NaN in place of -2
    3. error
      • an error is thrown at fit or transform time
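
    A small sketch of the proposed semantics (assuming they land as described; the data here are hypothetical):

    import numpy as np
    import pandas as pd
    from category_encoders import OrdinalEncoder

    train = pd.DataFrame({'cat': ['a', 'b', 'b']})    # no NaN at fit time
    test = pd.DataFrame({'cat': ['a', 'c', np.nan]})  # 'c' is unknown, NaN is missing

    enc = OrdinalEncoder(cols=['cat'], handle_unknown='value', handle_missing='value')
    enc.fit(train)
    # expected per the rules above: 'a' -> 1, unknown 'c' -> -1, NaN absent at fit -> -2
    print(enc.transform(test)['cat'].tolist())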

    Ok, for a total implementation every encoder will have to be changed. How do we want to avoid gigantic pull requests? Have a long-lived feature branch?

    Ok, thoughts:

    1. I am going to implement cucumber tests for handle_unknown and handle_missing, because trying to keep it all straight in my head is difficult.
    2. I need to go through inverse_transform and check it against every new setting.
    3. My implementation of return_nan makes processing in the downstream encoders more difficult, because we map NaN to -2.
    4. The relationship between value and indicator for the multi-column encoders and the output of the ordinal encoder currently confuses me. I am going to sit down and write it all out so I know what should lead to what.
    5. Check the changes to the test_ordinal_dist test in test_ordinal. Why was None not being treated as a category?

    Tell me what you think and I can get started on the other encoders.

    opened by JohnnyC08 23
  • Fix binary encoder for columntransformer

    I discovered that when using the BinaryEncoder in a sklearn.ColumnTransformer, the passed params are lost.

    This is because the encoder gets instantiated twice in a ColumnTransformer. Currently, params are not registered to self in BinaryEncoder.__init__(), so they are lost when the ColumnTransformer is put to work.

    Disclaimer: I was able to binary encode correctly in a local debug session. However, as so many tests are currently failing on upstream master, it was hard to find out whether my solution has an undesired impact.

    Also, I am confused by ordinal.py L323-L326. Is this a bug? It seems to encode correctly both with -2 and with np.nan...

    opened by datarian 21
  • Quantile encoder

    opened by cmougan 19
  • 1.4.0 Release Organization

    Hey all, been away from the project for a bit, but I'm going back through all of the issues and PRs worked on recently (looks like a bunch of good progress!). Special thanks to @janmotl for all of the work as primary maintainer over the past months.

    Our last release was 1.3.0 on October 14th. Since then, y'all have:

    • Sped up the TargetEncoder and LeaveOneOutEncoder significantly with vectorization
    • Added support for Categorical types in many encoders
    • Implemented get_feature_names in remaining transformers
    • Improved testing coverage and quality
    • Solved edge cases in repeated column names for some transformers
    • Added support for transforming pandas Series as well as DataFrames and numpy arrays
    • Fixed inverse transform for many encoders
    • Lots of smaller performance enhancements and code cleanups

    Which I think is quite a full set of features to constitute a release. I will be opening a separate issue to discuss how we as a community can improve our release cycle, but for now I will be going through open issues and tagging anything that should be included before the v1.4.0 release. Any input on what should or shouldn't be completed prior to release is welcome.

    Thank you all for the work and support this year, and Happy Holidays.

    Release 
    opened by wdm0006 17
  • Behavior of OneHotEncoder handle_unknown option

    I'm trying to understand the behavior (and intent) of the handle_unknown option for OneHotEncoder (and by extension OrdinalEncoder). The docs imply that this should control NaN handling, but the examples below seem to indicate otherwise (category_encoders==1.2.8).

    In [2]: import pandas as pd
       ...: import numpy as np
       ...: from category_encoders import OneHotEncoder
       ...: 
    
    In [3]: X = pd.DataFrame({'a': ['foo', 'bar', 'bar'],
       ...:                   'b': ['qux', np.nan, 'foo']})
       ...: X
       ...: 
    Out[3]: 
         a    b
    0  foo  qux
    1  bar  NaN
    2  bar  foo
    
    In [4]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[4]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [5]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='impute', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[5]: 
       a_foo  a_bar  a_-1  b_qux  b_nan  b_foo  b_-1
    0      1      0     0      1      0      0     0
    1      0      1     0      0      1      0     0
    2      0      1     0      0      0      1     0
    
    In [6]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='error', 
       ...:                         impute_missing=True, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[6]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    In [7]: encoder = OneHotEncoder(cols=['a', 'b'], handle_unknown='ignore', 
       ...:                         impute_missing=False, use_cat_names=True)
       ...: encoder.fit_transform(X)
       ...: 
    Out[7]: 
       a_foo  a_bar  b_qux  b_nan  b_foo
    0      1      0      1      0      0
    1      0      1      0      1      0
    2      0      1      0      0      1
    
    

    In particular, 'error' and 'ignore' give the same behavior, treating missing observations as another category. 'impute' adds constant zero-valued columns but also treats missing observations as another category. Naively, I would have expected behavior similar to pd.get_dummies(X, dummy_na={True|False}), with handle_unknown='ignore' corresponding to dummy_na=False.
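
    For reference, a sketch of the pandas behavior the poster expected, on the same frame:

    import numpy as np
    import pandas as pd

    X = pd.DataFrame({'a': ['foo', 'bar', 'bar'],
                      'b': ['qux', np.nan, 'foo']})

    # no b_nan column: missing observations are simply ignored
    print(pd.get_dummies(X, dummy_na=False))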

    bug 
    opened by multiloc 17
  • Get feature names

    Implemented get_feature_names for HashingEncoder, OneHotEncoder and OrdinalEncoder.

    For my purposes, these work now. Not fully tested; it's more of a proposal for a concept. If liked, I will gladly implement it for the rest of the encoders, incorporating any feedback.

    opened by datarian 16
  • Question: Difference between TargetEncoder and LeaveOneOutEncoder

    It's not really clear to me what the difference is between TargetEncoder and LeaveOneOutEncoder, as both encode using the target with leave-one-out. Can you clarify this, ideally in the docs as well? Does either work for multi-class classification?
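
    A minimal side-by-side sketch on hypothetical toy data shows how the two differ in practice: TargetEncoder applies one smoothed per-category mean to every row, while LeaveOneOutEncoder excludes each row's own target, so values vary within a category:

    import pandas as pd
    from category_encoders import TargetEncoder, LeaveOneOutEncoder

    X = pd.DataFrame({'cat': ['a', 'a', 'a', 'b', 'b', 'b']})
    y = pd.Series([1, 0, 1, 0, 0, 1])

    # one smoothed value per category, identical across its rows
    print(TargetEncoder(cols=['cat']).fit_transform(X, y)['cat'].tolist())

    # each row's own target is left out, so rows of the same category differ
    print(LeaveOneOutEncoder(cols=['cat']).fit_transform(X, y)['cat'].tolist())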

    question 
    opened by amueller 13
  • Implement Target Encoding with Hierarchical Structure Smoothing

    From section 4 of the paper cited in TargetEncoding.

    Instead of choosing the prior probability of the target as the null hypothesis, it is reasonable to replace it with the estimated probability at the next higher level of aggregation in the attribute hierarchy

    In other words, if we have a single sample with zipcode 54321 but 100 samples each with zipcodes 54322 and 54323, we could use the mean of the level-4 aggregation 5432X as the smoothing term for zipcode 54321, instead of the mean over all zipcodes XXXXX.
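
    A rough pandas sketch of the idea (hypothetical data and smoothing constant; this is not the encoder's API):

    import pandas as pd

    df = pd.DataFrame({'zipcode': ['54321', '54322', '54322', '54323', '54323'],
                       'target':  [1, 0, 1, 1, 0]})

    # per-zipcode mean, parent-level mean (first four digits) and per-zipcode counts
    zip_mean = df.groupby('zipcode')['target'].transform('mean')
    parent_mean = df.groupby(df['zipcode'].str[:4])['target'].transform('mean')
    n = df.groupby('zipcode')['target'].transform('count')

    # shrink sparse zipcodes toward the parent mean instead of the global mean
    k = 5.0  # hypothetical smoothing strength
    df['zip_encoded'] = (n * zip_mean + k * parent_mean) / (n + k)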

    This would be a really nice additional piece of functionality to add as an encoder.

    enhancement 
    opened by JoshuaC3 13
  • In the ordinal encoder go ahead and update the existing column instea…

    To fix https://github.com/scikit-learn-contrib/categorical-encoding/issues/100

    The issue was arising because _tmp columns were being appended to the end of the data frame as part of the transform process.

    First, we noticed that the transform process appended a temporary column, dropped the existing column, and renamed the temporary column to the existing column's name.

    So we reduced that to a single step that updates the existing column in place using our mapping, which preserves the order. I wasn't sure why the transform method had that many steps, and a single update seems to keep the tests passing.

    @janmotl I also noticed that in Travis the python3 step seems to be running Python 2.7 instead of Python 3. In install.sh I see mentions of conda create, and in the CI logs I see mention of a virtualenv being set up, which I don't see mentioned in the project. Perhaps the Travis cache needs to be cleared?

    opened by JohnnyC08 13
  • Differing dimensions for training and test

    Hi,

    I would like to fit encodings on my training set and then using this fitted encoding to transform both the training and the test set:

    import category_encoders as ce
    
    train = ['Brunswick East', 'Fitzroy', 'Williamstown', 'Newport', 'Balwyn North', 'Doncaster', 'Melbourne', 'Albert Park', 'Bentleigh', 'Northcote']
    test = ['Fitzroy North', 'Fitzroy', 'Richmond', 'Surrey Hills', 'Blackburn', 'Port Melbourne', 'Footscray', 'Yarraville', 'Carnegie', 'Surrey Hills']
    
    encoder = ce.HelmertEncoder()
    encoder.fit(train)
    
    train_t = encoder.transform(train)
    test_t = encoder.transform(test)
    
    print(train_t.shape)
    # (10, 10)
    print(test_t.shape)
    # (10, 2)
    

    The problem is that the dimensions do not match. What am I doing wrong, or how can I fix this issue?

    Best regards, Felix
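
    With recent versions, one hedged fix sketch is to pass DataFrames and set handle_unknown='value', so the column set is fixed at fit time and unknown test categories map onto it, keeping train and test widths identical:

    import pandas as pd
    import category_encoders as ce

    train = pd.DataFrame({'suburb': ['Brunswick East', 'Fitzroy', 'Williamstown']})
    test = pd.DataFrame({'suburb': ['Fitzroy North', 'Fitzroy', 'Richmond']})

    encoder = ce.HelmertEncoder(cols=['suburb'], handle_unknown='value')
    encoder.fit(train)
    # both transforms now produce the same number of columns
    print(encoder.transform(train).shape, encoder.transform(test).shape)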

    opened by FelixNeutatz 12
  • ValueError: `X` and `y` both have indexes, but they do not match.

    Expected Behavior

    When running any of the category encoders (e.g. TargetEncoder()) within a pipeline through permutation_test_score(), it errors out with the message above. The error occurs in the convert_inputs() function, which checks if any(X.index != y.index): before raising the error.

    Actual Behavior

    The error is not correct and shouldn't occur. When I ran the same check on my input X (a dataframe) and y (a series), the error didn't occur.

    In fact, when I load the input data, after splitting it into X and y and label encoding y, I explicitly convert y into a pd.Series and assign it X.index, so the two indexes are in fact identical.

    If, in contrast, I do not convert the label-encoded y into a pd.Series and leave it as an ndarray, then this error doesn't occur!

    Also, note that the same pipeline works absolutely fine when fitted directly with the same X dataframe and y series.

    Steps to Reproduce the Problem

    See an example of my pipeline below.

    1. Create an arbitrary pipeline as follows:
    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import SGDClassifier
    from category_encoders import TargetEncoder
    
    test_pipe = Pipeline([('enc', TargetEncoder()), ('clf', SGDClassifier(loss='log_loss'))])
    
    2. Run:
    from sklearn.model_selection import permutation_test_score
    
    score, perm_scores, pvalue = permutation_test_score(test_pipe, X, y)
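
    Based on the reporter's observation above, a hedged workaround sketch (reusing test_pipe, X and y from the steps): hand y over as a plain ndarray, so there is no index to mismatch:

    import numpy as np

    # permutation_test_score shuffles y internally; an ndarray carries no index,
    # so the X.index vs y.index comparison in convert_inputs() is skipped
    score, perm_scores, pvalue = permutation_test_score(test_pipe, X, np.asarray(y))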
    

    Specifications

    • Version: 2.5.1.post0
    • Platform: Python 3.10.8
    • Subsystem: Pandas 1.5.1
    opened by RNarayan73 1
  • OneHotEncoder: handle_missing = 'ignore' would be very useful

    Expected Behavior

    It would be nice to be able to ignore missing values instead of creating new columns with an "_nan" suffix, just like it is possible with pandas. What do you think?

    Actual Behavior

    This option doesn't exist in the current latest version (according to my knowledge).

    Steps to Reproduce

    import pandas as pd
    import numpy as np
    from category_encoders import OneHotEncoder
    
    encoder = OneHotEncoder(
        cols=None,  # all non-numeric
        return_df=True,
        handle_missing="value",  # would be nice to have the option 'ignore'
        use_cat_names=True,
    )
    df = pd.DataFrame(
        {"this": ["GREEN", "GREEN", "YELLOW", "YELLOW"], "that": ["A", "B", "A", np.nan]}
    )
    
    encoder.fit_transform(df) # unwanted result
    pd.get_dummies(df, dummy_na=False) # wanted result
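
    Until such an option exists, a hedged workaround is to drop the generated "_nan" indicator columns after the fact (reusing encoder and df from above):

    out = encoder.fit_transform(df)
    # post-process: drop the missing-value indicator columns
    out = out.drop(columns=[c for c in out.columns if c.endswith('_nan')])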
    

    Specifications

    • Version: 2.5.1.post0
    opened by woodly0 0
  • Intercept in Contrast Coding Schemes

    Expected Behavior

    The constant intercept column (all values 1) should not be added when applying contrast coding schemes (i.e. backward difference, sum, polynomial and Helmert coding).

    I don't think this intercept column is needed. If you fit a supervised learning model, it will probably help to remove the intercept column. I think it is there because you have to add the intercept when fitting linear models with statsmodels.
    However, I don't like that the output of an encoder would then depend on whether the intercept column is already there or not: e.g. if I first apply encoder A on column A and then encoder B on column B, the intercept column of B overwrites A's intercept column rather than adding a new column. Also, if I have (for some reason) a non-constant column called intercept, it would get overwritten.

    Any opinion? Am I missing something? Is the intercept necessary?

    Actual Behavior

    A constant column with all values 1 is added

    Steps to Reproduce the Problem

    Run transform on any fitted contrast coding encoder, e.g.

        import category_encoders as encoders

        train = ['A', 'B', 'C']
        encoder = encoders.BackwardDifferenceEncoder(handle_unknown='value', handle_missing='value')
        encoder.fit_transform(train)
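
    While the question is open, a hedged workaround sketch is to drop the constant column after transforming (reusing the snippet above):

            out = encoder.fit_transform(train)
            # drop the constant intercept column added by the contrast encoder
            out = out.drop(columns=['intercept'])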
    
    opened by PaulWestenthanner 3
  • No need to check if # of dimensions of testing set align with training set in target_encoder

    https://github.com/scikit-learn-contrib/category_encoders/blob/6a13c14919d56fed8177a173d4b3b82c5ea2fef5/category_encoders/utils.py#L322-L323

    For the function _check_transform_inputs(), I do not want it to report an error when the number of dimensions of the test set does not align with the training set. However, by default it must align. Considering that the purpose of the target encoder is to transform the designated columns using target encoding, and nothing else, we logically do not have to validate dimension alignment.

    opened by hongG1997EQ 1
  • Memory increase of WOEEncoder for newer category_encoders version

    Memory increase of WOEEncoder for category_encoders versions >=2.0.0

    Hi, I noticed another memory issue with WOEEncoder. I submitted a similar bug before in #335; the difference between the two is the encoder method used and the dataset. To distinguish between the two encoder APIs, I am submitting a new bug report.

    Expected Behavior

    Similar memory usage

    Actual Behavior

    According to the experiment results, when the category_encoders version is 2.0.0 or higher, the memory usage of weight_enc.fit(train[weight_encode], train['target']) increases from 58 MB to 206 MB.

    | Memory (MB) | Version |
    | -- | -- |
    | 209 | 2.3.0 |
    | 209 | 2.2.2 |
    | 209 | 2.1.0 |
    | 209 | 2.0.0 |
    | 58 | 1.3.0 |

    Steps to Reproduce the Problem

    Step 1: Download the dataset

    train.zip

    Step 2: install category_encoders

    pip install category_encoders==#version#
    

    Step 3: change category_encoders version and save the memory usage

    import numpy as np 
    import pandas as pd 
    train = pd.read_csv('train.csv')
    test = pd.read_csv('test.csv')
    columns = [x for x in train.columns if x != 'target']
    object_col_label = ['bin_0','bin_1','bin_2','bin_3','bin_4']
    one_hot_encode = ['nom_0', 'nom_1', 'nom_2', 'nom_3', 'nom_4']
    target_encode = ['nom_5', 'nom_6', 'nom_7', 'nom_8', 'nom_9']
    weight_encode = target_encode + ['ord_4', 'ord_5' ,'ord_3'] + one_hot_encode + object_col_label
    import category_encoders as ce
    weight_enc = ce.woe.WOEEncoder(cols=weight_encode)
    import tracemalloc
    tracemalloc.start()
    weight_enc.fit(train[weight_encode], train['target'])
    current3, peak3 = tracemalloc.get_traced_memory()
    print("Get_dummies memory usage is {",current3 /1024/1024,"}MB; Peak memory was :{",peak3 / 1024/1024,"}MB")
    

    Specifications

    • Version: 2.3.0, 2.2.2, 2.1.0, 2.0.0, 1.3.0
    • Platform: Ubuntu 16.04
    • OS: Ubuntu
    • CPU: Intel(R) Core(TM) i9-9900K CPU
    • GPU: TITAN V

    opened by Piecer-plc 1
  • Poor performance of OneHotEncoder for category_encoders version >=2.0.0

    Expected Behavior

    Similar memory usage across the different category_encoders versions, or better performance for newer versions.

    Actual Behavior

    According to the experiment results, when the category_encoders version is 2.0.0 or higher, the performance of the model is worse.

    | Memory (MB) | Version |
    | -- | -- |
    | 896 | 2.3.0 |
    | 896 | 2.2.2 |
    | 896 | 2.1.0 |
    | 896 | 2.0.0 |
    | 288 | 1.3.0 |

    Steps to Reproduce the Problem

    Step 1: download the above dataset train & test (63 MB)

    Step 2: install category_encoders

    pip install category_encoders==#version#
    

    Step 3: change category_encoders version and save the memory usage

    import numpy as np 
    import pandas as pd
    import category_encoders as ce
    import tracemalloc
    df_train = pd.read_csv("train.csv")
    df_test = pd.read_csv("test.csv")
    df_train.drop("id", axis=1, inplace=True) 
    df_test.drop("id", axis=1, inplace=True) 
    cat_labels = [f"cat{i}" for i in range(10)]
    
    tracemalloc.start()
    onehot_encoder = ce.one_hot.OneHotEncoder() 
    onehot_encoder.fit(pd.concat([df_train[cat_labels], df_test[cat_labels]], axis=0))
    train_ohe = onehot_encoder.transform(df_train[cat_labels]) 
    test_ohe = onehot_encoder.transform(df_test[cat_labels]) 
    
    current3, peak3 = tracemalloc.get_traced_memory()
    print("Get_dummies memory usage is {",current3 /1024/1024,"}MB; Peak memory was :{",peak3 / 1024/1024,"}MB")
    

    Specifications

    • Version: 2.3.0, 2.2.2, 2.1.0, 2.0.0, 1.3.0
    • Platform: ubuntu 16.4
    • OS : Ubuntu
    • CPU : Intel(R) Core(TM) i9-9900K CPU
    • GPU : TITAN V
    opened by DSOTM-pf 3