Uplift modeling and causal inference with machine learning algorithms

Overview


Disclaimer

This project is stable and being incubated for long-term support. It may contain new experimental code, for which APIs are subject to change.

Causal ML: A Python Package for Uplift Modeling and Causal Inference with ML

Causal ML is a Python package that provides a suite of uplift modeling and causal inference methods using machine learning algorithms based on recent research. It provides a standard interface that allows users to estimate the Conditional Average Treatment Effect (CATE) or Individual Treatment Effect (ITE) from experimental or observational data. Essentially, it estimates the causal impact of an intervention T on an outcome Y for users with observed features X, without strong assumptions on the model form. Typical use cases include:

  • Campaign targeting optimization: An important lever to increase ROI in an advertising campaign is to target the ad to the set of customers who will have a favorable response in a given KPI such as engagement or sales. CATE identifies these customers by estimating the effect of ad exposure on the KPI at the individual level, using A/B experiment or historical observational data.

  • Personalized engagement: A company has multiple options to interact with its customers such as different product choices in up-sell or messaging channels for communications. One can use CATE to estimate the heterogeneous treatment effect for each customer and treatment option combination for an optimal personalized recommendation system.

The package currently supports the following methods:

  • Tree-based algorithms
    • Uplift tree/random forests on KL divergence, Euclidean Distance, and Chi-Square
    • Uplift tree/random forests on Contextual Treatment Selection
  • Meta-learner algorithms
    • S-learner
    • T-learner
    • X-learner
    • R-learner
  • Instrumental variables algorithms
    • 2-Stage Least Squares (2SLS)

Installation

Prerequisites

Install dependencies:

$ pip install -r requirements.txt

Install from pip:

$ pip install causalml

Install from source:

$ git clone https://github.com/uber/causalml.git
$ cd causalml
$ python setup.py build_ext --inplace
$ python setup.py install
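
A quick way to sanity-check the installation (this assumes the package exposes a __version__ attribute, which may vary by release):

$ python -c "import causalml; print(causalml.__version__)"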

Quick Start

Average Treatment Effect Estimation with S, T, X, and R Learners

from causalml.inference.meta import LRSRegressor
from causalml.inference.meta import XGBTRegressor, MLPTRegressor
from causalml.inference.meta import BaseXRegressor
from causalml.inference.meta import BaseRRegressor
from xgboost import XGBRegressor
from causalml.dataset import synthetic_data

y, X, treatment, _, _, e = synthetic_data(mode=1, n=1000, p=5, sigma=1.0)

lr = LRSRegressor()
te, lb, ub = lr.estimate_ate(X, treatment, y)
print('Average Treatment Effect (Linear Regression): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))

xg = XGBTRegressor(random_state=42)
te, lb, ub = xg.estimate_ate(X, treatment, y)
print('Average Treatment Effect (XGBoost): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))

nn = MLPTRegressor(hidden_layer_sizes=(10, 10),
                   learning_rate_init=.1,
                   early_stopping=True,
                   random_state=42)
te, lb, ub = nn.estimate_ate(X, treatment, y)
print('Average Treatment Effect (Neural Network (MLP)): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))

xl = BaseXRegressor(learner=XGBRegressor(random_state=42))
te, lb, ub = xl.estimate_ate(X, e, treatment, y)
print('Average Treatment Effect (BaseXRegressor using XGBoost): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))

rl = BaseRRegressor(learner=XGBRegressor(random_state=42))
te, lb, ub = rl.estimate_ate(X=X, p=e, treatment=treatment, y=y)
print('Average Treatment Effect (BaseRRegressor using XGBoost): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))

See the Meta-learner example notebook for details.
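
The same learners can also return individual-level CATE estimates rather than only the ATE. Below is a minimal sketch that continues the example above; the fit_predict argument order mirrors the estimate_ate calls in this Quick Start and may differ across causalml versions, and the top-10% cutoff is purely illustrative.

import numpy as np

# Per-user CATE estimates from the X-learner fitted above (argument order as in estimate_ate above)
cate = xl.fit_predict(X, e, treatment, y)

# Example: rank users by estimated uplift and target the top 10%
top_k = int(0.1 * len(cate))
target_idx = np.argsort(-cate.flatten())[:top_k]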

Interpretable Causal ML

Causal ML provides methods to interpret the trained treatment effect models, as shown below.

Meta Learner Feature Importances

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor

from causalml.inference.meta import BaseSRegressor, BaseTRegressor, BaseXRegressor, BaseRRegressor
from causalml.dataset.regression import synthetic_data

# Load synthetic data
y, X, treatment, tau, b, e = synthetic_data(mode=1, n=10000, p=25, sigma=0.5)
w_multi = np.array(['treatment_A' if x == 1 else 'control' for x in treatment])  # customize treatment/control names
feature_names = ['feature_{}'.format(i) for i in range(X.shape[1])]  # name the synthetic features

slearner = BaseSRegressor(LGBMRegressor(), control_name='control')
slearner.estimate_ate(X, w_multi, y)
slearner_tau = slearner.fit_predict(X, w_multi, y)

model_tau_feature = RandomForestRegressor()  # specify model for model_tau_feature

slearner.get_importance(X=X, tau=slearner_tau, model_tau_feature=model_tau_feature,
                        normalize=True, method='auto', features=feature_names)

# Using the feature_importances_ method in the base learner (LGBMRegressor() in this example)
slearner.plot_importance(X=X, tau=slearner_tau, normalize=True, method='auto')

# Using eli5's PermutationImportance
slearner.plot_importance(X=X, tau=slearner_tau, normalize=True, method='permutation')

# Using SHAP
shap_slearner = slearner.get_shap_values(X=X, tau=slearner_tau)

# Plot shap values without specifying shap_dict
slearner.plot_shap_values(X=X, tau=slearner_tau)

# Plot shap values WITH specifying shap_dict
slearner.plot_shap_values(X=X, shap_dict=shap_slearner)

# interaction_idx set to 'auto' (searches for feature with greatest approximate interaction)
slearner.plot_shap_dependence(treatment_group='treatment_A',
                              feature_idx=1,
                              X=X,
                              tau=slearner_tau,
                              interaction_idx='auto')

See the feature interpretations example notebook for details.

Uplift Tree Visualization

from IPython.display import Image
from causalml.inference.tree import UpliftTreeClassifier, UpliftRandomForestClassifier
from causalml.inference.tree import uplift_tree_string, uplift_tree_plot
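
# Note: df below is assumed to be a pandas DataFrame that contains the feature columns listed in
# features, a 'treatment_group_key' column, and a binary 'conversion' outcome column.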

uplift_model = UpliftTreeClassifier(max_depth=5, min_samples_leaf=200, min_samples_treatment=50,
                                    n_reg=100, evaluationFunction='KL', control_name='control')

uplift_model.fit(df[features].values,
                 treatment=df['treatment_group_key'].values,
                 y=df['conversion'].values)

graph = uplift_tree_plot(uplift_model.fitted_uplift_tree, features)
Image(graph.create_png())

See the Uplift Tree visualization example notebook for details.
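
The tree module imported above also provides UpliftRandomForestClassifier. The following is a minimal sketch, not taken from the notebook, of fitting an uplift random forest with the Contextual Treatment Selection (CTS) criterion on the same df and predicting uplift; the hyperparameter values are illustrative assumptions only.

uplift_rf = UpliftRandomForestClassifier(n_estimators=100,
                                         max_depth=5,
                                         evaluationFunction='CTS',
                                         control_name='control')

uplift_rf.fit(df[features].values,
              treatment=df['treatment_group_key'].values,
              y=df['conversion'].values)

# Predicted uplift over control (one column per treatment group in recent versions)
uplift_preds = uplift_rf.predict(df[features].values)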

Contributing

We welcome community contributors to the project. Before you start, please read our code of conduct and check out the contributing guidelines first.

Contributors

Versioning

We document versions and changes in our changelog.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

References

Documentation

Citation

To cite CausalML in publications, you can refer to the following sources:

Whitepaper: CausalML: Python Package for Causal Machine Learning

Bibtex:

@misc{chen2020causalml,
  title={CausalML: Python Package for Causal Machine Learning},
  author={Huigang Chen and Totte Harinen and Jeong-Yoon Lee and Mike Yung and Zhenyu Zhao},
  year={2020},
  eprint={2002.11631},
  archivePrefix={arXiv},
  primaryClass={cs.CY}
}

Papers

  • Nicholas J Radcliffe and Patrick D Surry. Real-world uplift modelling with significance based uplift trees. White Paper TR-2011-1, Stochastic Solutions, 2011.
  • Yan Zhao, Xiao Fang, and David Simchi-Levi. Uplift modeling with multiple treatments and general response types. Proceedings of the 2017 SIAM International Conference on Data Mining, SIAM, 2017.
  • Sören R. Künzel, Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences, 2019.
  • Xinkun Nie and Stefan Wager. Quasi-Oracle Estimation of Heterogeneous Treatment Effects. Atlantic Causal Inference Conference, 2018.

Related projects

  • uplift: uplift models in R
  • grf: generalized random forests that include heterogeneous treatment effect estimation in R
  • rlearner: an R package that implements the R-learner
  • DoWhy: Causal inference in Python based on Judea Pearl's do-calculus
  • EconML: A Python package that implements heterogeneous treatment effect estimators from econometrics and machine learning methods

Comments
  • Installation fails with pip and from source in Python 3.7

    Installation fails with pip and from source in Python 3.7

    Hey there,

    I was trying to add causalml as a dependency to our project. When trying pip install causalml this error occurred:

    Collecting causalml
      Using cached https://files.pythonhosted.org/packages/4e/92/fb9af85303fc6b54bf824c36572c30d9a503e9a70a043d1f135f9c03c1fc/causalml-0.4.0.tar.gz
        ERROR: Complete output from command python setup.py egg_info:
        ERROR: Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/private/var/folders/54/4kkg30g93bn1nqsz25r0qkfw0000gq/T/pip-install-hjz2ribq/causalml/setup.py", line 3, in <module>
            from Cython.Build import cythonize
        ModuleNotFoundError: No module named 'Cython'
        ----------------------------------------
    ERROR: Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/54/4kkg30g93bn1nqsz25r0qkfw0000gq/T/pip-install-hjz2ribq/causalml/
    

    After installing Cython and running pip install causalml again, this occurred:

     Error compiling Cython file:
    ------------------------------------------------------------
    ...
                if sample_weight != NULL:
                    # the weights of 1 and 1 + eps are used for control and treatment respectively
                    is_treated = (sample_weight[i] - 1.0) * one_over_eps
    
                # assume that there is only one output (k = 0)
                y_ik = y[i * self.y_stride]
                                ^
    ------------------------------------------------------------
    
    causalml/inference/tree/causaltree.pyx:163:29: Accessing Python attribute not allowed without gil
    Traceback (most recent call last):
      File "setup.py", line 43, in <module>
        ext_modules=cythonize(extensions),
      File "/Users/MaximilianFranz/anaconda3/envs/justcause/lib/python3.7/site-packages/Cython/Build/Dependencies.py", line 1096, in cythonize
        cythonize_one(*args)
      File "/Users/MaximilianFranz/anaconda3/envs/justcause/lib/python3.7/site-packages/Cython/Build/Dependencies.py", line 1219, in cythonize_one
        raise CompileError(None, pyx_file)
    Cython.Compiler.Errors.CompileError: causalml/inference/tree/causaltree.pyx
    

    I'm not familiar with the intricacies of Cython and compiling .pyx files, thus I can't point at any potential underlying issue. Maybe it's just me?

    Any ideas or tips how to solve this?

    installation 
    opened by MaximilianFranz 20
  • fix issue filter method failure with NaNs in the data issue #348, add…

    fix issue filter method failure with NaNs in the data issue #348, add…

    … null_impute using sklearn SimpleImputer

    Proposed changes

    NaNs present in the data caused pd.qcut at https://github.com/uber/causalml/blob/master/causalml/feature_selection/filters.py#L341 to return np.nan, as NaN values don't fall in any specified bin range. This caused an issue, since x_bins.max() returned np.nan, which in turn caused an error at https://github.com/uber/causalml/blob/master/causalml/feature_selection/filters.py#L344

    Types of changes

    What types of changes does your code introduce to CausalML? Put an x in the boxes that apply

    • [x] Bugfix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] Documentation Update (if none of the other choices apply)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [x] I have read the CONTRIBUTING doc
    • [x] I have signed the CLA
    • [x] Lint and unit tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [x] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in downstream modules

    Further comments

    First, to fix the issue raised in #348, changed the code at https://github.com/uber/causalml/blob/master/causalml/feature_selection/filters.py#L344 from x_bins.max() to np.nanmax(x_bins).astype(int), which no longer returns np.nan as the max value since it picks the max from the given input while ignoring np.nan. Also introduced NaN imputation: as suggested by the author in the aforementioned issue #348, added a way to pass a null_impute argument to impute the data, using SimpleImputer from sklearn.
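
    As a small illustration of the behavior described above (a sketch with made-up values, not code from the PR): pd.qcut leaves NaN inputs unbinned, so taking the maximum over the bin codes has to ignore NaN, which np.nanmax does.

    import numpy as np
    import pandas as pd

    x = np.array([0.2, 1.5, np.nan, 3.1, 0.7])
    x_bins = pd.qcut(x, q=2, labels=False)  # the NaN entry stays NaN and falls in no bin

    print(x_bins.max())            # nan, because ndarray.max() propagates NaN
    print(int(np.nanmax(x_bins)))  # 1, ignoring NaN as in the proposed fix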

    opened by manojbalaji1 10
  • install issues

    install issues

    Describe the bug

    To Reproduce: I followed your instructions

    1. git clone (you need to add this in your top readme)
    2. pip install -r requirements...

    During the install:

    ERROR: tensorflow 2.4.1 has requirement h5py~=2.10.0, but you'll have h5py 3.1.0 which is incompatible.
    ERROR: tensorflow 2.4.1 has requirement numpy~=1.19.2, but you'll have numpy 1.18.5 which is incompatible.

    This is an easy fix in requirements.txt; just bringing it to your attention.

    Expected behavior

    Install should complete without errors.

    Environment (please complete the following information):

    • OS: macOS
    • Python Version: 3.7.2
    • Versions of Major Dependencies (pandas, scikit-learn, cython): whatever is in your requirements.txt (I'm installing in a venv)

    installation 
    opened by knail1 10
  • Continuous Response Variable: P-values

    Continuous Response Variable: P-values

    I'm training a random forest model, where the response variable is continuous. When I look at one tree from the forest, the p-values are always NaN. Why is that?

    enhancement 
    opened by soodimilanlouei 9
  • Causal trees interpretation example

    Causal trees interpretation example

    Proposed changes

    Hi, I included causal_trees_interpretation.ipynb with an example of sklearn feature importance and shap for causal trees. The related PR for causal tree support in shap: https://github.com/slundberg/shap/pull/2654

    Additional minor changes:

    • Makefile update to prevent Cython compilation errors in case of missing Cython or sklearn.
    • Basic Cython directives in builder.pyx to make Cython compilation work as expected.

    Types of changes

    What types of changes does your code introduce to CausalML? Put an x in the boxes that apply

    • [x] Bugfix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] Documentation Update (if none of the other choices apply)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [ ] I have read the CONTRIBUTING doc
    • [ ] I have signed the CLA
    • [ ] Lint and unit tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in downstream modules

    Further comments

    If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc. This PR template is adopted from appium.

    example 
    opened by alexander-pv 8
  • ElasticNetPropensityModel class is empty

    ElasticNetPropensityModel class is empty

    Based on the docstring of the fit function in inference/meta/xlearner.py, when p (propensity score) is None, ElasticNetPropensityModel() is used to generate the propensity scores. However, it appears to me that the ElasticNetPropensityModel in causalml/propensity.py is empty. Is this a bug, or am I missing something? I don't get an error message when fitting an X-learner.

    class ElasticNetPropensityModel(LogisticRegressionPropensityModel): pass
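
    For context, this is ordinary Python inheritance rather than anything causalml-specific: an empty subclass still inherits everything from its parent, so a pass body alone does not make the class non-functional. A generic sketch (the classes below are stand-ins, not the real causalml code):

    class ParentPropensityModel:
        # stand-in for a parent propensity model class
        def fit_predict(self, X, y):
            # fit a model and return predicted propensity scores (dummy values here)
            return [0.5 for _ in X]

    class ChildPropensityModel(ParentPropensityModel):
        pass

    pm = ChildPropensityModel()
    print(pm.fit_predict([[0.1], [0.2]], [0, 1]))  # inherited method still works: [0.5, 0.5]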

    enhancement question 
    opened by shaddyab 8
  • Some fundamental questions about CausalML

    Some fundamental questions about CausalML

    I have some questions about your package.

    I plan to use your code to calculate the CATE for each individual, but I have multiple categorical treatments (0, 1, 2). When I read your documentation, I got confused about how to use it and which method to use. Before using your package I was using the CausalLift package to calculate the CATE for each individual, but CausalLift only supports binary treatment, not multiple treatments. In your examples I don't find where to use multiple treatments, and I also don't see an example for a classification problem, like an XGBoost classifier with multiple treatments and a binary outcome.

    It will be great If you explain here how should I use your package step by step.

    Thanks

    question 
    opened by Jami1141 8
  • Causalml ATE and Learning Rate Sensitivity

    Causalml ATE and Learning Rate Sensitivity

    The default learning rate for LGBMRegressor in CausalML is 0.1, but I wanted to see how it would perform with other learning rates. As you can see below, the ATE can change drastically with the learning rate of the base learner. Can anyone help explain this behavior?

    Here is my setup. I generate some synthetic data:

    from causalml.dataset import synthetic_data

    # train data
    y, X, treatment, _, _, e = synthetic_data(mode=1, n=10000, p=5, sigma=1.0)
    print(X.shape, y.shape, treatment.shape)

    # test data
    y_test, X_test, treatment_test, _, _, _ = synthetic_data(mode=1, n=5000, p=5, sigma=1.0)
    print(X_test.shape, y_test.shape, treatment_test.shape)
    

    and built a simple T-learner with different learning rates:

    from lightgbm import LGBMRegressor
    from causalml.inference.meta import BaseTRegressor

    for lr in [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]:
        learner = LGBMRegressor(n_estimators=100, learning_rate=lr)
        learner_t = BaseTRegressor(learner=learner, control_name=0)
        ate_train, _, _ = learner_t.estimate_ate(X=X, treatment=treatment, y=y)
        
        # get ATE for test data
        te_test= learner_t.predict(X=X_test)
        ate_test = te_test.mean(axis=0)[0]
        
        print(f"learning rate: {lr}, train ATE: {ate_train[0]:.2f}")
        print(f"learning rate: {lr}, test ATE: {ate_test:.2f}")
            
        print("-"*50)
    

    Here is the output:

    learning rate: 1e-05, train ATE: 0.87
    learning rate: 1e-05, test ATE: 0.87
    --------------------------------------------------
    learning rate: 0.0001, train ATE: 0.87
    learning rate: 0.0001, test ATE: 0.87
    --------------------------------------------------
    learning rate: 0.001, train ATE: 0.84
    learning rate: 0.001, test ATE: 0.84
    --------------------------------------------------
    learning rate: 0.01, train ATE: 0.69
    learning rate: 0.01, test ATE: 0.69
    --------------------------------------------------
    learning rate: 0.1, train ATE: 0.49
    learning rate: 0.1, test ATE: 0.50
    --------------------------------------------------
    
    enhancement 
    opened by mave5 7
  • Propensity score requirement in X-learner and R-learner

    Propensity score requirement in X-learner and R-learner

    First, thanks for open-sourcing this package. I've learned a lot!

    I'm wondering if there's a particular reason why the user must pass self-generated propensity scores to be used in the X-learner and R-learner. While it most likely forces the user to understand how well calibrated the scores are, I would think there's validity in estimating them 'under the hood' using the learner or another supplied model parameterization (similar to how BaseRLearner.outcome_learner can be optionally specified).

    In terms of the X-learner, while Künzel et al. state they have good experiences specifying g(x) as the propensity scores, it's worth noting that g(x) is simply a weighting function chosen to minimize the variance of the CATEs. Stating this in the documentation / naming conventions might be helpful also?

    Thanks
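
    Related to the point above, here is a minimal sketch of estimating propensity scores up front and passing them to the X-learner. The ElasticNetPropensityModel usage and the positional argument order (which follows the Quick Start example in this README) are assumptions and may differ across causalml versions.

    from xgboost import XGBRegressor
    from causalml.dataset import synthetic_data
    from causalml.inference.meta import BaseXRegressor
    from causalml.propensity import ElasticNetPropensityModel

    y, X, treatment, _, _, _ = synthetic_data(mode=1, n=1000, p=5, sigma=1.0)

    # Estimate propensity scores explicitly instead of relying on the learner's default
    pm = ElasticNetPropensityModel()
    p = pm.fit_predict(X, treatment)

    xl = BaseXRegressor(learner=XGBRegressor(random_state=42))
    te, lb, ub = xl.estimate_ate(X, p, treatment, y)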

    documentation enhancement 
    opened by nsalas24 7
  • Fixing handling of series inputs to meta learners

    Fixing handling of series inputs to meta learners

    For your consideration. The meta-learners have a check, check_control_in_treatment, which breaks on pandas Series (which are a handy way to organize data), as the in operator doesn't work directly on Series objects.

    • Also dropped the assert for the R and X learner, as the assert should (I believe) be handled in the base (just commented out, can remove)
    • Also changed the version in setup to be pulled from __init__.py to keep the version recorded in one and only one place.

    Happy to iterate.

    refactoring 
    opened by lawinslow 7
  • Question: Propensity Score?

    Question: Propensity Score?

    Great library, I'm just starting to take a look, but I didn't immediately see documentation to answer my question: do the available methods make any distinction between observational and experimental data, and specifically are propensity scores (propensity to be treated) leveraged for observational data, perhaps as weights, through matching, or as a covariate?

    question 
    opened by BrianMiner 7
  • some problems in installing causalml

    some problems in installing causalml

    When installing causalml, I tried several methods including pip and conda, but none of them installed successfully. Here are the errors I got with the pip command (screenshots attached); I don't know how to solve it.

    opened by deft899 0
  • Add capability to predict the outcomes to causal tree/forest

    Add capability to predict the outcomes to causal tree/forest

    While we use CausalML to predict the effects, one often wants to know the outcome values of the control and/or treatment given the covariates at the same time. Even though one could build a separate prediction tree/forest for this purpose, not only is that approach more inconvenient and expensive, but it is also hard to ensure that the prediction model agrees with the causal model. (It seems that the nodes of CausalTree/CausalRandomForest already contain the necessary values, e.g. ct_y_sum and ct_count, etc. It currently lacks a way to aggregate them at the API level.)

    enhancement 
    opened by winston-zillow 0
  • UpliftRandomForestClassifier Model object cannot be pickled when saving as joblib

    UpliftRandomForestClassifier Model object cannot be pickled when saving as joblib

    UpliftRandomForestClassifier model objects cannot be pickled when saving with joblib. There seems to be no way to save an UpliftRandomForestClassifier model object.

    bug 
    opened by maheshv26 1
  • Add the flags of the treatment and control groups to the matched data obtained by propensity score matching

    Add the flags of the treatment and control groups to the matched data obtained by propensity score matching

    …frame

    Proposed changes

    Add the flags of the treatment and control groups to the matched data obtained by propensity score matching, in order to facilitate analysis and processing of the matched data.

    Types of changes

    What types of changes does your code introduce to CausalML? Put an x in the boxes that apply

    • [ ] Bugfix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] Documentation Update (if none of the other choices apply)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [x] I have read the CONTRIBUTING doc
    • [x] I have signed the CLA
    • [x] Lint and unit tests pass locally with my changes
    • [x] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in downstream modules

    Further comments

    If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc. This PR template is adopted from appium.

    opened by chenzhongd 1
Releases (v0.13.0)
  • v0.13.0(Sep 2, 2022)

    • CausalML surpassed 1MM downloads on PyPI and 3,200 stars on GitHub. Thanks for choosing CausalML and supporting us on GitHub.
    • We have 7 new contributors @saiwing-yeung, @lixuan12315, @aldenrogers, @vincewu51, @AlkanSte, @enzoliao, and @alexander-pv. Thanks for your contributions!
    • @alexander-pv revamped CausalTreeRegressor and added CausalRandomForestRegressor with more seamless integration with scikit-learn's Cython tree module. He also added integration with shap for causal tree/random forest interpretation. Please check out the example notebook.
    • We dropped the support for Python 3.6 and removed its test workflow.

    What's Changed

    • Fix typo (% -> $) by @saiwing-yeung in https://github.com/uber/causalml/pull/488
    • Add function for calculating PNS bounds by @t-tte in https://github.com/uber/causalml/pull/482
    • Fix hard coding bug by @t-tte in https://github.com/uber/causalml/pull/492
    • Update README of conda install and instruction of maintain in conda-forge by @ppstacy in https://github.com/uber/causalml/pull/485
    • Update examples.rst by @lixuan12315 in https://github.com/uber/causalml/pull/496
    • Fix incorrect effect_learner_objective in XGBRRegressor by @jeongyoonlee in https://github.com/uber/causalml/pull/504
    • Fix Filter F doesn't work with latest statsmodels' F test f-value format by @paullo0106 in https://github.com/uber/causalml/pull/505
    • Exclude tests in setup.py by @aldenrogers in https://github.com/uber/causalml/pull/508
    • Enabling higher orders feature importance for F filter and LR filter by @zhenyuz0500 in https://github.com/uber/causalml/pull/509
    • Ate pretrain 0506 by @vincewu51 in https://github.com/uber/causalml/pull/511
    • Update methodology.rst by @AlkanSte in https://github.com/uber/causalml/pull/518
    • Fix the bug of incorrect result in qini for multiple models by @enzoliao in https://github.com/uber/causalml/pull/520
    • Test get_qini() by @enzoliao in https://github.com/uber/causalml/pull/523
    • Fixed typo in uplift_trees_with_synthetic_data.ipynb by @jroessler in https://github.com/uber/causalml/pull/531
    • Remove Python 3.6 test from workflows by @jeongyoonlee in https://github.com/uber/causalml/pull/535
    • Causal trees update by @alexander-pv in https://github.com/uber/causalml/pull/522
    • Causal trees interpretation example by @alexander-pv in https://github.com/uber/causalml/pull/536

    New Contributors

    • @saiwing-yeung made their first contribution in https://github.com/uber/causalml/pull/488
    • @lixuan12315 made their first contribution in https://github.com/uber/causalml/pull/496
    • @aldenrogers made their first contribution in https://github.com/uber/causalml/pull/508
    • @vincewu51 made their first contribution in https://github.com/uber/causalml/pull/511
    • @AlkanSte made their first contribution in https://github.com/uber/causalml/pull/518
    • @enzoliao made their first contribution in https://github.com/uber/causalml/pull/520
    • @alexander-pv made their first contribution in https://github.com/uber/causalml/pull/522

    Full Changelog: https://github.com/uber/causalml/compare/v0.12.3...v0.13.0

  • v0.12.3(Mar 14, 2022)

    This patch is to release a version without the constraint of Shap which can be used for conda-forge.

    What's Changed

    • Modify the requirement version of Shap by @ppstacy in https://github.com/uber/causalml/pull/483

    Full Changelog: https://github.com/uber/causalml/compare/v0.12.2...v0.12.3

  • v0.12.2(Feb 18, 2022)

    This patch includes three updates by our latest contributors, @tonkolviktor and @heiderich. We also started using black, a Python formatter. Please check out the updated contribution guideline to learn how to use it.

    What's Changed

    • Opens up scipy dependency version range towards newer releases (#441) by @tonkolviktor in https://github.com/uber/causalml/pull/473
    • Merely define preferred backend for joblib instead of hard-coding it by @heiderich in https://github.com/uber/causalml/pull/476
    • Allow parallel prediction and make joblib's backend configurable for UpliftRandomForestClassifier by @heiderich in https://github.com/uber/causalml/pull/477
    • Reformat code using black by @jeongyoonlee in https://github.com/uber/causalml/pull/474

    New Contributors

    • @tonkolviktor made their first contribution in https://github.com/uber/causalml/pull/473
    • @heiderich made their first contribution in https://github.com/uber/causalml/pull/476

    Full Changelog: https://github.com/uber/causalml/compare/v0.12.1...v0.12.2

  • v0.12.1(Feb 5, 2022)

    This patch includes two bug fixes for UpliftRandomForestClassifier as follows:

    • #462 by @paullo0106: Use the correct treatment_idx for fillTree() when applying validation data set
    • #468 by @jeongyoonlee: Switch the joblib backend for UpliftRandomForestClassifier to threading to avoid memory copy across trees
  • v0.12.0(Jan 14, 2022)

    0.12.0 (Jan 2022)

    • CausalML surpassed 637K downloads on PyPI and 2,500 stars on GitHub!
    • We have 4 new community contributors, Luis (@lgmoneda), Ravi (@raviksharma), Louis (@LouisHernandez17) and JackRab (@JackRab). Thanks for the contribution!
    • We refactored and sped up UpliftTreeClassifier/UpliftRandomForestClassifier by 5x with Cython (#422 #440 by @jeongyoonlee)
    • We revamped our API documentation; it now includes the latest methodology, references, installation, notebook examples, and graphs! (#413 by @huigangchen @t-tte @zhenyuz0500 @jeongyoonlee @paullo0106)
    • Our team gave talks at the 2021 Conference on Digital Experimentation @ MIT (CODE@MIT), the Causal Data Science Meeting 2021, and KDD 2021 Tutorials on CausalML introduction and applications. Please take a look if you missed them! The full list of publications and talks can be found here.

    Updates

    • Update documentation on Instrument Variable methods @huigangchen (#447)
    • Add benchmark simulation studies example notebook by @t-tte (#443)
    • Add sample_weight support for R-learner by @paullo0106 (#425)
    • Fix incorrect binning of numeric features in UpliftTreeClassifier by @jeongyoonlee (#420)
    • Update papers, talks, and publication info to README and refs.bib by @zhenyuz0500 (#410 #414 #433)
    • Add instruction for contributing.md doc by @jeongyoonlee (#408)
    • Fix incorrect feature importance calculation logic by @paullo0106 (#406)
    • Add parallel jobs support for NearestNeighbors search with n_jobs parameter by @paullo0106 (#389)
    • Fix bug in simulate_randomized_trial by @jroessler (#385)
    • Add GA pytest workflow by @ppstacy (#380)
  • v0.11(Jul 29, 2021)

    0.11.0 (2021-07-28)

    (sorry for the spam, attempting to correctly update to the right files)

    • CausalML surpassed 2K stars!
    • We have 3 new community contributors, Jannik (@jroessler), Mohamed (@ibraaaa), and Leo (@lleiou). Thanks for the contribution!

    Major Updates

    • Make tensorflow dependency optional and add python 3.9 support by @jeongyoonlee (#343)
    • Add delta-delta-p (ddp) tree inference approach by @jroessler (#327)
    • Add conda env files for Python 3.6, 3.7, and 3.8 by @jeongyoonlee (#324)

    Minor Updates

    • Fix inconsistent feature importance calculation in uplift tree by @paullo0106 (#372)
    • Fix filter method failure with NaNs in the data issue by @manojbalaji1 (#367)
    • Add automatic package publish by @jeongyoonlee (#354)
    • Fix typo in unit_selection optimization by @jeongyoonlee (#347)
    • Fix docs build failure by @jeongyoonlee (#335)
    • Convert pandas inputs to numpy in S/T/R Learners by @jeongyoonlee (#333)
    • Require scikit-learn as a dependency of setup.py by @ibraaaa (#325)
    • Fix AttributeError when passing in Outcome and Effect learner to R-Learner by @paullo0106 (#320)
    • Fix error when there is no positive class for KL Divergence filter by @lleiou (#311)
    • Add versions to cython and numpy in setup.py for requirements.txt accordingly by @maccam912 (#306)
  • v0.10.0(Feb 19, 2021)

    0.10.0 (2021-02-19)

    • CausalML surpassed 235,000 downloads!
    • We have 5 new community contributors, Suraj (@surajiyer), Harsh (@HarshCasper), Manoj (@manojbalaji1), Matthew (@maccam912) and Václav (@vaclavbelak). Thanks for the contribution!

    Major Updates

    • Add Policy learner, DR learner, DRIV learner by @huigangchen (#292)
    • Add wrapper for CEVAE, a deep latent-variable and variational autoencoder based model by @ppstacy (#276)

    Minor Updates

    • Add propensity_learner to R-learner by @jeongyoonlee (#297)
    • Add BaseLearner class for other meta-learners to inherit from without duplicated code by @jeongyoonlee (#295)
    • Fix installation issue for Shap>=0.38.1 by @paullo0106 (#287)
    • Fix import error for sklearn>= 0.24 by @jeongyoonlee (#283)
    • Fix KeyError issue in Filter method for certain dataset by @surajiyer (#281)
    • Fix inconsistent cumlift score calculation of multiple models by @vaclavbelak (#273)
    • Fix duplicate values handling in feature selection method by @manojbalaji1 (#271)
    • Fix the color spectrum of SHAP summary plot for feature interpretations of meta-learners by @paullo0106 (#269)
    • Add IIA and value optimization related documentation by @t-tte (#264)
    • Fix StratifiedKFold arguments for propensity score estimation by @paullo0106 (#262)
    • Refactor the code with string format argument and is to compare object types, and change methods not using bound instance to static methods by @harshcasper (#256, #260)
  • v0.9.0(Oct 23, 2020)

    0.9.0 (2020-10-23)

    • CausalML won the 1st prize at the poster session in UberML'20
    • DoWhy integrated CausalML starting v0.4 (release note)
    • CausalML team welcomes new project leadership, Mert Bay
    • We have 4 new community contributors, Mario Wijaya (@mwijaya3), Harry Zhao (@deeplaunch), Christophe (@ccrndn) and Georg Walther (@waltherg). Thanks for the contribution!

    Major Updates

    • Add feature importance and its visualization to UpliftDecisionTrees and UpliftRF by @yungmsh (#220)
    • Add feature selection example with Filter methods by @paullo0106 (#223)

    Minor Updates

    • Implement propensity model abstraction for common interface by @waltherg (#223)
    • Fix bug in BaseSClassifier and BaseXClassifier by @yungmsh and @ppstacy (#217, #218)
    • Fix parentNodeSummary for UpliftDecisionTrees by @paullo0106 (#238)
    • Add pd.Series for propensity score condition check by @paullo0106 (#242)
    • Fix the uplift random forest prediction output by @ppstacy (#236)
    • Add functions and methods to init for optimization module by @mwijaya3 (#228)
    • Install GitHub Stale App to close inactive issues automatically @jeongyoonlee (#237)
    • Update documentation by @deeplaunch, @ccrndn, @ppstacy(#214, #231, #232)
  • v0.8.0(Oct 21, 2020)

    0.8.0 (2020-07-17)

    CausalML surpassed 100,000 downloads! Thanks for the support.

    Major Updates

    • Add value optimization to optimize by @t-tte (#183)
    • Add counterfactual unit selection to optimize by @t-tte (#184)
    • Add sensitivity analysis to metrics by @ppstacy (#199, #212)
    • Add the iv estimator submodule and add 2SLS model to it by @huigangchen (#201)

    Minor Updates

    • Add GradientBoostedPropensityModel by @yungmsh (#193)
    • Add covariate balance visualization by @yluogit (#200)
    • Fix bug in the X learner propensity model by @ppstacy (#209)
    • Update package dependencies by @jeongyoonlee (#195, #197)
    • Update documentation by @jeongyoonlee, @ppstacy and @yluogit (#181, #202, #205)