Code for Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions

Overview

EMS-COLS-recourse

Initial Code for Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions

Folder structure:

  • data folder contains raw and final preprocessed data, along with the pre-processing script.
  • src folder contains the code for our method.
  • trained_model contains the trained black box model checkpoint.

Setting up the environment

conda create -n rec_gen python=3.8.1
conda activate rec_gen
pip install -r requirements.txt
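
As an optional sanity check (this assumes nothing beyond a standard pip setup), you can confirm the interpreter version and that the installed packages are mutually consistent:

python --version   # should report Python 3.8.1
pip check          # verifies the packages installed from requirements.txt have no broken dependencies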

Steps for running experiments.

Change the current working directory to src:

cd ./src/
  1. Run data_io.py to dump MCMC cost samples.
python ./utils/data_io.py --save_data --data_name adult_binary --dump_negative_data --num_mcmc 1000

python ./utils/data_io.py --save_data --data_name compas_binary --dump_negative_data --num_mcmc 1000
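
If you prefer a single command, the same two dumps can be produced with a small shell loop over the dataset names used above (a convenience sketch, not part of the original scripts):

for data in adult_binary compas_binary; do
    python ./utils/data_io.py --save_data --data_name "$data" --dump_negative_data --num_mcmc 1000
done
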
  2. Run the main experiments on COLS and P-COLS.
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_main --budget 5000
python run.py --data_name compas_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_main --budget 5000

python run.py --data_name adult_binary --num_mcmc 1000 --model pls --num_cfs 10 --project_name exp_main --budget 5000
python run.py --data_name compas_binary --num_mcmc 1000 --model pls --num_cfs 10 --project_name exp_main --budget 5000
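
Equivalently, the four runs above can be scripted as a loop over the two datasets and the two methods (--model ls and --model pls, which presumably correspond to COLS and P-COLS):

for data in adult_binary compas_binary; do
    for model in ls pls; do
        python run.py --data_name "$data" --num_mcmc 1000 --model "$model" --num_cfs 10 --project_name exp_main --budget 5000
    done
done
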
  3. Run the ablation experiments.
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval cost
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval cost_simple
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval proximity
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval sparsity
python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval diversity
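
The same ablation sweep can be expressed as a loop over the --eval settings listed above:

for eval_mode in cost cost_simple proximity sparsity diversity; do
    python run.py --data_name adult_binary --num_mcmc 1000 --model ls --num_cfs 10 --project_name exp_ablation --budget 3000 --eval "$eval_mode"
done
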
  4. Run experiments with varying budgets.
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 500
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 1000
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 2000
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 3000
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 5000
python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget 10000
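
Or, as a single loop over the budget values above:

for budget in 500 1000 2000 3000 5000 10000; do
    python run.py --data_name adult_binary --model ls --num_cfs 10 --num_users 100 --project_name exp_budget --budget "$budget"
done
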
  5. Run experiments with varying numbers of counterfactuals.
python run.py --data_name adult_binary --model model_name --num_cfs 1 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 2 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 3 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 5 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 10 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 20 --num_users 100 --project_name exp_cfs --budget 5000
python run.py --data_name adult_binary --model model_name --num_cfs 30 --num_users 100 --project_name exp_cfs --budget 5000
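
Here model_name is a placeholder; substitute ls or pls as in step 2. For example, the whole sweep with the ls model can be run as a loop:

for num_cfs in 1 2 3 5 10 20 30; do
    python run.py --data_name adult_binary --model ls --num_cfs "$num_cfs" --num_users 100 --project_name exp_cfs --budget 5000
done
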
  6. Run experiments with respect to the number of Monte Carlo samples.
  • Run these commands once for each num_mcmc value of interest; the commands below use 5 as an example.
python ./utils/data_io.py --save_data --data_name adult_binary --dump_negative_data --num_mcmc 5

python run.py --data_name adult_binary --num_mcmc 5 --model model_name --num_cfs 10 --project_name exp_mcmc --budget 5000 --num_users 100
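
A sketch of a full sweep, re-dumping the cost samples and re-running the experiment for each sample count (the list of values and the choice of the ls model are only illustrative; as in step 5, model_name should be ls or pls):

for num_mcmc in 5 10 100 1000; do
    python ./utils/data_io.py --save_data --data_name adult_binary --dump_negative_data --num_mcmc "$num_mcmc"
    python run.py --data_name adult_binary --num_mcmc "$num_mcmc" --model ls --num_cfs 10 --project_name exp_mcmc --budget 5000 --num_users 100
done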

To train a new black-box model

  • Run this command right after preprocessing the data:
python train_model.py --data_name adult --max_epochs 1000 --check_val_every_n_epoch=1 --learning_rate=0.0001
Owner: Prateek Yadav