CONDitionals for Ordinal Regression and classification in TensorFlow

Overview

CONDOR ordinal regression in TensorFlow Keras


TensorFlow Keras implementation of CONDOR ordinal regression (aka ordinal classification) by Garrett Jenkinson et al. (2021).

CONDOR is compatible with any state-of-the-art deep neural network architecture, requiring only modification of the output layer, the labels, and the loss function. Read our full documentation to learn more.

We have also implemented CONDOR for PyTorch.

This package includes:

  • Ordinal TensorFlow loss function: CondorOrdinalCrossEntropy
  • Ordinal TensorFlow error metric: OrdinalMeanAbsoluteError
  • Ordinal TensorFlow error metric: OrdinalEarthMoversDistance
  • Ordinal TensorFlow sparse loss function: CondorSparseOrdinalCrossEntropy
  • Ordinal TensorFlow sparse error metric: SparseOrdinalMeanAbsoluteError
  • Ordinal TensorFlow sparse error metric: SparseOrdinalEarthMoversDistance
  • Ordinal TensorFlow activation function: ordinal_softmax
  • Ordinal sklearn label encoder: CondorOrdinalEncoder

Installation

Install the stable version via pip:

pip install condor-tensorflow

Alternatively, install the most recent code from GitHub via pip:

pip install git+https://github.com/GarrettJenkinson/condor_tensorflow/

condor_tensorflow should now be available for use as a Python library.

Docker container

As an alternative to the above, we provide a convenient Dockerfile that builds a container with condor_tensorflow and all of its dependencies (Python 3.6+, TensorFlow 2.2+, sklearn, numpy). It can be used as follows:

# Clone this git repository
git clone https://github.com/GarrettJenkinson/condor_tensorflow/

# Change directory to the cloned repository root
cd condor_tensorflow

# Create a docker image
docker build -t cpu_tensorflow -f cpu.Dockerfile ./

# Run the image to serve a Jupyter notebook
docker run -it -p 8888:8888 --rm cpu_tensorflow

# Run bash inside the container (the Python there has the required dependencies available)
docker run -u $(id -u):$(id -g) -it -p 8888:8888 --rm cpu_tensorflow bash

Assuming a GPU-enabled machine with NVIDIA drivers installed, replace cpu above with gpu.
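
For instance, the GPU build and run would look like the following sketch (it assumes the repository ships a gpu.Dockerfile alongside cpu.Dockerfile, as implied above):

# Create a docker image for GPU machines
docker build -t gpu_tensorflow -f gpu.Dockerfile ./

# Run the image (depending on your Docker/NVIDIA setup you may also need --gpus all)
docker run -it -p 8888:8888 --rm gpu_tensorflow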

Example

This is a quick example to show the basic model-implementation syntax.
The example assumes the existence of input data (variable 'X') and ordinal labels (variable 'labels').

import tensorflow as tf
import condor_tensorflow as condor
NUM_CLASSES = 5
# Ordinal 'labels' variable has 5 labels, 0 through 4.
enc_labs = condor.CondorOrdinalEncoder(nclasses=NUM_CLASSES).fit_transform(labels)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, activation = "relu"))
model.add(tf.keras.layers.Dense(NUM_CLASSES-1)) # Note the "-1"
model.compile(loss = condor.CondorOrdinalCrossEntropy(),
              metrics = [condor.OrdinalMeanAbsoluteError()])
model.fit(x = X, y = enc_labs)
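
Once trained, the model's raw outputs are the NUM_CLASSES-1 cumulative logits. The snippet below is a minimal sketch (not part of the original example) of turning those logits into class probabilities and hard ordinal labels; it assumes condor.ordinal_softmax maps the NUM_CLASSES-1 logits to NUM_CLASSES class probabilities.

logits = model.predict(X)               # shape (n, NUM_CLASSES - 1): cumulative logits
probs = condor.ordinal_softmax(logits)  # assumed shape (n, NUM_CLASSES): class probabilities
pred_labels = tf.argmax(probs, axis=1)  # predicted ordinal label, 0 through 4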

See this colab notebook for extended examples of ordinal regression with MNIST and Amazon reviews (universal sentence encoder).

Please post any issues to the issue queue.

Acknowledgments: Many thanks to the CORAL ordinal authors and the CORAL pytorch authors whose repos provided a roadmap for this codebase.

References

Jenkinson, Khezeli, Oliver, Kalantari, Klee. Universally rank consistent ordinal regression in neural networks, arXiv:2110.07470, 2021.

Comments
  • providing weighted metric  causes error

    providing weighted metric causes error

    example code:

    compileOptions = {
        'optimizer': tf.keras.optimizers.Adam(learning_rate=5e-4),
        'loss': condor.CondorOrdinalCrossEntropy(),
        'metrics': [
            condor.OrdinalEarthMoversDistance(name='condorErrOrdinalMoversDist'),
            condor.OrdinalMeanAbsoluteError(name='ordinalMAbsErr')
        ],
        'weighted_metrics': [
            condor.OrdinalEarthMoversDistance(name='condorErrOrdinalMoversDist'),
            condor.OrdinalMeanAbsoluteError(name='ordinalMAbsErr')
        ]
    }
    
    model.compile(**compileOptions)
    model.fit(x=X_train,y=Y_train,batch_size=32,epochs=100,validation_data=(x_val, y_val, val_sample_weights), sample_weight=sampleweight_train)
    
    

    would generate the following error:

    
        File "/usr/local/lib/python3.7/dist-packages/condor_tensorflow/metrics.py", line 24, in update_state  *
            if sample_weight:
    
        ValueError: condition of if statement expected to be `tf.bool` scalar, got Tensor("ExpandDims_1:0", shape=(None, 1), dtype=float32); to use as boolean Tensor, use `tf.cast`; to check for None, use `is not None`
    

    If I don't provide weighted_metrics in the model.compile options but still use the sample_weight=sampleweight_train argument in model.fit, no errors show up.
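
    Going by the hint in the error message itself, I suspect the check in condor_tensorflow/metrics.py should test for None rather than tensor truthiness, roughly like this (just my guess from the traceback, not a verified patch):

        # hypothetical change inside update_state
        if sample_weight is not None:  # instead of `if sample_weight:`
            ...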

    Thank you!

    enhancement 
    opened by tingjhenjiang 7
  • loss reduction support

    loss reduction support

    When I try to do distributed training, including training on a Google Colab TPU, errors such as the one shown below occur:

    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
        528     self._self_setattr_tracking = False  # pylint: disable=protected-access
        529     try:
    --> 530       result = method(self, *args, **kwargs)
        531     finally:
        532       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
        434           targets=self._targets,
        435           skip_target_masks=self._prepare_skip_target_masks(),
    --> 436           masks=self._prepare_output_masks())
        437 
        438       # Prepare sample weight modes. List with the same length as model outputs.
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in _handle_metrics(self, outputs, targets, skip_target_masks, sample_weights, masks, return_weighted_metrics, return_weighted_and_unweighted_metrics)
       1962           metric_results.extend(
       1963               self._handle_per_output_metrics(self._per_output_metrics[i],
    -> 1964                                               target, output, output_mask))
       1965         if return_weighted_and_unweighted_metrics or return_weighted_metrics:
       1966           metric_results.extend(
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py in _handle_per_output_metrics(self, metrics_dict, y_true, y_pred, mask, weights)
       1913       with backend.name_scope(metric_name):
       1914         metric_result = training_utils_v1.call_metric_function(
    -> 1915             metric_fn, y_true, y_pred, weights=weights, mask=mask)
       1916         metric_results.append(metric_result)
       1917     return metric_results
    
    /usr/local/lib/python3.7/dist-packages/keras/engine/training_utils_v1.py in call_metric_function(metric_fn, y_true, y_pred, weights, mask)
       1175 
       1176   if y_pred is not None:
    -> 1177     return metric_fn(y_true, y_pred, sample_weight=weights)
       1178   # `Mean` metric only takes a single value.
       1179   return metric_fn(y_true, sample_weight=weights)
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in __call__(self, *args, **kwargs)
        235     from keras.distribute import distributed_training_utils  # pylint:disable=g-import-not-at-top
        236     return distributed_training_utils.call_replica_local_fn(
    --> 237         replica_local_fn, *args, **kwargs)
        238 
        239   def __str__(self):
    
    /usr/local/lib/python3.7/dist-packages/keras/distribute/distributed_training_utils.py in call_replica_local_fn(fn, *args, **kwargs)
         58     with strategy.scope():
         59       return strategy.extended.call_for_each_replica(fn, args, kwargs)
    ---> 60   return fn(*args, **kwargs)
         61 
         62 
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in replica_local_fn(*args, **kwargs)
        215         update_op = None
        216       else:
    --> 217         update_op = self.update_state(*args, **kwargs)  # pylint: disable=not-callable
        218       update_ops = []
        219       if update_op is not None:
    
    /usr/local/lib/python3.7/dist-packages/keras/utils/metrics_utils.py in decorated(metric_obj, *args, **kwargs)
         71 
         72     with tf_utils.graph_context_for_symbolic_tensors(*args, **kwargs):
    ---> 73       update_op = update_state_fn(*args, **kwargs)
         74     if update_op is not None:  # update_op will be None in eager execution.
         75       metric_obj.add_update(update_op)
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in update_state_fn(*args, **kwargs)
        175         control_status = tf.__internal__.autograph.control_status_ctx()
        176         ag_update_state = tf.__internal__.autograph.tf_convert(obj_update_state, control_status)
    --> 177         return ag_update_state(*args, **kwargs)
        178     else:
        179       if isinstance(obj.update_state, tf.__internal__.function.Function):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        694       try:
        695         with conversion_ctx:
    --> 696           return converted_call(f, args, kwargs, options=options)
        697       except Exception as e:  # pylint:disable=broad-except
        698         if hasattr(e, 'ag_error_metadata'):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
        381 
        382   if not options.user_requested and conversion.is_allowlisted(f):
    --> 383     return _call_unconverted(f, args, kwargs, options)
        384 
        385   # internal_convert_user_code is for example turned off when issuing a dynamic
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
        462 
        463   if kwargs is not None:
    --> 464     return f(*args, **kwargs)
        465   return f(*args)
        466 
    
    /usr/local/lib/python3.7/dist-packages/keras/metrics.py in update_state(self, y_true, y_pred, sample_weight)
        723 
        724     ag_fn = tf.__internal__.autograph.tf_convert(self._fn, tf.__internal__.autograph.control_status_ctx())
    --> 725     matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
        726     return super(MeanMetricWrapper, self).update_state(
        727         matches, sample_weight=sample_weight)
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        694       try:
        695         with conversion_ctx:
    --> 696           return converted_call(f, args, kwargs, options=options)
        697       except Exception as e:  # pylint:disable=broad-except
        698         if hasattr(e, 'ag_error_metadata'):
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
        381 
        382   if not options.user_requested and conversion.is_allowlisted(f):
    --> 383     return _call_unconverted(f, args, kwargs, options)
        384 
        385   # internal_convert_user_code is for example turned off when issuing a dynamic
    
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
        462 
        463   if kwargs is not None:
    --> 464     return f(*args, **kwargs)
        465   return f(*args)
        466 
    
    /usr/local/lib/python3.7/dist-packages/keras/losses.py in __call__(self, y_true, y_pred, sample_weight)
        141       losses = call_fn(y_true, y_pred)
        142       return losses_utils.compute_weighted_loss(
    --> 143           losses, sample_weight, reduction=self._get_reduction())
        144 
        145   @classmethod
    
    /usr/local/lib/python3.7/dist-packages/keras/losses.py in _get_reduction(self)
        182          self.reduction == losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE)):
        183       raise ValueError(
    --> 184           'Please use `tf.keras.losses.Reduction.SUM` or '
        185           '`tf.keras.losses.Reduction.NONE` for loss reduction when losses are '
        186           'used with `tf.distribute.Strategy` outside of the built-in training '
    
    ValueError: Please use `tf.keras.losses.Reduction.SUM` or `tf.keras.losses.Reduction.NONE` for loss reduction when losses are used with `tf.distribute.Strategy` outside of the built-in training loops. You can implement `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` using global batch size like:
    
    with strategy.scope():
        loss_obj = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
        loss = tf.reduce_sum(loss_obj(labels, predictions)) * (1. / global_batch_size)
    Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details.
    

    It seems that support for loss reduction has not been implemented. It may be a little tricky, but it would be nice if you could add this enhancement.

    Thank you!

    enhancement 
    opened by tingjhenjiang 3
  • Importance weights.

    Importance weights.

    I had a question about the importance weights code below that was in one of the tutorial docs.

    Importance weights customization

    A quick example to show how the importance weights can be customized:

    model = create_model(num_classes = NUM_CLASSES)
    model.summary()

    # We have num_classes - 1 outputs (cumulative logits), so there are 9 elements
    # in the importance vector to customize.
    importance_weights = [1., 1., 0.5, 0.5, 0.5, 1., 1., 0.1, 0.1]
    loss_fn = condor.SparseCondorOrdinalCrossEntropy(importance_weights = importance_weights)
    model.compile(tf.keras.optimizers.Adam(lr = learning_rate), loss = loss_fn)
    history = model.fit(dataset, epochs = num_epochs)
    

    My problem:

    I have 5 classes, with underrepresentation of, say, the first and last class. I want to use weights to assign higher importance to the underrepresented classes. In a dense layer with n(classes) == n(output_layers), the vector would look like:

    [1,0.5,0.5,0.5,1]

    With CONDOR, using num_classes - 1 output units, is it still possible to assign higher weights to underrepresented classes?

    I don't understand how to relate the weights on the N-1 outputs to the original weights where n(classes) == n(output_layers).

    Any feedback is appreciated.

    opened by jake-foxy 2
  • activation function at last layer

    activation function at last layer

    Hello, I have a dataset in which the labels are (0, 1, 2, 3), i.e. the number of classes in Y is 4.

    Method 1:

    Using condor.CondorOrdinalEncoder(nclasses=4).fit_transform(labels) to transform the labels into an array of shape (n, 3), e.g. [ [0,0,1], [1,0,0] ], as the model's prediction targets. The last layer is tf.keras.layers.Dense(units=4-1), per the readme; with this design the default activation of the last layer is None/linear (f(x) = x), so the model outputs plain logits. Should I keep the model outputting plain logits (no activation function)?

    Method 2:

    If I use tf.keras.layers.Dense(units=4-2, activation=condor.ordinal_softmax) as the last layer together with label data of shape (n, 3), would that be fine? (The condor.ordinal_softmax function would increase the number of dimensions.)

    Method 3: Or should I use tf.keras.layers.Dense(units=4-1, activation=condor.ordinal_softmax) as the last layer together with label data of shape (n, 4)?

    Which method is better? Thank you!

    opened by tingjhenjiang 2
  • Update labelencoder.py

    Update labelencoder.py

    When fitting data with nclass=0:

    1. self.feature_names_in_ would lose its functionality (from the previous commit).
    2. Also, using sklearn.compose.ColumnTransformer to transform multiple columns with CondorOrdinalEncoder at once causes self.nclass to change on every transformation, so the transformation fails; it is therefore necessary to differentiate these cases.
    opened by tingjhenjiang 1
  • Update labelencoder.py: add get_feature_names_out method

    Update labelencoder.py: add get_feature_names_out method

    When I try to integrate sklearn.compose.ColumnTransformer and sklearn.pipeline with the condor encoder, I find it difficult and errors occur due to the lack of support. Therefore I added support for the get_feature_names_out method, which follows sklearn's conventions.

    opened by tingjhenjiang 1
Releases (v1.0.1)