tf-keras-vis

Neural network visualization toolkit for tf.keras

Note!

We've released v0.7.0! In this release, the gradient calculation of ActivationMaximization has been changed to fix a critical problem. Although the calculation results are now slightly different from past versions, you can avoid the change by using the legacy implementation as follows:

# from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.legacy import ActivationMaximization

In addition to the above, we've also fixed some problems related to Regularizers. The new tf_keras_vis.activation_maximization.regularizers module includes the regularizers with those bugs fixed; as with ActivationMaximization, you can still use the legacy implementation as follows:

# from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D 
from tf_keras_vis.utils.regularizers import Norm, TotalVariation2D

Please see the release note for details. If you face any problem related to this release, please feel free to ask us on the Issues page!

Web documents

https://keisen.github.io/tf-keras-vis-docs/

Overview

tf-keras-vis is a visualization toolkit for debugging tf.keras.Model in TensorFlow 2.0+. Currently supported methods for visualization include:

  • Feature Visualization
  • Class Activation Maps
  • Saliency Maps

tf-keras-vis is designed to be light-weight, flexible, and easy to use. All visualizations share the following features:

  • Support N-dim image inputs; that is, not only 2D pictures but also, for example, 3D images.
  • Support batch-wise processing, so multiple input images can be processed efficiently.
  • Support models that have multiple inputs, multiple outputs, or both.
  • Support mixed-precision models.

And, in ActivationMaximization,

  • Support optimizers built into tf.keras.

Visualizations

Dense Unit

Convolutional Filter

Class Activation Map

The images above are generated by GradCAM++.

Saliency Map

The images above are generated by SmoothGrad.

Usage

ActivationMaximization (Visualizing Convolutional Filter)

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from matplotlib import pyplot as plt
from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.callbacks import Progress
from tf_keras_vis.activation_maximization.input_modifiers import Jitter, Rotate2D
from tf_keras_vis.activation_maximization.regularizers import TotalVariation2D, Norm
from tf_keras_vis.utils.model_modifiers import ExtractIntermediateLayer, ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create the visualization instance.
# All visualization classes accept a model and a model-modifier in their
# constructor. Here, ExtractIntermediateLayer swaps the model output for that
# of `block5_conv3`, and ReplaceToLinear replaces the output activation with
# a linear function.
activation_maximization = \
   ActivationMaximization(VGG16(),
                          model_modifier=[ExtractIntermediateLayer('block5_conv3'),
                                          ReplaceToLinear()],
                          clone=False)

# Use a Score class to specify the visualization target (FILTER_INDEX is a
# placeholder for the filter you want to visualize), and add regularizers or
# input-modifiers as needed.
activations = \
   activation_maximization(CategoricalScore(FILTER_INDEX),
                           steps=200,
                           input_modifiers=[Jitter(jitter=16), Rotate2D(degree=1)],
                           regularizers=[TotalVariation2D(weight=1.0),
                                         Norm(weight=0.3, p=1)],
                           optimizer=tf.keras.optimizers.RMSprop(1.0, 0.999),
                           callbacks=[Progress()])

## Since v0.6.0, calling `astype()` is NOT necessary.
# activations = activations[0].astype(np.uint8)

# Render
plt.imshow(activations[0])

Gradcam++

import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from tf_keras_vis.gradcam_plus_plus import GradcamPlusPlus
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create GradCAM++ object
gradcam = GradcamPlusPlus(YOUR_MODEL_INSTANCE,
                          model_modifier=ReplaceToLinear(),
                          clone=True)

# Generate cam with GradCAM++
cam = gradcam(CategoricalScore(CATEGORICAL_INDEX),
              SEED_INPUT)

## Since v0.6.0, calling `normalize()` is NOT necessary.
# cam = normalize(cam)

plt.imshow(SEED_INPUT_IMAGE)
heatmap = np.uint8(cm.jet(cam[0])[..., :3] * 255)
plt.imshow(heatmap, cmap='jet', alpha=0.5) # overlay
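
Saliency (SmoothGrad)

The Saliency Map shown earlier is produced the same way; here is a minimal SmoothGrad sketch. YOUR_MODEL_INSTANCE, CATEGORICAL_INDEX and SEED_INPUT are placeholders as in the GradCAM++ example, and the smooth_samples/smooth_noise values are illustrative assumptions:

from matplotlib import pyplot as plt
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create Saliency object.
saliency = Saliency(YOUR_MODEL_INSTANCE,
                    model_modifier=ReplaceToLinear(),
                    clone=True)

# Generate a saliency map with SmoothGrad, which averages gradients over
# several noised copies of the input.
saliency_map = saliency(CategoricalScore(CATEGORICAL_INDEX),
                        SEED_INPUT,
                        smooth_samples=20,  # number of noised inputs (assumed)
                        smooth_noise=0.20)  # noise level (assumed)

# Render
plt.imshow(saliency_map[0], cmap='jet')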

Please see the guides below for more details:

Getting Started Guides

[NOTES] If you have ever used keras-vis, you may feel that tf-keras-vis is similar to it. In fact, tf-keras-vis is derived from keras-vis, and the visualization methods both libraries provide are almost the same. But please note that the tf-keras-vis API is NOT compatible with keras-vis.

Requirements

  • Python 3.6-3.9
  • tensorflow>=2.0.4

Installation

  • PyPI
$ pip install tf-keras-vis tensorflow
  • Source (for development)
$ git clone https://github.com/keisen/tf-keras-vis.git
$ cd tf-keras-vis
$ pip install -e .[develop]

Use Cases

  • chitra
    • A Deep Learning Computer Vision library for easy data loading, model building and model interpretation with GradCAM/GradCAM++.

Known Issues

  • With InceptionV3, ActivationMaximization doesn't work well; that is, it may generate meaninglessly blurred images.
  • With cascading models, Gradcam and Gradcam++ don't work well and may raise errors, so we recommend using Faster-ScoreCAM in this case (see the sketch below).
  • channels-first models and data are unsupported.
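
A minimal Faster-ScoreCAM sketch, reusing the YOUR_MODEL_INSTANCE, CATEGORICAL_INDEX and SEED_INPUT placeholders from the Usage section; the max_N value is an illustrative assumption:

from tf_keras_vis.scorecam import Scorecam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create ScoreCAM object.
scorecam = Scorecam(YOUR_MODEL_INSTANCE, model_modifier=ReplaceToLinear())

# Generate a heatmap with Faster-ScoreCAM; max_N limits how many activation
# maps are evaluated, which is what makes it faster than plain ScoreCAM.
cam = scorecam(CategoricalScore(CATEGORICAL_INDEX), SEED_INPUT, max_N=10)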

ToDo

  • Guides
    • Visualizing multiple attention or activation images at once, utilizing the model's batch processing
    • Define various score functions
    • Visualizing attentions with multiple inputs models
    • Visualizing attentions with multiple outputs models
    • Advanced score functions
    • Tuning Activation Maximization
    • Visualizing attentions for N-dim image inputs
  • We plan to add methods such as the following:
    • Deep Dream
    • Style transfer
Comments
  • Release v0.6.0


    • [x] Support mixed-precision on TensorFlow 2.4+ (i.e., the experimental API is NOT supported).
    • [x] Change terminology from Loss to Score
    • [x] Refactor testcases, ActivationMaximization and some utilities
    • [x] Some bugfixes
    • [x] Write docstrings
    • [x] Update example notebook
    • [x] README.md
    • [x] Add tests
    • [x] Update setup.py
    • [x] Write Release note

    Closes #24, #43, #45, #47 and #51

    bug documentation enhancement 
    opened by keisen 36
  • How can I make this work with a custom top layer?


    Hello. I'm new to ML and newer still to this library. I want to use the gradcam on a vgg16 architecture with custom top layers that I created. If my model in this case consists of two separate models, what would I have to change, say, in the example case to make this work?

    opened by Dsanzv 9
  • Guided Backpropagation


    Hi @keisen, thanks for putting this together. Is there a way to generate a guided backpropagation saliency map with a custom function in gradient-modifier?

    enhancement 
    opened by estanley16 8
  • "array type dtype('float16') not supported" in ScoreCAM with mixed_precision

    This example fails when interpolating data deep down in scipy:

    import numpy as np
    import tensorflow.keras as keras
    from tf_keras_vis.scorecam import ScoreCAM
    
    policy = keras.mixed_precision.experimental.Policy("mixed_float16")
    keras.mixed_precision.experimental.set_policy(policy)
    
    ScoreCAM(model := keras.applications.MobileNet())(
        lambda output: [o[0] for o in output], np.empty(model.input.shape[1:])
    )
    
    bug 
    opened by bersbersbers 8
  • Feature request: support mixed precision for GradCam(PlusPlus)


    This short example shows that GradCam(PlusPlus) fails when the network is trained with mixed precision:

    import numpy as np
    import tensorflow.keras as keras
    from tensorflow.keras.mixed_precision import experimental as mixed_precision
    from tf_keras_vis.gradcam import Gradcam, GradcamPlusPlus
    from tf_keras_vis.saliency import Saliency
    
    policy = mixed_precision.Policy("mixed_float16")
    mixed_precision.set_policy(policy)
    
    model = keras.applications.MobileNet()
    input_data = np.empty(model.input.shape[1:])
    
    
    def model_modifier(model):
        model.layers[-1].activation = keras.activations.linear
        return model
    
    
    def loss(output):
        return [o[0] for o in output]
    
    
    saliency = Saliency(model, model_modifier, clone=False)
    
    # works
    saliency(loss, input_data)
    
    # works
    saliency(loss, input_data, smooth_samples=20)
    
    # does not work: "array type dtype('float16') not supported"
    gradcam = Gradcam(model, model_modifier, clone=False)
    gradcam(loss, input_data, penultimate_layer=-1)
    
    # does not work: "array type dtype('float16') not supported"
    gradcampp = GradcamPlusPlus(model, model_modifier, clone=False)
    gradcampp(loss, input_data, penultimate_layer=-1)
    

    It works fine when you comment out mixed_precision.set_policy(policy).

    bug 
    opened by bersbersbers 8
  • How to select an specific output from a Multi Task Learning Model?


    Hello Keisen,

    First, I'd like to thank you for sharing this project. I've already used the former keras-vis, and I found your project exactly when I was developing new code for TF 2.1.

    I built a multi-task model with 3 output layers and I'm not sure how to select each of these outputs before running the saliency and normalize methods.

    Here are the outputs of my model (obtained via the model.outputs attribute). The first and third outputs use binary_crossentropy loss while the second uses MSE loss.

    [<tf.Tensor 'sex_3/Identity:0' shape=(None, 1) dtype=float32>,
     <tf.Tensor 'age_3/Identity:0' shape=(None, 1) dtype=float32>,
     <tf.Tensor 'autism_3/Identity:0' shape=(None, 1) dtype=float32>]
    

    I had success running the code below (from your examples). But I failed when I needed to select a different output and loss than the last one.

    def model_modifier(m):
        m.layers[-1].activation = tf.keras.activations.linear
    saliency = Saliency(model, model_modifier)
    loss = lambda output: K.mean(output[:, 0])
    

    Thanks in advance!

    opened by SergioLeonardoMendes 8
  • Python 3.8 support


    user@host:~/pip_patches> python --version
    Python 3.8.2
    user@host:~/pip_patches> pip install tf-keras-vis==0.2.2
    ERROR: Could not find a version that satisfies the requirement tf-keras-vis==0.2.2 (from versions: 0.0.1, 0.1.0)
    ERROR: No matching distribution found for tf-keras-vis==0.2.2
    
    enhancement 
    opened by bersbersbers 6
  • 'Tensor' object has no attribute 'ndim'


    First, great thanks for this masterpiece! It really saved my life when I found this toolkit. But I got some errors while trying to visualize my model via a saliency map. I think the problem is the format of seed_input. I followed exactly the same way as the tutorial: using one image, processing it with img_to_array, and passing the array as seed_input. But somehow seed_inputs were transformed into a tensor that has no attribute 'ndim'. I tried to transform X into an ndarray with K.eval(X), but so far it didn't work well. Many thanks for all the help!!

    ---> 94 seed_inputs = list(seed_inputs)
         95 total = (np.zeros_like(X[0]) for X in seed_inputs)
         96 for i in range(smooth_samples):
    
    File ~\Anaconda3\lib\site-packages\tf_keras_vis\saliency.py:93, in <genexpr>(.0)
         89 seed_inputs = ((X, smooth_noise * (tf.math.reduce_max(X, axis=axis, keepdims=True) -
         90                                    tf.math.reduce_min(X, axis=axis, keepdims=True)))
         91                for X, axis in seed_inputs)
         92 print(seed_inputs)
    ---> 93 seed_inputs = (X + np.random.normal(0., sigma,X.shape) for X, sigma in seed_inputs)
         94 seed_inputs = list(seed_inputs)
         95 total = (np.zeros_like(X[0]) for X in seed_inputs)
    
    File ~\Anaconda3\lib\site-packages\tf_keras_vis\saliency.py:89, in <genexpr>(.0)
         86 seed_inputs = (tf.reshape(X, (smooth_samples, -1) + tuple(X.shape[1:]))
         87                for X in seed_inputs)
         88 seed_inputs = ((X, tuple(range(K.eval(X).ndim)[2:])) for X in seed_inputs)
    ---> 89 seed_inputs = ((X, smooth_noise * (tf.math.reduce_max(X, axis=axis, keepdims=True) -
         90                                    tf.math.reduce_min(X, axis=axis, keepdims=True)))
         91                for X, axis in seed_inputs)
         92 print(seed_inputs)
         93 seed_inputs = (X + np.random.normal(0., sigma, X.shape) for X, sigma in seed_inputs)
    
    File ~\Anaconda3\lib\site-packages\tf_keras_vis\saliency.py:88, in <genexpr>(.0)
         84 seed_inputs = (tf.tile(X, (smooth_samples, ) + tuple(np.ones(X.ndim - 1, np.int)))
         85                for X in seed_inputs)
         86 seed_inputs = (tf.reshape(X, (smooth_samples, -1) + tuple(X.shape[1:]))
         87                for X in seed_inputs)
    ---> 88 seed_inputs = ((X, tuple(range(X.ndim)[2:])) for X in seed_inputs)
         89 seed_inputs = ((X, smooth_noise * (tf.math.reduce_max(X, axis=axis, keepdims=True) -
         90                                    tf.math.reduce_min(X, axis=axis, keepdims=True)))
         91                for X, axis in seed_inputs)
         92 print(seed_inputs)
    
    File ~\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:446, in Tensor.__getattr__(self, name)
        437 if name in {"T", "astype", "ravel", "transpose", "reshape", "clip", "size",
        438             "tolist", "data"}:
        439   # TODO(wangpeng): Export the enable_numpy_behavior knob
        440   raise AttributeError(
        441       f"{type(self).__name__} object has no attribute '{name}'. " + """
        442     If you are looking for numpy-related methods, please run the following:
        443     from tensorflow.python.ops.numpy_ops import np_config
        444     np_config.enable_numpy_behavior()
        445   """)
    --> 446 self.__getattribute__(name)
    
    AttributeError: 'Tensor' object has no attribute 'ndim'
    
    opened by YIsonP 5
  • Score-CAM is broken


    I tried to run Score-CAM on a custom model and also ran the examples, and it seems to be broken: it takes too much time to compute, most of the time it fails due to memory issues, and when it does finish, the output is just zeros.

    Can you confirm that the problem is not just on my end?

    Cheers.

    opened by miguelCalado 5
  • Graph disconnected: transfer-learning model with custom dense layers


    Hi, I'm getting an error when I try to get the Grad-CAM visualizations for a custom model as given below [image].

    Then when I try to call GradCam I get: ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(None, 224, 224, 3), dtype=float32) at layer "rescaling". The following previous layers were accessed without issue: []

    opened by mohdsinad 5
  • Support for LayerCAM


    Hi @keisen, our paper "LayerCAM: Exploring Hierarchical Class Activation Maps for Localization" was recently accepted by TIP. It can visualize the class activation maps from any CNN layer of an off-the-shelf network. Could you add our method to your popular repository so more people can try it? Our method is a simple modification of Grad-CAM and should be easy to implement. Here are the paper and code. Hope for your reply.

    opened by PengtaoJiang 5
  • GradCAM does not detect correct convolutional layer in multi-input case


    Thank you for this useful visualization package!

    Right now I have a two-input, one-output model as follows [image]

    I am using GradCAM, with the code snippet below [image]. I have stated my penultimate layer to be 'ria-conv', which happens after the concatenation of both networks.

    However, the dimension of cam is 2 - one for each network (15 * 256 * 320 is my image count * image dimensions), as shown in this screenshot [image].

    As far as my understanding goes, since the actual last convolutional layer comes after the concatenation, shouldn't cam be 1 * 15 * 256 * 320? This happens both when I explicitly state the penultimate layer and when I state it as -1.

    opened by marieff587 0
  • Reconstruction Error Score?


    Hi! Thanks for this XAI visualization package!

    I am trying to use the saliency method applied to autoencoders and reconstruction errors. Also, I do not have images, but a feature array for each observation. My goal is to check the "importance" of each input feature to the overall reconstruction, so I was trying to use the smoothgrad method.

    In this case, I do not understand what I should pass as 'score'. The output of the model is just the reconstructed input (n_samples x n_features).

    I also couldn't pass a custom function to the score attribute as you suggest in the documentation. From the example in the repository:

    Instead of using CategoricalScore object, you can also define the function from scratch as follows:

    def score_function(output):
        # The `output` variable refers to the output of the model,
        # so, in this case, output shape is (3, 1000) i.e., (samples, classes).
        return (output[0][1], output[1][294], output[2][413])

    But then, how can you pass the function to the method? It needs to be callable, so it gives "ValueError: Score object must be callable!". Could you add an example explicitly using score_function passed to a saliency() (or any other) instance, instead of a score instance from one of the defined score classes (BinaryScore, CategoricalScore)?

    Thank you in advance!

    opened by inesws 2
  • Fix penultimate layer search condition


    Hi there!

    Currently the last Conv layer is automatically used for GradCAM if not specified otherwise. We noticed that for some models like MobileNetV3 this results in the wrong layer being used (as already mentioned in #61).

    Specific to MobileNetV3, this causes some problems:

    1. The logits layer is implemented using a Conv2D layer causing this layer with shape (None, 1, 1, 1024) to be selected as the penultimate layer. Obviously this will result in incorrect/useless GradCAM images.
    2. Selecting the previous Conv layer manually however does not include some important activations which occasionally causes inverted GradCAM images (see #61).

    We propose to use a different search condition which searches for the last layer with four dimensions and a width and height of more than 1 (sketched below). This will help with both problems mentioned above:

    1. The selected layer will have dimensions greater than 1 by 1
    2. Important activations will still be included since the penultimate layer doesn't necessarily have to be a Conv layer anymore resulting in non-inverted GradCAM images.

    A similar implementation is being used in sicara/tf-explain. Their implementation would however still be affected by the first problem.
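
    A minimal sketch of the proposed search condition; this is a hypothetical helper, not the actual patch, and it assumes every layer exposes a single output_shape:

    def find_penultimate_layer(model):
        # Search backwards for the last layer whose output is 4-D
        # (batch, height, width, channels) with spatial size greater than 1x1.
        for layer in reversed(model.layers):
            shape = layer.output_shape
            if len(shape) == 4 and shape[1] > 1 and shape[2] > 1:
                return layer
        raise ValueError("No suitable penultimate layer found.")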

    I added some quick tests which could be extended in the future.

    Let me know what you think!

    opened by stnkl 2
  • TypeError: '<=' not supported between instances of 'int' and 'str'


    Hello all,

    wondered if anyone has encountered the error:

    TypeError: '<=' not supported between instances of 'int' and 'str'

    %%time
    from tensorflow.keras import backend as K
    from tf_keras_vis.saliency import Saliency
    # from tf_keras_vis.utils import normalize
    
    # Create Saliency object.
    saliency = Saliency(model,
                        model_modifier=replace2linear,
                        clone=True)
    
    # Generate saliency map
    saliency_map = saliency(score, X)
    
    ## Since v0.6.0, calling `normalize()` is NOT necessary.
    #saliency_map = normalize(saliency_map)
    
    # Render
    f, ax = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
    for i, title in enumerate(image_titles):
        ax[i].set_title(title, fontsize=16)
        ax[i].imshow(saliency_map[i], cmap='jet')
        ax[i].axis('off')
    plt.tight_layout()
    plt.show()
    
    TypeError                                 Traceback (most recent call last)
    <timed exec> in <module>
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/saliency.py in __call__(self, score, seed_input, smooth_samples, smooth_noise, keepdims, gradient_modifier, training, normalize_map, unconnected_gradients)
         98             grads = [g / smooth_samples for g in total]
         99         else:
    --> 100             grads = self._get_gradients(seed_inputs, scores, gradient_modifier, training,
        101                                         unconnected_gradients)
        102         # Visualizing
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/saliency.py in _get_gradients(self, seed_inputs, scores, gradient_modifier, training, unconnected_gradients)
        115             outputs = self.model(seed_inputs, training=training)
        116             outputs = listify(outputs)
    --> 117             score_values = self._calculate_scores(outputs, scores)
        118         grads = tape.gradient(score_values,
        119                               seed_inputs,
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/__init__.py in _calculate_scores(self, outputs, score_functions)
         86         score_values = (func(output) for output, func in zip(outputs, score_functions))
         87         score_values = (self._mean_score_value(score) for score in score_values)
    ---> 88         score_values = list(score_values)
         89         return score_values
         90 
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/__init__.py in <genexpr>(.0)
         85     def _calculate_scores(self, outputs, score_functions):
         86         score_values = (func(output) for output, func in zip(outputs, score_functions))
    ---> 87         score_values = (self._mean_score_value(score) for score in score_values)
         88         score_values = list(score_values)
         89         return score_values
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/__init__.py in <genexpr>(.0)
         84 
         85     def _calculate_scores(self, outputs, score_functions):
    ---> 86         score_values = (func(output) for output, func in zip(outputs, score_functions))
         87         score_values = (self._mean_score_value(score) for score in score_values)
         88         score_values = list(score_values)
    
    /usr/local/lib/python3.8/dist-packages/tf_keras_vis/utils/scores.py in __call__(self, output)
         99             raise ValueError("`output` ndim must be 2 or more (batch_size, ..., channels), "
        100                              f"but was {output.ndim}")
    --> 101         if output.shape[-1] <= max(self.indices):
        102             raise ValueError(
        103                 f"Invalid index value. indices: {self.indices}, output.shape: {output.shape}")
    
    TypeError: '<=' not supported between instances of 'int' and 'str'
    
    opened by JackDanHollister 0
Releases(v0.8.4)
  • v0.8.4(Nov 25, 2022)

    Bugfixes

    • Fixes a bug where the seed_input value passed to ActivationMaximization was converted to a NumPy array instead of a tf.Tensor.
    • Fixes a bug where GifGenerator2D didn't cast tf.Tensor to a NumPy array.
    • Fixes a bug where the deprecated np.int was used in the tf_keras_vis.saliency module.
  • v0.8.3(Nov 24, 2022)

  • v0.8.2(Aug 26, 2022)

  • v0.8.1(Feb 5, 2022)

  • v0.8.0(Aug 14, 2021)

    Breaking Changes

    • Remove the normalize_gradient option from tf_keras_vis.activation_maximization.ActivationMaximization.__call__(), tf_keras_vis.activation_maximization.legacy.ActivationMaximization.__call__() and tf_keras_vis.gradcam.Gradcam.__call__().
    • Remove the standardize_cam option from tf_keras_vis.gradcam.Gradcam.__call__(), tf_keras_vis.gradcam_plus_plus.GradcamPlusPlus.__call__() and tf_keras_vis.scorecam.Scorecam.__call__(). Use the normalize_cam option instead.
    • Remove the standardize_saliency option from tf_keras_vis.saliency.Saliency.__call__(). Use the normalize_map option instead.
    • Deprecate tf_keras_vis.utils.standardize(). Use tf_keras_vis.utils.normalize() instead.

    Add features

    • Add support for LayerCAM
    • Add the gradient_modifier option to tf_keras_vis.gradcam.Gradcam.__call__() and tf_keras_vis.gradcam_plus_plus.GradcamPlusPlus.__call__() (see the sketch below)
    • Add __version__ to tf_keras_vis module
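
    A hedged sketch of the gradient_modifier option: it takes a callable that is applied to the computed gradients. The ReLU-style modifier below and the placeholders (the gradcam object, CATEGORICAL_INDEX and SEED_INPUT, as in the Usage section above) are illustrative assumptions:

    import tensorflow as tf
    from tf_keras_vis.utils.scores import CategoricalScore

    # Keep only positive gradients before they are used to weight the
    # activation maps.
    cam = gradcam(CategoricalScore(CATEGORICAL_INDEX),
                  SEED_INPUT,
                  gradient_modifier=lambda grads: tf.nn.relu(grads))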

    Other Changes

    • Add VERSION file to define the current version number.
    • Add MANIFEST.in file to exclude unnecessary files (such as tests) from the package.
  • v0.7.2(Jul 12, 2021)

  • v0.7.1(Jul 2, 2021)

  • v0.7.0(Jun 26, 2021)

    Fixes critical bugs

    ActivationMaximization

    We've fixed a problem of unstable gradient calculation in ActivationMaximization. In addition, because the related implementation adversely affected processing of mixed-precision models, the following problems related to mixed-precision with ActivationMaximization were also fixed:

    • Fixed issues related to mixed-precision
      • The results of full-precision and mixed-precision models differed.
      • When the model had a layer explicitly set to float32 dtype, ActivationMaximization might raise an error.
      • Regularization values calculated by ActivationMaximization could easily become NaN or inf.

    Because the results of the gradient calculation now differ from past versions, we newly provide the tf_keras_vis.activation_maximization.legacy module to keep compatibility. If you have code you adjusted yourself against past versions, you can use the legacy implementation as follows:

    # from tf_keras_vis.activation_maximization import ActivationMaximization
    from tf_keras_vis.activation_maximization.legacy import ActivationMaximization
    

    Please note that the tf_keras_vis.activation_maximization.legacy module above still has the problem of unstable gradient calculation. So, if you don't have any code adjusted by yourself against past versions, we strongly recommend using the tf_keras_vis.activation_maximization module.

    Regularization for ActivationMaximization

    We also found and fixed some bugs of Regularizers below.

    • Fixed issues related to Regularizers
      • TotalVariation2D produced smaller regularization values as the number of samples in seed_input grew.
      • Norm produced smaller regularization values as the spatial size of seed_input grew.

    In addition to the above, we've changed the signature of Regularizer#__call__(). The method now accepts only one seed_input (the legacy one accepts the whole seed_inputs). With this change, the regularizers argument of ActivationMaximization#__call__() now accepts a dictionary containing the Regularizer instances for each model input (see the sketch below).
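
    For example, a hypothetical sketch of the dictionary form for a two-input model; the input names 'input_1' and 'input_2', the weights, and the activation_maximization/score objects are assumptions, not documented defaults:

    activations = activation_maximization(
        score,
        regularizers={'input_1': [TotalVariation2D(weight=1.0), Norm(weight=0.3)],
                      'input_2': [Norm(weight=1.0)]})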

    To keep compatibility, we've newly provided the tf_keras_vis.activation_maximization.regularizers module that includes the improved regularizers, instead of updating the tf_keras_vis.utils.regularizers module. If you have code you implemented or adjusted yourself against past versions, you can still use the legacy implementation as follows:

    # from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D 
    from tf_keras_vis.utils.regularizers import Norm, TotalVariation2D
    

    Please note that the tf_keras_vis.utils.regularizers module still has these bugs, and a lot of warnings will be printed. So, if you do NOT have any code adjusted by yourself against past versions, we strongly recommend using the tf_keras_vis.activation_maximization.regularizers module.

    If you face any problem related to this release, please feel free to ask us on the Issues page.

    Add features and Improvements

    • Add tf_keras_vis.utils.model_modifiers module.
      • To fix issues / #49
      • This module includes ModelModifier, ReplaceToLinear, ExtractIntermediateLayer and GuidedBackpropagation.
      • As a result, the model_modifier argument of tf_keras_vis.ModelVisualization#__init__() now also accepts a tf_keras_vis.utils.model_modifiers.ModelModifier instance or a list of Callable objects and/or ModelModifier instances.
    • Add tf_keras_vis.gradcam_plus_plus module.
      • This module includes GradcamPlusPlus.
    • Add tf_keras_vis.activation_maximization.legacy module.
      • This module includes ActivationMaximization that still has the problem of unstable gradient calculation.
    • Add tf_keras_vis.activation_maximization.input_modifiers module.
      • This module includes Jitter, Rotate and Scale.
    • Add tf_keras_vis.activation_maximization.regularizers module.
      • This module includes TotalVariation2D and Norm that fixed some bugs.
    • Add Scale, a new InputModifier class, to the tf_keras_vis.activation_maximization.input_modifiers module.
    • Add Progress, a new Callback class, to the tf_keras_vis.activation_maximization.callbacks module.
    • Add activation_modifiers argument to ActivationMaximization#__call__().
    • ~~Add a github actions recipe to publish tf-keras-vis to Anaconda.org~~
      • To fix issues / #54
    • Improve Scorecam
      • Fixes incorrect weight calculation (reducing noise).
      • Changes cubic interpolation to linear (10x faster).
      • Applies the softmax function to scores (more stable).
      • Adds validation to check for invalid scores.

    Breaking Changes

    • In all visualization, the score argument now must be a list of tf_keras_vis.utils.scores.Score instances or Callable objects when the model has multiple outputs.
    • Change the default parameters of ActivationMaximization#__call__().
      • Due to fixing critical bugs in ActivationMaximization where the gradient-descent calculation was unstable.
    • Deprecates tf_keras_vis.utils.regularizers module, Use tf_keras_vis.activation_maximization.regularizers module instead.
      • For now, both current and legacy regularizers can be used in ActivationMaximization, but please note that they can't be mixed.
    • Deprecates tf_keras_vis.utils.input_modifiers, Use tf_keras_vis.activation_maximization.input_modifiers module instead.
    • Deprecates tf_keras_vis.activation_maximization.callbacks.PrintLogger, use Progress instead.
    • Add **arguments argument to Callback#on_begin().
      • **arguments is the values passed to ActivationMaximization#__call__() as arguments.
    • Deprecates tf_keras_vis.gradcam.GradcamPlusPlus. Use tf_keras_vis.gradcam_plus_plus.GradcamPlusPlus instead.

    Bugfixes and Other Changes

    • Fixes a bug where Scorecam didn't work correctly with multi-input models.
    • Fixes some bugs when loading input modifiers.
    • Fixes a bug where Callback#on_end() might NOT be called when an error occurred.
    • Improve an error message when max_N is invalid in Scorecam.
    • Improve the input_range argument of ActivationMaximization#__call__() to raise an error when it's invalid.
    • Change docstring style to Google style.
    • Replace str#format() with f-strings.
  • v0.6.2(Jun 3, 2021)

    Improvements

    • tf_keras_vis.utils.input_modifiers.Jitter raises a ValueError with a proper message when the dimension of seed_input is 2.

    Breaking Changes

    • Deprecates tf_keras_vis.utils.input_modifiers.Rotate. Use tf_keras_vis.utils.input_modifiers.Rotate2D instead.

    Other Changes

    • Update classifiers in setup.py
  • v0.6.1(May 26, 2021)

  • v0.6.0(May 22, 2021)

    Change a terminology

    • The Loss terminology is changed to Score. Because visualizations do NOT need to calculate any loss between labels and model outputs, and the calculated values are used as just scores, we thought the latter term more proper than the former.
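
    For illustration, a Score is simply a callable over the model output. A minimal sketch (the class index 20 is an arbitrary assumption):

    from tf_keras_vis.utils.scores import CategoricalScore

    # These two are equivalent ways to score class index 20.
    score = CategoricalScore(20)
    score_fn = lambda output: output[:, 20]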

    Add Features and Improvements

    • Support Python 3.9
    • Support Tensorflow 2.4 and 2.5
    • Support mixed-precision
      • Issues / #43 , #45 and #47
      • only tensorflow 2.4.0+
    • Add unconnected_gradients option to __call__() of ActivationMaximization, Saliency, GradCAM, GradCAM++.
    • Add standardize_cam option to __call__() of GradCAM, GradCAM++ and ScoreCAM.
    • Add normalize_saliency option to __call__() of Saliency.

    Breaking Changes

    • In all visualization class constructors, the model passed as an argument is NOT cloned when model_modifier is None.
      • issues / #51
    • Deprecates and disables the normalize_gradient option in ActivationMaximization and GradCAM.
    • Deprecates tf_keras_vis.utils.callback module. Use tf_keras_vis.activation_maximization.callbacks module instead.
      • Deprecates and Rename Print to PrintLogger.
      • Deprecates and Rename GifGenerator to GifGenerator2D.
    • Deprecates tf_keras_vis.utils.regularizers.TotalVariation. Use tf_keras_vis.utils.regularizers.TotalVariation2D instead.
    • Deprecates tf_keras_vis.utils.regularizers.L2Norm. Use tf_keras_vis.utils.regularizers.Norm instead.
    • Deprecates and renames tf_keras_vis.utils.normalize to tf_keras_vis.utils.standardize.
    • You no longer need to use tf_keras_vis.utils.normalize to visualize CAM or Saliency; use the standardize_cam and standardize_saliency options instead, respectively.
    • You no longer need to cast activations maximized by ActivationMaximization before visualizing them.

    BugFix and Other Changes

    • Fixes a problem in the Rotate input-modifier where it couldn't work correctly when input tensors are not 2D images.
    • Add a test utility and testcases.
    • Update dockerfiles and example notebooks.

    Known Issues

    • With a mixed-precision model, regularization values calculated by ActivationMaximization may be NaN.
    • With a mixed-precision model that has a layer explicitly set to float32 dtype, ActivationMaximization may raise an error.
  • v0.5.5(Dec 15, 2020)

    Bugfix

    • Fix a problem where GradCAM and GradCAM++ were NOT working correctly with mixed precision. (See issues / #41 for details.)

    Other Changes

    • Support tensorflow 2.4.0.
    • Python 3.5 is no longer supported (but it can be used if tensorflow<2.4.0).
  • v0.5.4(Dec 4, 2020)

    Bugfix

    • Fix a problem where the normalize_gradient option of ActivationMaximization and GradCAM was not working correctly. (See issues / #38 for details.)
  • v0.5.3(Aug 11, 2020)

  • v0.5.2(Jul 14, 2020)

  • v0.5.0(Jul 12, 2020)

    Major Feature

    • Add new methods (ScoreCAM and Faster-ScoreCAM)

    Breaking Changes

    • tf_keras_vis.utils.print_gpus() was deleted.
    • tf_keras_vis.utils.losses.SmoothedLoss was deleted.

    BugFix and Other Changes

    • Fixes a problem where ActivationMaximization couldn't calculate and normalize losses correctly with multiple samples.
    • Fixes a problem where the output of ActivationMaximization was restricted when input_ranges contained None.
    • Improve GradCAM to raise a ValueError when the input shape doesn't have a channels dimension.
    • Fixes a problem where GradCAM++ couldn't calculate scores correctly with multiple samples.
    • Add tf_keras_vis.utils.num_of_gpus() instead of tf_keras_vis.utils.print_gpus().
  • v0.4.0(Jun 26, 2020)

    Major Feature

    • Add the GradCAM++ algorithm, implemented as the GradcamPlusPlus class

    Bugfixes

    • The noise of SmoothGrad was too strong.
    • Rotation in the InputModifier was applied across the channels dimension.
  • v0.3.3(Jun 22, 2020)

    Improvements

    • Refactoring and Vectorizing some processing in Gradcam and Saliency
    • Remove .travis.yml because there is Github Action as CI tools
  • v0.3.2(Jun 19, 2020)

    This release includes the patch for Issue #15, which covers serious bugs, so we highly recommend updating tf-keras-vis.

    BugFix and Other Changes

    • Fixes a bug where tf_keras_vis.utils.normalize() couldn't do normalization batch-wise.
    • Add an expand_cam argument to GradCAM#__call__() for returning non-interpolated cam values.
    • In setup.py, rename the development extras_require to develop.
  • v0.3.1(May 23, 2020)

  • v0.3.0(May 23, 2020)

    Bug Fixes

    • Fixes a lot of bugs in ActivationMaximization and Gradcam

    Breaking Changes

    • tf_keras_vis.activation_maximization.ActivationMaximization.__call__()'s regularizers argument no longer accepts dict object.
    • Rename tf_keras_vis.gradcam.Gradcam.__call__()'s seek_penultimate_layer argument to seek_penultimate_conv_layer.
    • Rename tf_keras_vis.utils.listify's empty_list_if_none argument to return_empty_list_if_none.
  • v0.2.5(May 8, 2020)

  • v0.2.4(May 8, 2020)

  • v0.2.2(May 7, 2020)

  • v0.2.1(Jan 31, 2020)

    Documentation fixes and other changes

    • Fixes docker command examples in README.md
    • Remove unnecessary comments in source

    [!NOTE] This release is for updating the description page on PyPI, so it does NOT contain any changes to the library's behavior.

  • v0.2.0(Jan 29, 2020)

    Bug Fixes and Other Changes

    • In Saliency and Gradcam, support seed_input that doesn't have a samples (batch) dimension.
    • Fixes a lot of bugs.
    • Dockerfiles changed to use the tensorflow image as base.
    • Added tests and .travis.yml.

    Breaking Changes

    • tf_keras_vis.utils.losses.SmoothingLoss class renamed to tf_keras_vis.utils.losses.SmoothedLoss.
Owner
Yasuhiro Kubota
Software engineering, Program language, WebRTC, Machine Learning