
Overview

[Logo: Delve — Deep Live Visualization and Evaluation]

[Badges: PyPI version · Tests · License: MIT · DOI]

Delve is a Python package for analyzing the inference dynamics of your PyTorch model.

[Animation: Delve integration with the TensorFlow playground]

Use Delve if you need a lightweight PyTorch extension that:

  • Gives you insight into the inference dynamics of your architecture
  • Allows you to optimize and adjust neural network models to your dataset without much trial and error
  • Allows you to analyze the eigenspaces of your data at different stages of inference
  • Provides you with basic tooling for experiment logging

Motivation

Designing a deep neural network is a trial-and-error-heavy process that mostly revolves around comparing performance metrics of different runs. One of the key issues with this development process is that metric results do not easily propagate back into concrete design improvements. Delve provides you with spectral analysis tools that let you investigate the inference dynamics evolving in the model during training. This allows you to spot underutilized and unused layers, mismatches between object size and neural architecture, and other inefficiencies. These observations can be propagated back directly into design changes in the architecture even before the model has fully converged, allowing for a quicker and more guided design process.

Installation

pip install delve

Using Layer Saturation to improve model performance

The saturation metric is the core feature of Delve. By default, saturation is a value between 0 and 1.0 computed for any convolutional, LSTM, or dense layer in the network. Saturation describes the percentage of eigendirections required to explain 99% of the variance. Simply speaking, it tells you how much your data is "filling up" the individual layers inside your model.
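
To make this concrete, below is a minimal sketch of the idea, not Delve's actual implementation (Delve accumulates covariance statistics during training, as described in its publications): compute the covariance matrix of a layer's activations and count the fraction of eigendirections needed to reach the 99% variance threshold.

import torch

def saturation(activations: torch.Tensor, delta: float = 0.99) -> float:
    # activations: (num_samples, num_features), e.g. flattened layer outputs
    centered = activations - activations.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (activations.shape[0] - 1)
    eigvals = torch.linalg.eigvalsh(cov).flip(0)   # eigenvalues, descending
    explained = torch.cumsum(eigvals, dim=0) / eigvals.sum()
    k = int((explained < delta).sum().item()) + 1  # dims needed for 99% variance
    return k / activations.shape[1]

# toy example: rank-8 data embedded in 64 feature dimensions
feats = torch.randn(512, 8) @ torch.randn(8, 64)
print(saturation(feats))                           # at most 8/64 = 0.125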

In the image below you can see how saturation portrays inefficiencies in your neural network. The depicted model is ResNet18 trained on 32-pixel images, which is far too small for a model with a receptive field exceeding 400 pixels in its final layers.

[Figure: resnet.PNG — layer-wise saturation and probe performance of ResNet18 trained on 32-pixel images]

To visualize what this poorly chosen input resolution does to the inference, we trained logistic regressions on the output of every layer to solve the same task as the model (see the sketch below). You can clearly see that only the first half of the model (at best) improves the intermediate solutions of our logistic regression "probes". The layers after this point contribute nothing to the quality of the prediction! You can also see that saturation is extremely low for these layers!
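
The probe idea looks roughly like the following sketch (an illustration under assumptions, not the exact setup behind the figure: it assumes scikit-learn is available, uses random stand-in data, and picks layer2 of a torchvision ResNet18 arbitrarily):

import torch
from sklearn.linear_model import LogisticRegression
from torchvision.models import resnet18

model = resnet18(num_classes=10).eval()
features = {}

# capture the output of one intermediate layer with a forward hook
model.layer2.register_forward_hook(
    lambda module, inputs, output: features.update(feat=output.flatten(1).detach())
)

images = torch.randn(256, 3, 32, 32)   # stand-in for a batch of CIFAR10 images
labels = torch.randint(0, 10, (256,))  # stand-in labels
with torch.no_grad():
    model(images)

# fit a linear "probe" on the frozen intermediate representation
probe = LogisticRegression(max_iter=1000).fit(features["feat"].numpy(), labels.numpy())
print("probe accuracy:", probe.score(features["feat"].numpy(), labels.numpy()))

A layer whose probe does not score better than the previous layer's probe is not adding useful information for the task.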

We call this a tail, and it can be removed either by increasing the input resolution or (more economically) by reducing the receptive field size to match the object size of your dataset.

[Figure: resnetBetter.PNG — layer-wise saturation of ResNet18 after removing the first two downsampling layers]

We can do this by removing the first two downsampling layers, which quarters the growth of the receptive field of the network. This not only reduces the number of parameters but also makes better use of the available parameters, by letting more layers contribute effectively!
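
For torchvision's ResNet18, one common way to do this looks like the sketch below (an assumption on our part: it treats the stride-2 stem convolution and the stride-2 max-pooling layer as the two downsampling layers to remove; together they account for a factor of 4 in cumulative stride, hence the quartered receptive field growth):

import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)
# replace the stride-2, 7x7 stem convolution with a stride-1, 3x3 convolution
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# drop the stride-2 max-pooling layer entirely
model.maxpool = nn.Identity()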

For more details, check our publications on this topic.

Demo

import torch
from delve import CheckLayerSat
from torch.cuda import is_available
from torch.nn import CrossEntropyLoss
from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor, Compose
from torch.utils.data.dataloader import DataLoader
from torch.optim import Adam
from torchvision.models.vgg import vgg16

from tqdm import tqdm

if __name__ == "__main__":

    # set up compute device
    device = "cuda:0" if is_available() else "cpu"

    # Get some data
    train_data = CIFAR10(root="./tmp", train=True,
                         download=True, transform=Compose([ToTensor()]))
    test_data = CIFAR10(root="./tmp", train=False, download=True, transform=Compose([ToTensor()]))

    train_loader = DataLoader(train_data, batch_size=1024,
                              shuffle=True, num_workers=6,
                              pin_memory=True)
    test_loader = DataLoader(test_data, batch_size=1024,
                             shuffle=False, num_workers=6,
                             pin_memory=True)

    # instantiate model
    model = vgg16(num_classes=10).to(device)

    # instantiate optimizer and loss
    optimizer = Adam(params=model.parameters())
    criterion = CrossEntropyLoss().to(device)

    # initialize delve
    tracker = CheckLayerSat("my_experiment", save_to="plotcsv", modules=model, device=device)

    # begin training
    for epoch in range(10):
        model.train()
        for (images, labels) in tqdm(train_loader):
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad(set_to_none=True)
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

        total = 0
        test_loss = 0
        correct = 0
        model.eval()
        # evaluation does not need gradients
        with torch.no_grad():
            for (images, labels) in tqdm(test_loader):
                images, labels = images.to(device), labels.to(device)
                outputs = model(images)
                loss = criterion(outputs, labels)
                _, predicted = torch.max(outputs, 1)

                total += labels.size(0)
                correct += torch.sum(predicted == labels).item()
                test_loss += loss.item()

        # add some additional metrics we want to keep track of
        tracker.add_scalar("accuracy", correct / total)
        tracker.add_scalar("loss", test_loss / total)

        # add saturation to the mix
        tracker.add_saturations()

    # close the tracker to finish training
    tracker.close()

Why this name, Delve?

delve (verb):

  • reach inside a receptacle and search for something
  • to carry on intensive and thorough research for data, information, or the like
Comments
  • Refactor covariance matrix calculations:

    • use only the most current activation
    • change default sampling rate from B
    • flatten B, H, W for Conv layers instead of median

    Fix calculation of number of eigval by argmax.

    Change the CIFAR10 training schedule for faster convergence (the loss seemed to barely drop at all when training example_deep.py):

    • normalization function
    • batch size
    • learning rate
    • h2 sizes (due to different saturation calculations)
    bug enhancement 
    opened by mmarcinkiewicz 11
  • Idiomatic 1.0 code

    I see that there are quite a few constructions which are obsolete in PyTorch 0.4+.

    For example: Variable. Plus, it seems that .to(device) is the preferred method for transfer to GPU (or not, if not available).

    opened by stared 5
  • Fully convolutional AutoEncoder

    Hello,

    I have developed an AutoEncoder which is fully convolutional, and I wanted to check the utilization of the convolutional layers in it (no dense layers), but I am not able to do it with this module, even though it is written that conv layers are supported.

    opened by dawrym 4
  • TypeError with pytest

    Running py.test,

    https://github.com/delve-team/delve/blob/6a2b594eb9ce43c38c9f94be8781ea8c57610de2/delve/writers.py#L469 returns a TypeError:

    TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''.

    Contents of df.values[0]:

    array([list([(tensor(-1.5482e-08, dtype=torch.float64), tensor(-1.3955e-09, dtype=torch.float64)), (tensor(-2.5362e-08, dtype=torch.float64), tensor(-2.5179e-09, dtype=torch.float64)), (tensor(-3.1511e-08, dtype=torch.float64), tensor(-3.6894e-09, dtype=torch.float64)), (tensor(-3.5553e-08, dtype=torch.float64), tensor(-4.1750e-09, dtype=torch.float64)), (tensor(-3.8271e-08, dtype=torch.float64), tensor(-4.4061e-09, dtype=torch.float64)), (tensor(-3.7972e-08, dtype=torch.float64), tensor(-2.7664e-09, dtype=torch.float64)), (tensor(-3.7489e-08, dtype=torch.float64), tensor(-1.7852e-09, dtype=torch.float64)), (tensor(-3.7178e-08, dtype=torch.float64), tensor(-1.3027e-09, dtype=torch.float64))])],
          dtype=object)
    
    bug 
    opened by justinshenk 2
  • delve outdated examples

    Traceback (most recent call last):
      File "example.py", line 39, in <module>
        "regression/h{}".format(h), "csv", model, device=device, reset_covariance=True,
      File "Z:\delve\delve\torchcallback.py", line 193, in __init__
        self.timeseries_method = timeseries_method
    NameError: name 'timeseries_method' is not defined

    opened by Saran-nns 2
  • Does it work with submodules?

    Typically I use modules within nn.Sequential or custom-defined modules.

    class TwoLayerNet(torch.nn.Module):
        def __init__(self, D_in, H, D_out):
            super(TwoLayerNet, self).__init__()
            self.fc = torch.nn.Sequential(
                torch.nn.Linear(D_in, H),
                torch.nn.Linear(H, D_out)
            )
    
        def forward(self, x):
            return self.fc(x)
    

    and then layers = model.parameters(). However, I get an error:

    Traceback (most recent call last):
      File "example_submodule.py", line 43, in <module>
        stats = CheckLayerSat('regression/h{}'.format(h), layers)
      File "/Users/pmigdal/not_my_repos/delve/delve/main.py", line 50, in __init__
        self.layers = self._get_layers(modules)
      File "/Users/pmigdal/not_my_repos/delve/delve/main.py", line 167, in _get_layers
        for name in modules.state_dict().keys():
    AttributeError: 'generator' object has no attribute 'state_dict'
    

    (for full code example, see: https://gist.github.com/stared/b598c03ade397baf3fa03c52bd79e90d)

    Does it work with submodules?

    opened by stared 2
  • [JOSS review] Doc nitpicks

    Things I noticed while reading the docs:

    • Spurious indices and tables link on the saturation page.
    • If I understand correctly then CheckLayerSat is the only way your users should interact with the library. In this case there's no need to include anything else in the API reference. Just focus on the essential API and exclude internal objects.
    • Broken link on top of Reference page.
    • I think the home page of your documentation fills a similar role as the GitHub README, in being the first point of interaction for new users where you should put your best foot forward. Right now your README is a lot more polished, so why not just include the README in the documentation home page and save yourself the hassle of maintaining both separately? (E.g. by converting the README to rst, see mpi4jax where we use this pattern.)
    • I would link more prominently to the integration with the tensorflow playground, which really does a great job of introducing the library! Love the gif.
    • Links under "dependencies" are broken (and the whole section is unnecessary IMO).
    • Emphasize more clearly what I should read to understand the theory behind Delve. You mention several papers but I think highlighting a specific one could be helpful.

    (This is a part of the ongoing review at openjournals/joss-reviews#3992)

    opened by dionhaefner 1
  • [JOSS review] API

    I wonder if CheckLayerSat is really the best name for your main tracker object. The imperative sounds more like a function name to me, and Sat is so overloaded that it's not obvious what it stands for. I would probably use something like SaturationTracker or so.

    But I understand that changing names in the public API can be a pain, so if you insist to keep it that's fine with me.

    (This is a part of the ongoing review at openjournals/joss-reviews#3992)

    opened by dionhaefner 1
  • [JOSS review] Test coverage

    I suggest adding a service like codecov to see how much is actually covered by tests, and adding a badge to the README. There's no shame in not reaching 100% coverage, but if you don't measure it you won't know whether your tests work as intended.

    (This is a part of the ongoing review at openjournals/joss-reviews#3992)

    opened by dionhaefner 1
  • [JOSS review] Incorrect qualifiers

    Qualifiers in setup.py:

            'Programming Language :: Python :: 3.4',
            'Programming Language :: Python :: 3.5',
            'Programming Language :: Python :: 3.6',
    

    But since you have python_requires='>=3.6', this should probably be something like 3.6 through 3.10.

    (This is a part of the ongoing review at openjournals/joss-reviews#3992)

    opened by dionhaefner 1
  • [JOSS review] Pinning Pytorch

    Is it really necessary to pin Pytorch to ==1.9.0? Seems quite restrictive to me, and makes the package harder to install (because if you e.g. do pip install delve and then pip install torchvision it gets overwritten again).

    (This is a part of the ongoing review at https://github.com/openjournals/joss-reviews/issues/3992)

    opened by dionhaefner 1
  • ConvTranspose2d layers not being tracked

    class simple(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 32, 3)
            self.deconv1 = nn.ConvTranspose2d(32, 3, 3)
    
    simple_model = simple()
    tracker2 = CheckLayerSat("my_experiment", save_to="plotcsv", modules=simple_model, device=image.device)
    

    output:

    added layer conv1
    Skipping deconv1
    

    This is an awesome tool, but I'd love to see how well the decoder part of my autoencoder works.

    opened by marthinwurer 6