The goal of this library is to generate more helpful exception messages for numpy/pytorch matrix algebra expressions.

Overview

Tensor Sensor

See the article Clarifying exceptions and visualizing tensor operations in deep learning code.

One of the biggest challenges when writing code to implement deep learning networks, particularly for us newbies, is getting all of the tensor (matrix and vector) dimensions to line up properly. It's really easy to lose track of tensor dimensionality in complicated expressions involving multiple tensors and tensor operations. Even when just feeding data into predefined Tensorflow network layers, we still need to get the dimensions right. When you ask for improper computations, you're going to run into some less than helpful exception messages.

To help myself and other programmers debug tensor code, I built this library. TensorSensor clarifies exceptions by augmenting messages and visualizing Python code to indicate the shape of tensor variables. It works with TensorFlow, PyTorch, JAX, and NumPy, as well as higher-level libraries like Keras and fastai.

TensorSensor is currently at 0.1 (Dec 2020), so I'm happy to receive issues created at this repo or via direct email.

Visualizations

For more, see examples.ipynb.

import torch
import tsensor

n = 200          # number of instances
d = 764          # number of instance features
n_neurons = 100  # how many neurons in this layer?

W = torch.rand(d, n_neurons)
b = torch.rand(n_neurons, 1)
X = torch.rand(n, d)
with tsensor.clarify():
    Y = W @ X.T + b

TensorSensor displays a visualization of the offending statement in a Jupyter notebook or a separate window.

Instead of the following default exception message:

RuntimeError: size mismatch, m1: [764 x 100], m2: [764 x 200] at /tmp/pip-req-build-as628lz5/aten/src/TH/generic/THTensorMath.cpp:41

TensorSensor augments the message with more information about which operator caused the problem and includes the shape of the operands:

Cause: @ on tensor operand W w/shape [764, 100] and operand X.T w/shape [764, 200]
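
Because the "Cause: ..." text is appended to the exception message itself, you also see it if you catch the exception yourself; a minimal sketch reusing the variables defined above:

try:
    with tsensor.clarify():
        Y = W @ X.T + b
except RuntimeError as e:
    print(e)  # the message now ends with the "Cause: ..." line shown above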

You can also get the full computation graph for an expression, including the shapes of all of its sub-results.

tsensor.astviz("b = [email protected] + (h+3).dot(h) + torch.abs(torch.tensor(34))", sys._getframe())

yields the following abstract syntax tree with shapes:

Install

pip install tensor-sensor             # This will only install the library for you
pip install tensor-sensor[torch]      # install pytorch related dependency
pip install tensor-sensor[tensorflow] # install tensorflow related dependency
pip install tensor-sensor[jax]        # install jax, jaxlib
pip install tensor-sensor[all]        # install tensorflow, pytorch, jax

which gives you module tsensor. I developed and tested with the following versions

$ pip list | grep -i flow
tensorflow                         2.3.0
tensorflow-estimator               2.3.0
$ pip list | grep -i numpy
numpy                              1.18.5
numpydoc                           1.1.0
$ pip list | grep -i torch
torch                              1.6.0
$ pip list | grep -i jax
jax                                0.2.6
jaxlib                             0.1.57
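
A quick sanity check after installation (assuming the package exposes a __version__ attribute):

$ python -c "import tsensor; print(tsensor.__version__)"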

Graphviz for tsensor.astviz()

To display abstract syntax trees (ASTs) with tsensor.astviz(...), you need the dot executable from graphviz, not just the Python library.

On Mac, do this before or after tensor-sensor install:

brew install graphviz

On Windows, apparently you need

conda install python-graphviz  # Do this first; gets dot executable and py lib
pip install tensor-sensor      # Or one of the other installs
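
Either way, you can verify that the dot executable is on your PATH:

$ dot -V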

Limitations

I rely on parsing lines that are assignments or expressions only, so the clarify and explain routines do not handle methods expressed on one line like:

def bar(): b + x * 3

Instead, use

def bar():
	b + x * 3

Watch out for side effects! I don't perform assignments, but any functions you call with side effects will be executed when I re-evaluate statements.
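
For example (a hypothetical illustration; load_batch is not a real API), a side-effecting call in the failing statement can run more than once:

import numpy as np
import tsensor

calls = 0

def load_batch():
    global calls
    calls += 1                  # side effect: may happen again when tsensor re-evaluates the line
    return np.ones((3, 4))

with tsensor.clarify():
    y = load_batch() @ np.ones((5, 2))   # shape mismatch triggers clarification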

Can't handle \ continuations.

With the Python threading package, don't call clarify() from multiple threads; the multiprocessing package should be fine.
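
A minimal sketch of the process-based pattern that is safe (worker is a hypothetical function):

import multiprocessing as mp
import numpy as np
import tsensor

def worker(seed):
    rng = np.random.default_rng(seed)
    W = rng.random((4, 3))
    x = rng.random((3, 1))
    with tsensor.clarify():     # one clarify() per process; never share it across threads
        return W @ x

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        print(pool.map(worker, [0, 1]))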

Also note: I've built my own parser to handle just the assignments and expressions that tsensor supports.

Deploy (parrt's use)

$ python setup.py sdist upload 

Or download and install locally

$ cd ~/github/tensor-sensor
$ pip install .

TODO

  • can i call pyviz in debugger?
Comments
  • Optional dependencies not working properly


    • Issue: For some reason, both pip install tensor-sensor and pip install tensor-sensor[torch] attempt to install TensorFlow too.

    • Environment:

      • win10 latest (10.10.2020)
      • conda 4.8.3 virtual env
      • pytorch 1.6.0 installed via conda (the official way)
      • no tensorflow
    • Workaround: pip install tensor-sensor --no-deps, then pip install graphviz

    build 
    opened by ColdTeapot273K 10
  • Supporting JAX


    Hi,

    Thanks for the awesome library! This has really made my debugging life much easier.

    Just a question: is there any plan to support JAX? I think this can be supported similarly, since the JAX API looks almost identical to NumPy's.
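
    For reference (not part of the original question), the overview above says JAX is now supported, so the same clarify() pattern should apply to jax.numpy arrays; a minimal sketch:

    import jax.numpy as jnp
    import tsensor

    W = jnp.ones((764, 100))
    X = jnp.ones((200, 764))
    with tsensor.clarify():
        Y = W @ X.T   # (764, 100) @ (764, 200): shape mismatch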

    compatibility 
    opened by ethanluoyc 8
  • Remove hard torch dependencies for keras/tensorflow users


    Currently, the _shape method in analysis.py always tries to check whether torch.Size exists. So for a Keras user who doesn't have torch installed, it throws an error, since analysis.py imports it.

      File "/home/shawley/Downloads/tensor-sensor/tsensor/analysis.py", line 27, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    

    Related #8
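
    A hedged sketch of the kind of guarded import that would remove the hard dependency (names follow the traceback above; the actual fix in the library may differ):

    try:
        import torch
    except ModuleNotFoundError:
        torch = None

    def _shape(v):
        shape = getattr(v, "shape", None)
        # Only consult torch.Size when torch is actually installed
        if torch is not None and isinstance(shape, torch.Size):
            return list(shape)
        return shape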

    enhancement compatibility 
    opened by noklam 5
  • executing and pure_eval


    Hi! I stumbled across this library and noticed I could help. I've written a couple of libraries that are great for this stuff:

    • https://github.com/alexmojaki/executing
    • https://github.com/alexmojaki/pure_eval

    Here is a demo of how you could use it for this kind of project:

    import ast
    
    import executing
    import pure_eval
    import sys
    
    
    def explain_error():
        ex = executing.Source.executing(sys.exc_info()[2])
        if not (ex.node and isinstance(ex.node, ast.BinOp)):
            return
    
        evaluator = pure_eval.Evaluator.from_frame(ex.frame)
        atok = ex.source.asttokens()
    
        try:
            print(f"Cannot add "
                  f"{atok.get_text(ex.node.left)} = {evaluator[ex.node.left]!r} and "
                  f"{atok.get_text(ex.node.right)} = {evaluator[ex.node.right]!r}")
        except pure_eval.CannotEval:
            print(f"Cannot safely evaluate operands of {ex.text()}. Extract them into variables.")
    
    
    a = ["abc", 3]
    
    try:
        print(a[0] + a[1])
    except:
        explain_error()
    
    try:
        print("only print once") + 3
    except:
        explain_error()
    

    To run this you will need to pip install executing pure_eval asttokens.

    This should improve the parsing and such significantly. For example this will handle line continuations just fine. pure_eval will only evaluate simple expressions to avoid accidentally triggering side effects.

    This uses the ast module from the standard library. Is there a reason you wrote your own parser? The best place to learn about ast is here: https://greentreesnakes.readthedocs.io/en/latest/
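
    For illustration (not from the original comment), here is what the standard-library ast module produces for the kind of statement tsensor analyzes (ast.dump's indent argument needs Python 3.9+):

    import ast

    tree = ast.parse("Y = W @ X.T + b")
    print(ast.dump(tree.body[0], indent=2))   # an Assign node whose value is a BinOp tree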

    I'll let you integrate it into your code yourself, but let me know if you have questions.

    suggestion 
    opened by alexmojaki 5
  • pip install submodule to avoid installing all dependencies


    Very often, people coming to this library already have the tensor library they are using (Keras, PyTorch, or TensorFlow). Currently, the package tries to install all dependencies. For example, if I am using PyTorch, I don't really need to install the big TensorFlow library in the environment.

    Added options

     pip install tensor-sensor[all]
     pip install tensor-sensor[torch]
     pip install tensor-sensor[tensorflow]
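
    For context, extras like these are typically declared via extras_require in setup.py; a hypothetical sketch (the project's actual setup.py may differ):

    # setup.py (sketch, not the project's actual file)
    from setuptools import setup, find_packages

    setup(
        name="tensor-sensor",
        packages=find_packages(),
        install_requires=["numpy", "graphviz", "matplotlib"],
        extras_require={
            "torch": ["torch"],
            "tensorflow": ["tensorflow"],
            "jax": ["jax", "jaxlib"],
            "all": ["torch", "tensorflow", "jax", "jaxlib"],
        },
    )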
    
    build compatibility hacktoberfest-accepted 
    opened by noklam 4
  • feat: explainer support pdf


    Enable PDF (and other supported formats) for savefig in the explainer by removing the hardcoded extension.

    import torch
    import tsensor as ts

    n, d, n_neurons = 200, 764, 100   # example sizes, matching the README example above

    W = torch.rand(d, n_neurons)
    b = torch.rand(n_neurons, 1)
    X = torch.rand(n, d)
    with ts.explain(savefig="my_inspection.pdf"):
        Y = W @ X.T + b
    
    enhancement 
    opened by sbrugman 3
  • Add tensor element type info


    From @sbrugman:

    Our concrete issue was with PyTorch (unexpectedly) converting tensors containing only integers to float, which later in the program resulted in an error because the tensor could not be used as an index. Another issue was changing size from 32-bit to 64-bit floats.

    It's indeed the element type of the matrix.

    There are multiple somewhat related issues: https://discuss.pytorch.org/t/problems-with-target-arrays-of-int-int32-types-in-loss-functions/140/2 https://discuss.pytorch.org/t/why-pytorch-is-giving-me-hard-time-with-float-long-double-tensor/14678/6

    The common denominator between dimensionality debugging is that both type and dimensionality are practically hidden from the user:

    import numpy as np
    import tsensor as ts
    
    x = np.arange(6, dtype=np.float32)
    
    with ts.explain(savefig="types.pdf"):
        print(x.dtype)
        print((x*x).dtype)
        print((np.sin(x)).dtype)
        print((x + np.arange(6)).dtype)
        print((np.multiply.outer(x, np.arange(2.0))).dtype)
        print((np.outer(x, np.arange(2.0))).dtype)
    
    enhancement 
    opened by parrt 3
  • Suppress visualisation of () as operator in tree


    Hello.

    I am using tensor-sensor to visualise DAGs of the domain specific language Slate, which is used in a code generation framework for Finite element methods called Firedrake. Slate expresses linear algebra operations on tensors. I am using tensor-sensor to visualise the DAG before and after an optimisation pass. An example would be the following:

    Before optimisation: triplemul_beforeopt.pdf
    After optimisation: tripleopt_afteropt.pdf

    While the visualisation of the tree is correct in both cases, I would quite like to suppress the node for the brackets (i.e. for the SubExpr node) to avoid confusion about the number of temporaries generated. Is there already a way of controlling this as a user, and if not, would there be interest in supporting it?

    Best wishes and thanks in advance, Sophia
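
    For illustration (not from the original issue), a small NumPy example where the parenthesized sub-expression gets its own node in the astviz tree:

    import sys
    import numpy as np
    import tsensor

    A = np.ones((3, 4))
    B = np.ones((3, 4))
    D = np.ones((4, 2))
    # (A + B) shows up as a separate sub-expression node in the visualized AST
    tsensor.astviz("C = (A + B) @ D", sys._getframe())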

    enhancement 
    opened by sv2518 3
  • Seems like a problem with the np.ones() function


    Hi! I really thank you for your brilliant work on tsensor, which helps me debug more effectively.

    But recently, when I run this code in Jupyter or Pycharm, it always leads to a KeyError:

    Code:

    with ts.explain():
        a = np.ones(3)

    KeyError report:

    KeyError                                  Traceback (most recent call last)
    <ipython-input> in <module>()
          1 with ts.explain():
    ----> 2     a = np.ones(3)
          3

    F:\anaconda_file2\envs\test\lib\site-packages\numpy\core\numeric.py in ones(shape, dtype, order)
        206     """
        207     a = empty(shape, dtype, order)
    --> 208     multiarray.copyto(a, 1, casting='unsafe')
        209     return a
        210

    <__array_function__ internals> in copyto(*args, **kwargs)

    F:\anaconda_file2\envs\test\lib\site-packages\tsensor\analysis.py in listener(self, frame, event, arg)
        266
        267     def listener(self, frame, event, arg):
    --> 268         module = frame.f_globals['__name__']
        269         info = inspect.getframeinfo(frame)
        270         filename, line = info.filename, info.lineno

    KeyError: '__name__'

    Is there anything I can do to fix this problem? Grateful to gain any feedback!
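
    The traceback suggests that a frame reached during tracing has no __name__ in its globals; a hedged sketch of a defensive fix in the listener (the maintainer's actual fix may differ):

    def listener(self, frame, event, arg):
        # Fall back when a frame (e.g. numpy's <__array_function__ internals>)
        # has no __name__ in its globals
        module = frame.f_globals.get('__name__', '')
        info = inspect.getframeinfo(frame)
        filename, line = info.filename, info.lineno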

    bug 
    opened by DemonsHunter 3
  • Unhandled statements cause exceptions (Was: Nested calls to clarify can raise stacked Exceptions)


    Hello,

    I created a decorator to call clarify around the forward function of my custom Pytorch models (derived from torch.nn.Module).

    Said decorator looks like this:

    def clarify(function: callable) -> callable:
        """ Clarify decorator."""
    
        def call_clarify(*args, **kwargs):
            with tsensor.clarify(fontname="DejaVu Sans"):
                return function(*args, **kwargs)
    
        return call_clarify
    

    When doing machine learning using Pytorch, models (derived from torch.nn.Module) can sometimes be "stacked". In a translation task, an EncoderDecoder's forward will call its Decoder's forward, itself calling the forward of an Attention module, for example.

    In such a case, this results in nested clarify calls, which raise a succession of exceptions, because some of the topmost clarify calls do not exit correctly. To be more specific, at line 124 of analysis.py, self.view can be None, which then raises an exception on self.view.show().

    A quick fix (that I made locally) was adding a check at line 131:

                    if self.view:
                        if self.show=='viz':
                            self.view.show()
                        augment_exception(exc_value, self.view.offending_expr)
    

    However, I am not sure this would be the best fix possible, as I am not sure whether that is a common problem or not and how/if this is intended to be fixed. What do you think?
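
    A minimal sketch (hypothetical module names, reusing the clarify decorator defined above) of the nesting that triggers the problem:

    import torch
    from torch import nn

    class Inner(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(10, 5)

        @clarify                      # decorator from the snippet above
        def forward(self, x):
            return self.linear(x)

    class Outer(nn.Module):
        def __init__(self):
            super().__init__()
            self.inner = Inner()

        @clarify                      # wrapping the outer forward nests clarify()
        def forward(self, x):
            return self.inner(x)

    # 7 features instead of 10: the inner forward fails and the exception
    # unwinds through both clarify() contexts
    Outer()(torch.rand(3, 7))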

    enhancement 
    opened by clefourrier 3
  • Contribution Guidelines


    I have a feeling that at some point many people (including me) would like to contribute to this library, and it would be great if it had some contribution guidelines.

    suggestion 
    opened by skat00sh 3
  • Boxes for operands packed too tightly


    There is some overlap with these boxes:


    import torch
    import tsensor
    
    n = 200         # number of instances
    d = 764         # number of instance features
    nhidden = 256
    
    Whh = torch.eye(nhidden, nhidden)  # Identity matrix
    Uxh = torch.randn(nhidden, d)
    bh  = torch.zeros(nhidden, 1)
    h = torch.randn(nhidden, 1)         # fake previous hidden state h
    r = torch.randn(nhidden, 3)         # fake this computation
    X = torch.rand(n,d)                 # fake input
    
    with tsensor.explain(savefig="tight.pdf"):   # a filename is assumed here; the original snippet passed savefig without a value
        r*h
    
    bug 
    opened by parrt 0
  • Showing too many matrices for complicated operands


    The following code generates an exception, but instead of showing just the results of the operand subexpressions, it shows all the pieces of them:


    import torch
    import tsensor
    
    n = 200         # number of instances
    d = 764         # number of instance features
    nhidden = 256
    
    Whh = torch.eye(nhidden, nhidden)  # Identity matrix
    Uxh = torch.randn(nhidden, d)
    bh  = torch.zeros(nhidden, 1)
    h = torch.randn(nhidden, 1)         # fake previous hidden state h
    # r = torch.randn(nhidden, 1)         # fake this computation
    r = torch.randn(nhidden, 3)         # fake this computation
    X = torch.rand(n,d)                 # fake input
    
    # Following code raises an exception
    with tsensor.clarify():
        h = torch.tanh(Whh @ (r*h) + Uxh @ X.T + bh)  # state vector update equation
    
    bug enhancement 
    opened by parrt 0
  • Improvement: See into nn.Sequential models


    The following exception not only generates a huge stack trace; TensorSensor's error-message augmentation also indicates that Y = model(X) is the issue, because it does not descend into tensor library code. It would be better to allow it to see inside the model pipeline so that it can notice that the error is actually here:

    nn.Linear(10, n_neurons)
    

    which should be

    nn.Linear(n_neurons, 10)
    

    Here's the full example:

    import torch
    import tsensor
    from torch import nn
    n = 20
    n_neurons = 50
    model = nn.Sequential(
        nn.Linear(784, n_neurons), # 28x28 flattened image
        nn.ReLU(),
        nn.Linear(10, n_neurons),  # 10 output classes (0-9) <---- ooops! reverse those
        nn.Softmax(dim=1)
    )
    X = torch.rand(n,784) # n instances of feature vectors with 784 pixels
    with tsensor.clarify():
        Y = model(X)
    

    The error message we get is here:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-32-203c7ad8d609> in <module>
          1 with tsensor.clarify():
    ----> 2     Y = model(X)
    
    ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1050                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1051             return forward_call(*input, **kwargs)
       1052         # Do not call functions when jit is used
       1053         full_backward_hooks, non_full_backward_hooks = [], []
    
    ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
        137     def forward(self, input):
        138         for module in self:
    --> 139             input = module(input)
        140         return input
        141 
    
    ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1050                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1051             return forward_call(*input, **kwargs)
       1052         # Do not call functions when jit is used
       1053         full_backward_hooks, non_full_backward_hooks = [], []
    
    ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
         94 
         95     def forward(self, input: Tensor) -> Tensor:
    ---> 96         return F.linear(input, self.weight, self.bias)
         97 
         98     def extra_repr(self) -> str:
    
    ~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
       1845     if has_torch_function_variadic(input, weight):
       1846         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
    -> 1847     return torch._C._nn.linear(input, weight, bias)
       1848 
       1849 
    
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (20x50 and 10x50)
    Cause: model(X) tensor arg X w/shape [20, 784]
    
    enhancement 
    opened by parrt 0
  • various enhancements


    Great work! Debugging this kind of math-expression code is a very big problem.

    Beyond scalars, vectors, and matrices, it would be nice to support higher-rank tensors. Some ideas:

    Visualizing 3D tensors and beyond:

    • Dimensions read left to right, e.g. [N, C, H, W] (N -> C -> H -> W); they could be drawn like the layers of an onion or nested boxes (box-in-box represents a dimension), with each dimension carrying its real meaning (N, C, H, W, etc.).
    • Named-tensor support.
    • PlaidML DSL types: Tensor, TensorDim, TensorIndex.
    • Represent actual data: audio, 1D plots, and text as vectors; images and 2D plots as matrices; etc.

    Expression graph:

    • Different colors for input variables, leaf parameters/variables, and temporary variables/activations; edge "road width" for element-wise operations; element-wise, slice, and other connections drawn as edges; print the AST in forward and backward mode.

    Animation:

    • Step-by-step expression computation (debug mode); slice, reshape, .T(), and other N-d array manipulations; TensorDim/TensorIndex operators; for example, a matmul expanded into broadcasting form such as (v.squeeze(0).expand_as(w) * w).sum(1, keepdim=True).unsqueeze(1); more, e.g. conv2d, etc.

    Interactive:

    • Interactive building blocks (visual programming); reverse interaction (debugging): select tensor elements and follow them through the expression graph; multiple views for a selected element, like convolution-visualizer's Input/Input-grad, Weight/Weight-grad, and Output/Output-grad views.

    NN support:

    • NN module visualization (conv2d); bigger computation graphs, as in pytorchrec; multi-computation-graph visualization and live debugging.

    Other useful links:

    • SimpNet
    • Memory-map visualization: visualize the virtual address space of a Windows process on a Hilbert curve
    • An NVIDIA presentation
    • Tensor network diagrams
    • einops

    enhancement 
    opened by koke2c95 2