A 3D visualizer for ML models and their internal tensors

Overview


The free Zetane Viewer is a tool to help understand and accelerate discovery in machine learning and artificial neural networks. It can be used to open the AI black box by visualizing and understanding a model's architecture and internal data (feature maps, weights, biases, and layer output tensors). Think of it as neuroimaging or brain imaging for artificial neural networks and machine-learning algorithms.

You can also launch your own Zetane workspace directly from your existing scripts or notebooks via a few commands using the Zetane Python API.
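For example, a launch script might look like the sketch below. The object and method names here (Context, model, onnx) are assumptions for illustration only, so consult the Zetane API documentation for the exact calls.

    import zetane as ztn  # the Zetane Python API

    # Hypothetical names for illustration -- verify against the API docs.
    zcontext = ztn.Context()               # launches or attaches to a Zetane engine
    zmodel = zcontext.model()              # create a model object in the workspace
    zmodel.onnx("my_model.onnx").update()  # load an ONNX file and render it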



Zetane Viewer

Installation

You can install the free Zetane Viewer on Windows, Linux, and Mac, and explore ZTN and ONNX files.

Download for Windows

Download for Linux

Download for Mac

Tutorial

In this video, we will show you how to load a Zetane or ONNX model, navigate the model and view different tensors:

Below are step-by-step instructions for loading and inspecting a model in the Zetane Viewer:

  • How to load a model

The viewer supports both .onnx and .ztn files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, click “Load Zetane Model” in the DATA I/O menu; to load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access the ZTN files for a few models to load, and you can also download ONNX files from the ONNX Model Zoo.
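If a file refuses to load, it can help to validate it outside the viewer first. A minimal check with the standard onnx Python package (independent of Zetane) might look like:

    import onnx

    model = onnx.load("emotion-ferplus.onnx")  # e.g. a file from the ONNX Model Zoo
    onnx.checker.check_model(model)            # raises if the graph is malformed
    print(model.graph.input[0])                # shows the expected input name and shape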


When a model is displayed in the Zetane engine, any component of the model can be accessed in a few clicks.

At the highest level, we have the model architecture, which is composed of interconnected nodes and tensors. Each node represents an operator in the computational graph. Typically, an input tensor is passed to the model and, as it flows through the nodes, is transformed into intermediate tensors until it reaches the model's output tensor. In the Zetane engine, data flows from left to right.
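The same graph can be walked programmatically. Assuming the onnx package, the sketch below prints every operator with its input and output tensors, in the same left-to-right order the engine renders:

    import onnx

    model = onnx.load("my_model.onnx")
    # graph.node is stored in topological order, so iterating follows the
    # data flow from the model input to the model output.
    for node in model.graph.node:
        print(node.op_type, list(node.input), "->", list(node.output))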


  • How to navigate

You can navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. Here is the complete list of navigation instructions. You can change the behavior of the mouse wheel (zoom or navigate) via the Mouse Zoom toggle in the top menu.


  • Loading custom model inputs

After loading a model, you may want to run inference on your own inputs. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model. The Zetane engine will attempt to intelligently resize the loaded file (where possible) so the data can be sent to the model. After loading and running the input, you can explore in detail how your model interpreted the input data.
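For instance, a .npy input for a model expecting a 1x1x64x64 grayscale image (the shape of emotion-ferplus, as seen in the crash log under Comments) could be prepared with NumPy:

    import numpy as np

    # The shape must match the model's input node; (1, 1, 64, 64) is just an example.
    x = np.random.rand(1, 1, 64, 64).astype(np.float32)
    np.save("custom_input.npy", x)  # load this file as a custom input in the viewer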


  • How to inspect different layers and feature maps

For each layer, you can view all the feature maps and filters by clicking “Show Feature Maps” on the corresponding node. You can inspect the inputs, outputs, weights, and biases using the tensor view bar.


  • Tensor view bar

By clicking the associated button, you can visualize the inputs, outputs, weights, and biases (if applicable) of each individual layer. You can also investigate the shape, type, mean, and standard deviation of each tensor.


Statistics about the tensor's values and their distribution are shown in the histogram in the top panel, along with the tensor's name and shape. The tensor and its values are rendered in the middle panel, and the bottom section contains tensor-visualization parameters and a refresh button that lets the user refresh the tensor. This is useful when the input or the weights change in real time.
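The same summary statistics the panel reports can be reproduced with NumPy for cross-checking:

    import numpy as np

    t = np.load("custom_input.npy")             # any tensor exported to .npy
    print(t.shape, t.dtype, t.mean(), t.std())  # shape, type, mean, standard deviation
    counts, edges = np.histogram(t, bins=32)    # the distribution the top panel plots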


  • Styles of tensor visualization

Tensors can be inspected in several ways, including a 3D view and a 2D view, with or without the actual values.




The available tensor views are:

  • N-dimensional tensor projected into 3D space
  • N-dimensional tensor projected into 2D space
  • Tensor values, with a color for each value based on the gradient shown on the x-axis of the distribution histogram (see the sketch below)
  • Tensor values only
  • Feature-maps view, when the tensor has three dimensions
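Conceptually, the color view is a min-max normalization of the tensor into [0, 1] followed by a lookup into the gradient on the histogram's x-axis; a NumPy sketch of the normalization step:

    import numpy as np

    t = np.load("custom_input.npy")
    # Map values to [0, 1]; each normalized value then indexes into the gradient.
    # The epsilon guards against constant tensors (max == min).
    norm = (t - t.min()) / (t.max() - t.min() + 1e-12)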

Models

We have generated a few ZTN models for inspecting their architecture and internal tensors in the viewer. We have also provided the code used to generate these models.
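The viewer also accepts plain ONNX exports, so any framework that can emit ONNX works. A generic PyTorch-to-ONNX export (a sketch, not the repository's exact script) looks like:

    import torch
    import torchvision

    # Random weights are fine for inspecting the architecture in the viewer.
    model = torchvision.models.resnet18().eval()
    dummy = torch.randn(1, 3, 224, 224)  # an example input fixes the graph's shapes
    torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)
    # resnet18.onnx can now be opened via "Import ONNX Model" in the viewer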

Image Classification

Object Detection

Image Segmentation

Body, Face and Gesture Analysis

Image Manipulation

XAI

Classic Machine Learning




Comments
  • BUG: Viewer crashes when loading any model

    I've tried loading multiple models including emotion-ferplus (both onnx and ztn formats) but they always immediately crash the viewer.

    OS: Ubuntu 20.04, Zetane 1.3.2. Dump:

    LoadUniverse(): ZTN_REQUIRE_LOGIN = 1 
    online = 0 
    ================== ExposeIRnodes: ================== 
    @@@ ExposeIRnodes() n_IR_outputs = 51. 
     <- [Parameter1367_reshape1. 
     <- [Minus340_Output_0. 
     <- [Block352_Output_0. 
     <- [Convolution362_Output_0. 
     <- [Plus364_Output_0. 
     <- [ReLU366_Output_0. 
     <- [Convolution380_Output_0. 
     <- [Plus382_Output_0. 
     <- [ReLU384_Output_0. 
     <- [Pooling398_Output_0. 
     <- [Dropout408_Output_0. 
     <- [Convolution418_Output_0. 
     <- [Plus420_Output_0. 
     <- [ReLU422_Output_0. 
     <- [Convolution436_Output_0. 
     <- [Plus438_Output_0. 
     <- [ReLU440_Output_0. 
     <- [Pooling454_Output_0. 
     <- [Dropout464_Output_0. 
     <- [Convolution474_Output_0. 
     <- [Plus476_Output_0. 
     <- [ReLU478_Output_0. 
     <- [Convolution492_Output_0. 
     <- [Plus494_Output_0. 
     <- [ReLU496_Output_0. 
     <- [Convolution510_Output_0. 
     <- [Plus512_Output_0. 
     <- [ReLU514_Output_0. 
     <- [Pooling528_Output_0. 
     <- [Dropout538_Output_0. 
     <- [Convolution548_Output_0. 
     <- [Plus550_Output_0. 
     <- [ReLU552_Output_0. 
     <- [Convolution566_Output_0. 
     <- [Plus568_Output_0. 
     <- [ReLU570_Output_0. 
     <- [Convolution584_Output_0. 
     <- [Plus586_Output_0. 
     <- [ReLU588_Output_0. 
     <- [Pooling602_Output_0. 
     <- [Dropout612_Output_0. 
     <- [Dropout612_Output_0_reshape0. 
     <- [Times622_Output_0. 
     <- [Plus624_Output_0. 
     <- [ReLU636_Output_0. 
     <- [Dropout646_Output_0. 
     <- [Times656_Output_0. 
     <- [Plus658_Output_0. 
     <- [ReLU670_Output_0. 
     <- [Dropout680_Output_0. 
     <- [Times690_Output_0. 
    node_name = Node_0000000000_Times622_reshape1_Reshape. 
     -> [Parameter1367_reshape1. 
    node_name = Node_0000000001_Minus340_Sub. 
     -> [Minus340_Output_0. 
    node_name = Node_0000000002_Block352_Div. 
     -> [Block352_Output_0. 
    node_name = Node_0000000003_Convolution362_Conv. 
     -> [Convolution362_Output_0. 
    node_name = Node_0000000004_Plus364_Add. 
     -> [Plus364_Output_0. 
    node_name = Node_0000000005_ReLU366_Relu. 
     -> [ReLU366_Output_0. 
    node_name = Node_0000000006_Convolution380_Conv. 
     -> [Convolution380_Output_0. 
    node_name = Node_0000000007_Plus382_Add. 
     -> [Plus382_Output_0. 
    node_name = Node_0000000008_ReLU384_Relu. 
     -> [ReLU384_Output_0. 
    node_name = Node_0000000009_Pooling398_MaxPool. 
     -> [Pooling398_Output_0. 
    node_name = Node_0000000010_Dropout408_Dropout. 
     -> [Dropout408_Output_0. 
    node_name = Node_0000000011_Convolution418_Conv. 
     -> [Convolution418_Output_0. 
    node_name = Node_0000000012_Plus420_Add. 
     -> [Plus420_Output_0. 
    node_name = Node_0000000013_ReLU422_Relu. 
     -> [ReLU422_Output_0. 
    node_name = Node_0000000014_Convolution436_Conv. 
     -> [Convolution436_Output_0. 
    node_name = Node_0000000015_Plus438_Add. 
     -> [Plus438_Output_0. 
    node_name = Node_0000000016_ReLU440_Relu. 
     -> [ReLU440_Output_0. 
    node_name = Node_0000000017_Pooling454_MaxPool. 
     -> [Pooling454_Output_0. 
    node_name = Node_0000000018_Dropout464_Dropout. 
     -> [Dropout464_Output_0. 
    node_name = Node_0000000019_Convolution474_Conv. 
     -> [Convolution474_Output_0. 
    node_name = Node_0000000020_Plus476_Add. 
     -> [Plus476_Output_0. 
    node_name = Node_0000000021_ReLU478_Relu. 
     -> [ReLU478_Output_0. 
    node_name = Node_0000000022_Convolution492_Conv. 
     -> [Convolution492_Output_0. 
    node_name = Node_0000000023_Plus494_Add. 
     -> [Plus494_Output_0. 
    node_name = Node_0000000024_ReLU496_Relu. 
     -> [ReLU496_Output_0. 
    node_name = Node_0000000025_Convolution510_Conv. 
     -> [Convolution510_Output_0. 
    node_name = Node_0000000026_Plus512_Add. 
     -> [Plus512_Output_0. 
    node_name = Node_0000000027_ReLU514_Relu. 
     -> [ReLU514_Output_0. 
    node_name = Node_0000000028_Pooling528_MaxPool. 
     -> [Pooling528_Output_0. 
    node_name = Node_0000000029_Dropout538_Dropout. 
     -> [Dropout538_Output_0. 
    node_name = Node_0000000030_Convolution548_Conv. 
     -> [Convolution548_Output_0. 
    node_name = Node_0000000031_Plus550_Add. 
     -> [Plus550_Output_0. 
    node_name = Node_0000000032_ReLU552_Relu. 
     -> [ReLU552_Output_0. 
    node_name = Node_0000000033_Convolution566_Conv. 
     -> [Convolution566_Output_0. 
    node_name = Node_0000000034_Plus568_Add. 
     -> [Plus568_Output_0. 
    node_name = Node_0000000035_ReLU570_Relu. 
     -> [ReLU570_Output_0. 
    node_name = Node_0000000036_Convolution584_Conv. 
     -> [Convolution584_Output_0. 
    node_name = Node_0000000037_Plus586_Add. 
     -> [Plus586_Output_0. 
    node_name = Node_0000000038_ReLU588_Relu. 
     -> [ReLU588_Output_0. 
    node_name = Node_0000000039_Pooling602_MaxPool. 
     -> [Pooling602_Output_0. 
    node_name = Node_0000000040_Dropout612_Dropout. 
     -> [Dropout612_Output_0. 
    node_name = Node_0000000041_Times622_reshape0_Reshape. 
     -> [Dropout612_Output_0_reshape0. 
    node_name = Node_0000000042_Times622_MatMul. 
     -> [Times622_Output_0. 
    node_name = Node_0000000043_Plus624_Add. 
     -> [Plus624_Output_0. 
    node_name = Node_0000000044_ReLU636_Relu. 
     -> [ReLU636_Output_0. 
    node_name = Node_0000000045_Dropout646_Dropout. 
     -> [Dropout646_Output_0. 
    node_name = Node_0000000046_Times656_MatMul. 
     -> [Times656_Output_0. 
    node_name = Node_0000000047_Plus658_Add. 
     -> [Plus658_Output_0. 
    node_name = Node_0000000048_ReLU670_Relu. 
     -> [ReLU670_Output_0. 
    node_name = Node_0000000049_Dropout680_Dropout. 
     -> [Dropout680_Output_0. 
    node_name = Node_0000000050_Times690_MatMul. 
     -> [Times690_Output_0. 
    node_name = Node_0000000051_Plus692_Add. 
    @@@ ExposeIRnodes() [Outputs] = 1 --> 52. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ***************** ValidateIRnodes: ***************** 
    ====================================  
    input_dims = [ 1, 1, 64, 64, ]. 
    --> input 0[Input3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 1[Constant339]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 2[Constant343]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 3, 3, ]. 
    --> input 3[Parameter3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 4[Parameter4]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 64, 64, 3, 3, ]. 
    --> input 5[Parameter23]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 6[Parameter24]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 64, 3, 3, ]. 
    --> input 7[Parameter63]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 8[Parameter64]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 128, 3, 3, ]. 
    --> input 9[Parameter83]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 10[Parameter84]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 128, 3, 3, ]. 
    --> input 11[Parameter575]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 12[Parameter576]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 13[Parameter595]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 14[Parameter596]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 15[Parameter615]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 16[Parameter616]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 17[Parameter655]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 18[Parameter656]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 19[Parameter675]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 20[Parameter676]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 21[Parameter695]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 22[Parameter696]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 23[Dropout612_Output_0_reshape0_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 256, 4, 4, 1024, ]. 
    --> input 24[Parameter1367]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 25[Parameter1367_reshape1_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 26[Parameter1368]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 1024, ]. 
    --> input 27[Parameter1403]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 28[Parameter1404]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 8, ]. 
    --> input 29[Parameter1693]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 8, ]. 
    --> input 30[Parameter1694]: Type 1; [1 dims] tensor  
    --------------- 
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    output_dims = [ 1, 8, ]. 
    --> output 0/1[Plus692_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 4096, 1024, ]. 
    --> output 1/1[Parameter1367_reshape1]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 2/1[Minus340_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 3/1[Block352_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 4/1[Convolution362_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 5/1[Plus364_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 6/1[ReLU366_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 7/1[Convolution380_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 8/1[Plus382_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 9/1[ReLU384_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 10/1[Pooling398_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 11/1[Dropout408_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 12/1[Convolution418_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 13/1[Plus420_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 14/1[ReLU422_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 15/1[Convolution436_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 16/1[Plus438_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 17/1[ReLU440_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 18/1[Pooling454_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 19/1[Dropout464_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 20/1[Convolution474_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 21/1[Plus476_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 22/1[ReLU478_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 23/1[Convolution492_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 24/1[Plus494_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 25/1[ReLU496_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 26/1[Convolution510_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 27/1[Plus512_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 28/1[ReLU514_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 29/1[Pooling528_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 30/1[Dropout538_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 31/1[Convolution548_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 32/1[Plus550_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 33/1[ReLU552_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 34/1[Convolution566_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 35/1[Plus568_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 36/1[ReLU570_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 37/1[Convolution584_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 38/1[Plus586_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 39/1[ReLU588_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 40/1[Pooling602_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 41/1[Dropout612_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 4096, ]. 
    --> output 42/1[Dropout612_Output_0_reshape0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 43/1[Times622_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 44/1[Plus624_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 45/1[ReLU636_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 46/1[Dropout646_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 47/1[Times656_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 48/1[Plus658_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 49/1[ReLU670_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 50/1[Dropout680_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 8, ]. 
    --> output 51/1[Times690_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 
    ValidateIRnodes() 52 --> 52=52=52 valid output tensors  
    --------------- 
    ----------------- ValidateIRnodes. ----------------- 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    *** ExposeIRnodes: 1 --> 52 Outputs. 
    *** type: [FLOAT] ~?= STRING 
    TVZ10()  input_dims = [ 1, 1, 64, 64, ]. 
    --> input [0]: Type FLOAT; [4 dims] tensor  
    --------------- 
    Warning: Could not load "/opt/zetane/lib/graphviz/libgvplugin_pango.so.6" - file not found
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stod
    /usr/bin/zetane: line 26: 37102 Aborted                 (core dumped) ./Zetane --server
    
    
    opened by paulgavrikov 4
  • Free Trial not Available

    After clicking "upgrade 2 pro", I arrive at your pricing page. Clicking on "free trial" redirects me to the documentation, which instructs me to click the button "upgrade 2 pro". Now I'm stuck in an infinite loop and unhappy about it.

    I'd like to successfully exit this loop and try your product. Any Tips?

    opened by Whadup 3
  • Sorry, I installed the deb on Ubuntu 20.04, but when I use it to load an input jpg it crashes. How can I get the log to find the cause?

    (base) [email protected]:~/Downloads$ sudo dpkg -i Zetane-1.7.0.deb
    (Reading database ... 330200 files and directories currently installed.)
    Preparing to unpack Zetane-1.7.0.deb ...
    Unpacking zetane (1.7.0) over (1.7.0) ...
    Setting up zetane (1.7.0) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for hicolor-icon-theme (0.17-2) ...

    opened by mathpopo 2
  • engine is not launched after running example 'hello world' code

    By following the guide here https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine launched but did not show anything.

    OS: Windows 10.0, Zetane 1.7.0

    Console output:

    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Connected to Zetane Engine!

    opened by wftubby 0
  • engine is not launched after running example 'hello world' code

    By following the guide here https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine was not launched and it kept printing "Dialing Zetane".

    OS: Ubuntu 18.04, Zetane 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    
    
    opened by akzing-hz 6
Releases(v1.7.4)
  • v1.7.4(Jun 1, 2022)

    Viewer Engine

    • Added support for ONNX 1.10.2
    • Added support for ONNX Runtime 1.10.0
    • Added support for Keras/TensorFlow 2.9.1
    • Improved progress notifications when loading Keras models
    • Fixed a crash caused by nested Keras models
    • Reduced Tensor viewer memory usage
    • Dropped support for Ubuntu 16.04 LTS. See the up-to-date Minimum Requirements.
    • Deprecated support for macOS 10.14 Mojave

    API

    • Added the Zetane API context manager to automate view updates and cleanup, resulting in less verbose code.
    • Added support for Python 3.9
    • Dropped support for Python 3.6
    • Fixed protobuf dependency versioning
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.7.4.deb(273.45 MB)
    Zetane-1.7.4.dmg(312.91 MB)
    Zetane-1.7.4.msi(300.01 MB)
  • 1.7.0(Nov 15, 2021)

  • 1.6.2(Sep 22, 2021)

    • Added output blocks for models to prevent navigation past the end of the model graph
    • Added a Top-K output view for tensors that match certain shapes, e.g. (1, N); classification models now have a more human-understandable output (see the sketch after this list)
    • Updated to onnxruntime 1.8.1 to support the latest ONNX opset
    • Improved autodetection of input shapes so that more inputs pass inference without shape errors
    • Fixes for RAM overuse
    • Fixes for Mesh API
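    For reference, the Top-K view over a (1, N) output corresponds to the usual argsort pattern; a generic NumPy sketch (model_output.npy is a hypothetical file):

        import numpy as np

        logits = np.load("model_output.npy").reshape(-1)  # e.g. a (1, N) classifier output
        k = 5
        top_k = np.argsort(logits)[::-1][:k]  # indices of the k highest scores
        print(top_k, logits[top_k])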
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.6.2.deb(347.23 MB)
    Zetane-1.6.2.dmg(326.94 MB)
    Zetane-1.6.2.msi(301.44 MB)
  • 1.5.0(Jun 16, 2021)

  • 1.4.0(May 26, 2021)

  • 1.3.0(Apr 21, 2021)

    • When ONNX models are loaded, an inference pass with sample data is run by default. That means all tensors / feature maps / weights / biases should be viewable immediately after input load. Please let us know if there are models that don't succeed at this initial pass so we can fix them!

    (PRO) User input nodes are now attached to the model architecture diagram. When using Zetane Viewer Pro ($15/month), you can load custom inputs and send them through the model. Currently supported formats are .npy, .npz, .pb, and the majority of image formats (jpg, png, tiff, hdr, pic).

    (PRO) When user inputs are misshapen, the engine will display an error about the model's shape expectation. Note that this feature is also available to free users without the error popup: the input node will load the user input and show its dimensions before attempting to run inference with the model.

    (PRO) Any errors during model inference will also appear in the UI; the shape error above is one example. Individual graph operations may fail at any point during the inference pass, and the engine will attempt to populate the graph outputs up to the point of the error, effectively a stack trace of the model run.

    As always, we welcome feedback, bug reports, and any suggestions you might have.

    Source code(tar.gz)
    Source code(zip)
    Zetane-1.3.0.deb(395.53 MB)
    Zetane-1.3.0.dmg(451.85 MB)
    Zetane-1.3.0.msi(452.15 MB)
  • 1.2.0(Apr 5, 2021)

    • Shape mismatch errors for running model inference are shown in the UI, describing the expected input and the given input. (PRO)
    • Changed the default UI interaction: the mouse wheel now zooms by default, and right-click drags the UI.
    • Panels now scroll or move on hover, not just after being selected.
    • Tensor viewer displays the original shape from file or API, without reordering the dimensions to fit the view panel.
    • User notification for version upgrade now appears in the UI.
    • Mac / Linux now run in API mode by default.
    • Added a new ZTN snapshot for XAI features.
    • User inputs now show above the Model Explorer panel's input node.
    • A number of bug fixes and performance improvements
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.2.0.deb(373.68 MB)
    Zetane-1.2.0.dmg(451.16 MB)
    Zetane-1.2.0.msi(438.49 MB)
  • 1.1.4(Feb 22, 2021)

Owner: Zetane Systems