Lolviz - A simple Python data-structure visualization tool for lists of lists, lists, dictionaries; primarily for use in Jupyter notebooks / presentations

Overview

lolviz

By Terence Parr. See Explained.ai for more stuff.

A very nice looking JavaScript port of lolviz, with improvements, by Adnan M.Sagar.

A simple Python data-structure visualization tool that started out as a List Of Lists (lol) visualizer but now handles arbitrary object graphs, including function call stacks! lolviz tries to recognize and nicely format common data structures such as lists, dictionaries, linked lists, and binary trees. This package is primarily for use in teaching and presentations with Jupyter notebooks, but it can also be used for debugging data structures. It is useful for visualizing machine learning data structures, such as decision trees, as well.

It seems that I'm always trying to describe how data is laid out in memory to students. There are really great data structure visualization tools but I wanted something I could use directly via Python in Jupyter notebooks.

The look and idea was inspired by the awesome Python tutor. The graphviz/dot tool does all of the heavy lifting underneath for layout; my contribution is primarily making graphviz display objects in a nice way.

Functionality

There are currently a number of functions of interest that return graphviz.files.Source objects:

  • listviz(): Horizontal list visualization
  • lolviz(): List of lists visualization with the first list vertical and the nested lists horizontal.
  • treeviz(): Binary trees visualized top-down, à la computer science.
  • objviz(): Generic object graph visualization that knows how to find lists of lists (like lolviz()) and linked lists. Trees are also displayed reasonably, but with a left-to-right orientation instead of top-down (a limitation of graphviz).

  • callsviz(): Visualize the call stack and anything pointed to by globals, locals, or parameters. You can limit the variables displayed by passing in a list of varnames as an argument.
  • callviz(): Same as callsviz() but displays only the current function's frame or you can pass in a Python stack frame object to display.
  • matrixviz(data): Display a numpy ndarray; only 1D and 2D are supported at the moment.
  • strviz(): Show a string like an array.

All of these functions return a graphviz.files.Source object. From generic Python, simply call the view() method on the returned object to display the visualization; from Jupyter, call function IPython.display.display() with the returned object as an argument. Function arguments are shown in italics.
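For instance, here is a minimal sketch (with made-up data) that exercises a couple of the functions above from a plain Python script; each call returns a graphviz.files.Source object whose view() method pops up the rendered image:

from lolviz import *

table = [[1, 2, 3], [4, 5], [6]]          # a list of lists
lolviz(table).view()                      # first list vertical, nested lists horizontal

class Node:                               # a tiny hand-rolled linked list
    def __init__(self, value, next=None):
        self.value, self.next = value, next

head = Node('a', Node('b', Node('c')))
objviz(head).view()                       # objviz() recognizes the linked-list shape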

Check out the examples.

Installation

First you need graphviz (more specifically the dot executable). On a mac it's easy:

$ brew install graphviz

Then just install the lolviz Python package:

$ pip install lolviz

or upgrade to the latest version:

$ pip install -U lolviz

Usage

From within generic Python, you can get a window to pop up using the view() method:

from lolviz import *
data = ['hi','mom',{3,4},{"parrt":"user"}]
g = listviz(data)
print(g.source) # if you want to see the graphviz source
g.view() # render and show graphviz.files.Source object

From within Jupyter notebooks you can skip the view() and render() calls because Jupyter knows how to display graphviz.files.Source objects directly.
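For example, a minimal sketch of a notebook cell; the explicit IPython.display.display() call works anywhere in the cell, and the returned object also renders automatically when it is the cell's last expression:

from IPython.display import display
from lolviz import listviz

data = ['hi', 'mom', {3, 4}, {"parrt": "user"}]
display(listviz(data))   # explicit display works anywhere in the cell
listviz(data)            # as the cell's last expression it also renders automatically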

For more examples that you can cut-and-paste, please see the jupyter notebook full of examples.

Preferences

There are global preferences you can set that affect the display for long values:

  • prefs.max_str_len (default 20): how many characters the string representation of a value may have before it is abbreviated with ....
  • prefs.max_horiz_array_len (default 70): lists can quickly become too wide and distort the visualization. This preference sets how long the combined string representations of the list values can get before a vertical representation of the list is used.
  • prefs.max_list_elems (default 10): horizontal and vertical lists and sets show at most this many elements.
  • prefs.float_precision (default 5): how many decimal places to show for floats.
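A minimal sketch of changing these preferences, assuming (as the names above suggest) that prefs is reachable as an attribute of the lolviz module:

import lolviz

lolviz.prefs.max_str_len = 40       # allow longer strings before abbreviating with ...
lolviz.prefs.max_list_elems = 5     # show at most 5 elements per list or set
lolviz.prefs.float_precision = 2    # two decimal places for floats

lolviz.listviz([3.14159265, 'a fairly long string value', 1, 2, 3, 4, 5, 6]).view()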

Implementation notes

Mostly notes for parrt to remember things.

Graphviz

  • Ugh. Warning: with shape=record, HTML-like labels can't use ports.

  • Warning: <td> and </td> must be on the same line or the row becomes super wide! (A small sketch follows.)
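A minimal sketch of both points, using the graphviz Python package directly (node and port names here are illustrative, not lolviz internals): the HTML-like label uses shape=none so its cell ports work, and each <td>...</td> pair stays on one line:

import graphviz

src = graphviz.Source('''
digraph G {
    n [shape=none, label=<
        <table border="0" cellborder="1" cellspacing="0">
        <tr><td port="p0">value</td><td port="p1">next</td></tr>
        </table>>];
    m [shape=box, label="payload"];
    n:p1 -> m;
}
''')
src.view()   # renders the two nodes with an edge leaving the "next" cell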

Deploy

$ python setup.py sdist upload 

Or to install locally

$ cd ~/github/lolviz
$ pip install .
Comments
  • name 'unicode' is not defined with Python 3.6

    Hi, thanks for this great python package.

    I noticed the following error when I do this in Jupyter on Python 3.6.2:

    import lolviz
    lolviz.objviz('1234567890')
    
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-4-7a8f0cba785c> in <module>()
          1 import lolviz
    ----> 2 lolviz.objviz('1234567890')
    
    /usr/lib/python3.6/site-packages/lolviz.py in objviz(o, orientation)
        217 """ % orientation
        218     reachable = closure(o)
    --> 219     s += obj_nodes(reachable)
        220     s += obj_edges(reachable)
        221     s += "}\n"
    
    /usr/lib/python3.6/site-packages/lolviz.py in obj_nodes(nodes)
        229     # currently only making subgraph cluster for linked lists
        230     # otherwise it squishes trees.
    --> 231     max_edges_for_type,subgraphs = connected_subgraphs(nodes)
        232     c = 1
        233     for g in subgraphs:
    
    /usr/lib/python3.6/site-packages/lolviz.py in connected_subgraphs(reachable, varnames)
        785     of sets containing the id()s of all nodes in a specific subgraph
        786     """
    --> 787     max_edges_for_type = max_edges_in_connected_subgraphs(reachable, varnames)
        788 
        789     reachable = closure(reachable, varnames)
    
    /usr/lib/python3.6/site-packages/lolviz.py in max_edges_in_connected_subgraphs(reachable, varnames)
        839     """
        840     max_edges_for_type = defaultdict(int)
    --> 841     reachable = closure(reachable, varnames)
        842     reachable = [p for p in reachable if isplainobj(p)]
        843     for p in reachable:
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure(p, varnames)
        706     from but don't include frame objects.
        707     """
    --> 708     return closure_(p, varnames, set())
        709 
        710 
    
    /usr/lib/python3.6/site-packages/lolviz.py in closure_(p, varnames, visited)
        710 
        711 def closure_(p, varnames, visited):
    --> 712     if p is None or isatom(p):
        713         return []
        714     if id(p) in visited:
    
    /usr/lib/python3.6/site-packages/lolviz.py in isatom(p)
        691 
        692 
    --> 693 def isatom(p): return type(p) == int or type(p) == float or type(p) == str or type(p) == unicode
        694 
        695 
    
    NameError: name 'unicode' is not defined
    
    py2py3 compatibility 
    opened by faultylee 5
  • Hidden values of nested table

    I find that a table in lolviz cannot show values if a cell in a row is a list, as in:

    
    T = [
        ['11','12','13','14',['a','b','c'],'16']
    ]
    objviz(T)
    

    Only a, b, and c are shown; 11, 12, 13, 14, and 16 are not. Is this configurable? Thanks!

    question 
    opened by pytkr 2
  • Invalid syntax on Python 3.6 and lolviz 1.2.1

    Contd. from https://github.com/parrt/lolviz/issues/11#issuecomment-326474125

    Traceback (most recent call last):
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
    
      File "<ipython-input-1-ae470ca34f62>", line 1, in <module>
        from lolviz import *
    
      File "/Users/srid/code/ipython/lib/python3.6/site-packages/lolviz.py", line 442
        print "hashcode =", hashcode(key)
                         ^
    SyntaxError: invalid syntax
    
    
    py2py3 compatibility 
    opened by srid 1
  • Can't install on Python 3

    This package is currently registered as Python 2.7 only in PyPI. This is what happens if you try to install it on Python 3.6:

    (venv) $ pip install lolviz
    Collecting lolviz
      Using cached lolviz-1.2.tar.gz
    lolviz requires Python '<3' but the running Python is 3.6.0
    

    I think it's just a matter of adding the Python 3 classifier in setup.py, because as far as I can see the code is fine for Python 3.

    duplicate 
    opened by miguelgrinberg 1
  • Dictionaries with tuple values are rendered incorrectly

    from lolviz import *
    
    dict_tuple_values = dictviz({'a': (1, 2)})
    dict_tuple_values.render(view=True, cleanup=True)
    

    It is rendered incorrectly (see the image attached to the issue).

    This only happens with 2-tuples. Any other tuple length is rendered as expected.

    This is the offending line: https://github.com/parrt/lolviz/blob/a6fc29b008a16993738416e793de71c3bff4175d/lolviz.py#L159

    What is the significance of ... and len(el) == 2 ?

    enhancement 
    opened by DeepSpace2 1
  • implement multi child tree

    Hi, I was recently looking for a package to visualize tree search algorithms, and I found this repository and really liked it. For tree visualization, though, the treeviz function only supports binary trees with children named left and right.

    So I made some modifications to support multiple children and user-specified child names. I added two new parameters to treeviz(): childfields and show_all_children. The field names listed in childfields are recognized as child nodes. If show_all_children=False, only the child fields that actually exist are visualized; otherwise all the names in childfields are shown.

    I know you may be busy and this repository hasn't been updated for a long time. You can check these modifications whenever you are free. I would be glad to receive any suggestions from you.

    enhancement 
    opened by sunyiwei24601 5
  • Could we add a "super" display function that chooses the best one based on the datatype?

    Hello @parrt! I used your lolviz project a few years ago, and I rediscovered it today. It's awesome!

    Could we add a "super" display function that chooses the best one based on the datatype?

    The documentation shows about 8 different functions, and I don't want to spend my time thinking about which function name goes with which datatype. What is described in the documentation translates almost trivially to Python code.

    modes = [ "str", "matrix", "call", "calls", "obj", "tree", "lol", "list" ]
    
    def unified_lolviz(obj, mode=None):
        """ Unified function to display `obj` with lolviz, in Jupyter notebook only."""
        if mode == "str" or isinstance(obj):
            return strviz(obj)
        if mode == "matrix" or "<class 'numpy.ndarray'>" == str(type(obj)):
            # can't use isinstance(obj, np.ndarray) without import numpy!
            return matrixviz(obj)
        if mode == "call": return callviz()
        if mode == "calls": return callviz()
        if mode == "lol" or isinstance(obj, list) and obj and isinstance(obj[0], list):
            # obj is a list, is non empty, and obj[0] is a list!
            return lolviz(obj)
        if mode == "list" or isinstance(obj, list):
            return listviz(obj)
        return objviz(obj)  # default
    

    So I'm opening this ticket: if you think this could be added to the library, we can discuss it here, and then I can take care of writing it, testing it, and sending a pull request; you can merge it and then update it on PyPI! What do you think?

    Regards from France, @Naereen

    enhancement 
    opened by Naereen 11
  • Create typed Class-Structure Diagram

    This isn't a bug, but a question for help / hints.

    I would like to use lolviz to create a graph of my Python class structure. Every class attribute is type-annotated, so it should be possible to link their interactions without creating class instances. Here is a minimal example:

    from copy import deepcopy
    from datetime import datetime as dt   # assumed import: the example below calls dt(...) and dt.date
    from typing import List               # needed for the List[...] annotations below
    from lolviz import *
    class Workout:
        def __init__(
            self,
            date: dt.date,
            name: str = "",
            duration: int = 0
        ):
            # assert isinstance(tss, (np.number, int))
            self.date: dt.date = date
            self.name: str = name
            self.duration: int = duration  # in seconds
            self.done: bool = False
    
    class Athlete:
        def __init__(
            self,
            name: str,
            birthday: dt.date,
            sports: List[str]
        ):
            self.name: str = name
            self.birthday: dt.date = birthday
            self.sports: List[str] = sports
    
    
    class DataContainer:
        def __init__(self, 
                     athlete: Athlete, 
                     tasks: List[Workout] = [], 
                     fulfilled: List[Workout] = []):
            self.athelete: Athlete = athlete
            self.tasks: List[Workout] = [w for w in tasks if isinstance(w, Workout)]
            self.fulfilled: List[Workout] = [w for w in fulfilled if isinstance(w, Workout)]
    
    me = Athlete("nico", dt(1990,3,1).date(), sports=["running","climbing"])
    t1 = Workout(date=dt(2020,1,1).date(), name="5k Run")
    f1 = deepcopy(t1)
    f1.done = True
    dc1 = DataContainer(me, tasks=[t1], fulfilled=[f1])
    

    Now I can use objviz(dc1) to create a diagram of the instance (screenshot attached in the issue).

    What I actually would like to achieve is a command like classviz(DataContainer) which will give me a similar chart, but not with the actual attribute values but their types. For sure there will be other small changes, but that's the basic idea.

    What I already can do is something like:

    def get_types(annotated_class):
        return (annotated_class.__name__, {k: v.__name__ for k,v in annotated_class.__init__.__annotations__.items()})
    
    get_types(Workout)
    

    which gives me something like ('Workout', {'date': 'date', 'name': 'str', 'duration': 'int'}). However, I don't find a proper way to create similar table elements that contain Workout in the header and the name-to-type mapping in the body.

    Can someone give me a hint on how to create such tables manually? I am also happy about any additional advice.
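    A possible starting point, sketched with the graphviz package directly rather than lolviz internals (the classnode helper and its HTML attributes are illustrative): build an HTML-like label with the class name as a header row and the name-to-type mapping from get_types() as the body:

    import graphviz

    def classnode(name, fields):
        # one row per attribute: "name: type"
        rows = ''.join('<tr><td align="left">%s: %s</td></tr>' % (k, v) for k, v in fields.items())
        return ('<<table border="0" cellborder="1" cellspacing="0">'
                '<tr><td bgcolor="lightgrey"><b>%s</b></td></tr>%s</table>>' % (name, rows))

    g = graphviz.Digraph(node_attr={'shape': 'none'})
    name, fields = get_types(Workout)            # ('Workout', {'date': 'date', ...}) from above
    g.node(name, label=classnode(name, fields))  # labels wrapped in <...> are HTML-like
    g.view()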

    feature 
    opened by krlng 0
  • visualization fails when variables contain "<" and ">" chars

    from lolviz import objviz
    a = {"hello": "<"}
    objviz(a).render()

    Error: Source.gv: syntax error in line 16 scanning a HTML string (missing '>'? bad nesting? longer than 16384?) String starting:<

    opened by ami-navon 1