NimTorch

A Nim frontend for pytorch, aiming to be mostly auto-generated and internally using ATen.

Because Nim compiles to C++, this is not a wrapper or binding library. It generates 1-to-1 native ATen code.

The only thing nimtorch needs from pytorch is ATen's core tensor library. Because of this, nimtorch is extremely versatile and can be compiled for any kind of device.

Current status

Early stage

  • The full ATen API, automatically generated from Declarations.yaml
  • Cuda support (add -d:cuda when compiling with nim)
  • WASM support (add -d:wasm when compiling with nim)
  • Gradient procs, automatically generated from derivatives.yaml
  • Autograd
  • Add missing derivatives
  • More high-level pytorch API (Module, Models etc.)
  • ...

The final aim is to be as compatible as possible with the pytorch API.
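
For a concrete taste of the current surface, here is a condensed sketch of the XOR training test (tests/test_xor.nim), assembled from the code quoted verbatim in the comments further down this page; treat it as illustrative rather than canonical:

import torch
import torch/[nn, optim]

# XOR truth table as training data
let
  inputs  = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
  targets = torch.tensor([[0.0], [1.0], [1.0], [0.0]])

# a tiny 2-4-1 network, mean squared error loss and SGD
let
  fc1 = nn.Linear(2, 4)
  fc2 = nn.Linear(4, 1)
  loss_fn = nn.MSELoss()
  optimizer = optim.SGD(fc1.parameters & fc2.parameters, lr = 0.01, momentum = 0.1)

for i in 0 ..< 50000:
  optimizer.zero_grad()
  # forward pass with relu and sigmoid activations
  let predictions = fc2(fc1(inputs).relu()).sigmoid()
  let loss = loss_fn(predictions, targets)
  loss.backward()
  optimizer.step()
  if i mod 5000 == 0:
    print(loss)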

Why

The ease of use of the Python language, while keeping fully native, bare-metal C++ performance.

Python code

# GRUCell
gi = x.matmul(w_input.t()) + b_input
gh = hidden.matmul(w_recur.t()) + b_recur
i_r, i_i, i_n = gi.chunk(3, 1)
h_r, h_i, h_n = gh.chunk(3, 1)
resetgate = (i_r + h_r).sigmoid()
inputgate = torch.sigmoid(i_i + h_i)
newgate = (i_n + resetgate * h_n).tanh()
hy = newgate + inputgate * (hidden - newgate)

Nim code

# GRUCell
let
  gi = x.matmul(w_input.t()) + b_input
  gh = hidden.matmul(w_recur.t()) + b_recur
  (i_r, i_i, i_nn) = gi.chunk(3, 1)
  (h_r, h_i, h_n)  = gh.chunk(3, 1)
  resetgate = (i_r + h_r).sigmoid()
  inputgate = torch.sigmoid(i_i + h_i)
  newgate = (i_nn + resetgate * h_n).tanh()
  hy = newgate + inputgate * (hidden - newgate)
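
A small Nim-ism in the snippet above: Nim compares identifiers ignoring underscores (and the case of all but the first character), so a variable literally named i_n would collide with the in keyword; renaming it to i_nn sidesteps the clash.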

Getting started

Requirements

Linux: a recent distribution on par with Ubuntu 18.04 in terms of libc and basic libraries; the gcc compiler

macOS: we compile with a 10.13 minimum-version flag, but it might work on lower versions too; Xcode for the compilers

Windows: Windows 10, the Visual Studio 2017 runtime and Visual Studio 2017 (any edition)

WASM: the latest Emscripten compiler and tools

Super easy, using conda

Linux, macOS and Windows

conda create -n nimtorch -c fragcolor nimtorch (append cuda10.0 for the CUDA 10 version, linux only, or wasm for the WASM version)
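
Assuming the variant package names follow the same pattern used for the aten package further down, the explicit variant commands would be:

conda create -n nimtorch -c fragcolor nimtorch cuda10.0

conda create -n nimtorch -c fragcolor nimtorch wasm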

source activate nimtorch or on windows: conda activate nimtorch

This will install the nim and ATen binaries, fragments and nimtorch, all in one command; nothing else is needed.

Make sure you use a recent version of conda and have a compiler installed on your system; on windows you have to add --cc:vcc and compile from a developer prompt.

Make sure your system is recent (Ubuntu 18.04, macOS High Sierra or Windows 10 as reference) and that you have cuda 9.2 installed if you need cuda (linux only; more cuda versions are coming, please open an issue if you need a specific one).

Test with something like:

nim cpp -o:test -r $ATEN/dist/pkgs/nimtorch-#head/tests/test_xor.nim

or on windows... (because dlls need to be side by side)

nim cpp -o:%ATEN%/lib/test.exe -r %ATEN%/dist/pkgs/nimtorch-#head/tests/test_xor.nim

Semi manual way

Linux, macOS and Windows

Check what version of ATen/PyTorch we need in conda/nimtorch/meta.yaml - should be something like aten ==2018.10.10.1089

Note the version as you will need it in the next step

conda create -n aten -c fragcolor aten={version}

or

WASM

conda create -n aten -c fragcolor aten={version} wasm

or Cuda 10.0 (linux only)

conda create -n aten -c fragcolor aten={version} cuda10.0

Activate the aten environment:

source activate aten or on windows: conda activate aten

  1. Make sure you have a recent Nim and Nimble version in your path
  2. Clone the release branch: git clone -b release https://github.com/fragcolor-xyz/nimtorch.git
  3. cd nimtorch
  4. nimble develop

Finally

Run the self test: nim cpp -o:test -r torch.nim (on windows use -o:%ATEN%/lib/test.exe instead, because of the dll location)

In the case of WASM:

Run the self test: nim cpp -d:wasm -o:test.js torch.nim && node test.js (needs node.js)

Manual way without requiring conda

Build ATen

pip2 install pyyaml typing
git clone -b fragcolor-devel https://github.com/fragcolor-xyz/pytorch.git
cd pytorch
git reset --hard <commit hash> # from torch/commit.txt
git submodule update --init
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DUSE_CUDA=OFF -DBUILD_ATEN_ONLY=ON -DCMAKE_INSTALL_PREFIX=`pwd`/output ../
make -j4
make install

# also copy derivatives if we want to run generator.nim or nimble test
# notice generator.nim might need python3 and pyyaml
cp ../tools/autograd/derivatives.yaml `pwd`/output/share/

Test the build

cd <path to your nimtorch repository>
ATEN=<ATen install path> nim cpp -r -f -o:/tmp/z01 torch.nim # for eg: ATEN=pathto/pytorch/build/output/

Notes

  • We suggest setting the OMP_WAIT_POLICY environment variable to PASSIVE when running on CPU.
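
For example, on linux or macOS the variable can be set for a single run (test being the self-test binary built earlier):

OMP_WAIT_POLICY=PASSIVE ./test

PASSIVE lets idle OpenMP worker threads sleep instead of busy-waiting, so they do not burn CPU between tensor ops.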
Comments
  • OSX: `nim cpp -r torch.nim` fails

    after building ATEN via https://github.com/fragcolor-xyz/nimtorch/issues/5#issuecomment-427937945:

    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/lib nim cpp -r torch.nim
    error: unknown type name 'constexpr'
    
    ATEN=/tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output nim cpp -r --passC:-std=c++11 torch.nim
    /Users/timothee/.cache/nim/torch_d/torch_tensors.cpp:206:14: error: no matching constructor for initialization of 'at::IntList' (aka 'ArrayRef<long long>')
            at::IntList temp(((long*) ((&self[((NI) 0)]))), selfLen_0);
                        ^    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /tmp/d11/Users/timothee/git_clone/nim/pytorch/built/output/include/ATen/core/ArrayRef.h:67:13: note: candidate constructor not viable: no known conversion from 'long *' to 'const long long *' for 1st argument
      constexpr ArrayRef(const T* data, size_t length)
    
    
    question macOS 
    opened by timotheecour 11
  • OSX: building ATEN via `docker build -t docker_aten_native .` fails

    Download ATen binaries or build it (instructions under)

    => no OSX option

    ATen build instructions

    cd docker && cd docker-aten-native

    is that a typo?

    is my best bet to try to follow instructions from https://github.com/pytorch/pytorch or https://github.com/pytorch/pytorch/tree/master/aten ?

    enhancement macOS 
    opened by timotheecour 11
  • SIGSEGV when running examples

    Using the latest nimtorch, I'm getting a SIGSEGV when trying to compile one of the examples (test_xor).

    I'm not sure if this is a compiler bug or a problem with Aten?

    Dockerfile:

    FROM continuumio/miniconda
    
    RUN conda install -y -c fragcolor nimtorch
    
    ADD test.nim .
    
    docker build -t nimtorch_conda .
    docker run --rm nimtorch_conda nim c test.nim
    #Hint: used config file '/opt/conda/config/nim.cfg' [Conf]
    #Hint: used config file '/opt/conda/config/config.nims' [Conf]
    #Hint: system [Processing]
    #Hint: widestrs [Processing]
    #Hint: io [Processing]
    #Hint: test [Processing]
    #Hint: torch [Processing]
    #Hint: macros [Processing]
    #Hint: cpp [Processing]
    #Hint: nimline [Processing]
    #Hint: tables [Processing]
    #Hint: hashes [Processing]
    #Hint: strutils [Processing]
    #Hint: parseutils [Processing]
    #Hint: math [Processing]
    #Hint: bitops [Processing]
    #Hint: algorithm [Processing]
    #Hint: unicode [Processing]
    #Hint: os [Processing]
    #Hint: pathnorm [Processing]
    #Hint: osseps [Processing]
    #Hint: posix [Processing]
    #Hint: times [Processing]
    #Hint: options [Processing]
    #Hint: typetraits [Processing]
    #Hint: torch_cpp [Processing]
    #Hint: tensors [Processing]
    #Hint: sequtils [Processing]
    #Hint: sets [Processing]
    #Hint: strformat [Processing]
    #Hint: tensor_ops [Processing]
    #Hint: autograd_macro [Processing]
    #Hint: autograd_backward [Processing]
    #Hint: nn [Processing]
    #Hint: modules [Processing]
    #Hint: init [Processing]
    #Hint: python_helpers [Processing]
    #Hint: functional [Processing]
    #SIGSEGV: Illegal storage access. (Attempt to read from nil?)
    

    test.nim:

    import torch
    import torch/[nn, optim]
    
    let inputs = torch.tensor([
      [0.0, 0.0],
      [0.0, 1.0],
      [1.0, 0.0],
      [1.0, 1.0],
    ])
    
    let targets = torch.tensor([
      [0.0],
      [1.0],
      [1.0],
      [0.0],
    ])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters, lr = 0.01, momentum = 0.1)
    
    set_num_threads(1)
    
    when defined gperftools:
      discard ProfilerStart("test_xor.log")
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      let predictions = inputs.fc1.relu.fc2.sigmoid
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    
    when defined gperftools:
      ProfilerStop()
    
    opened by singularperturbation 4
  • can't install with nimble

    $ nim --version
    Nim Compiler Version 0.18.1 [Windows: amd64]
    Compiled at 2018-08-18
    Copyright (c) 2006-2018 by Andreas Rumpf
    git hash: b5171f57ef00bffb12387d7daf3487c5e07645f9
    active boot switches: -d:release

    $ nimble install nimtorch
    Prompt: nimtorch not found in any local packages.json, check internet for updated packages? [y/N]
    Answer: y
    Downloading Official package list
    Success Package list downloaded.
    Tip: 3 messages have been suppressed, use --verbose to show them.
    Error: Package not found.

    opened by retsyo 3
  • Cannot compile nimtorch tests on Windows 10

    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib
    C:/Program Files/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib
    collect2.exe: error: ld returned 1 exit status
    Error: execution of an external program failed: 'g++.exe   -o C:\Users\vsagar200\work\soft\nimtorch\tests\test_xor.exe  C:\Users\vsagar200\nimcache\test_xor_d\torch_test_xor.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_system.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\fragments_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_macros.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_tables.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_hashes.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_math.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_parseutils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_bitops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_algorithm.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_unicode.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_ospaths.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_winlean.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_dynlib.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_torch_cpp.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_os.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_times.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_options.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_typetraits.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_strformat.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensors.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sequtils.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\stdlib_sets.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_tensor_ops.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_macro.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_autograd_backward.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_nn.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_init.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_python_helpers.cpp.o C:\Users\vsagar200\nimcache\test_xor_d\torch_optim.cpp.o  -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib -LC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib64 -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\ATen_cpu.lib -lC:\Users\vsagar200\work\bin\nim-0.19.0_x64\ATen-windows10-cpu\lib\cpuinfo.lib '
    

    Although the ATEN path is correctly set. Tried on nim 0.18.1 and 0.19.0; the .lib files are present at the very location the error says it cannot find.

    question 
    opened by eshitasagar 2
  • install Aten with nimtorch

    Thanks for the library. Are there any plans to have ATen distributed with nimtorch? This would make it easier to build command-line tools that depend on it.

    opened by brentp 1
  • [TODO] [offtopic] discussion regarding nimble limitation (from comments in #6)

    Moved here from the discussion started at https://github.com/fragcolor-xyz/nimtorch/issues/6#issuecomment-430511494, to keep each topic separate.

    @sinkingsugar

    Nimble has a major flaw: by default it applies all the packages it has to every project you have. That's my major concern, since it can easily create a mess if not considered.

    I will give you an example exactly with nimtorch:

    • Have nimtorch installed as a nimble package
    • Also work on the repository
    • Did not run nimble develop
    • Rename a file in the repository
    • Forget to update the module import myrenamedfile into import newlocation/myrenamedfile
    • The build will succeed, yet it will use import myrenamedfile from nimble this time...

    But it all works if you run nimble develop, right? I think that's expected and I don't see a limitation here (in the sense of making it impossible to do certain things). If you really feel something is ill-designed with nimble, it really should be a bug report in https://github.com/nim-lang/nimble/issues/, otherwise it will never get fixed (if there's anything to fix).

    That being said, one possibility would be (and that's doable, not a fundamental flaw IMO): if you call nimble build inside a local package foo, nimble could remove an installed package named foo from the search path.

    opened by timotheecour 1
  • Add a Gitter chat badge to README.md

    fragcolor-xyz/nimtorch now has a Chat Room on Gitter

    @sinkingsugar has just created a chat room. You can visit it here: https://gitter.im/nimtorch/Lobby.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 0
  • Is this project still active

    Hi,

    I think a working binding to pytorch from nim could be very valuable to support the use of nim in data science. This project seems to have been inactive for a long time now and it is not using the current version of pytorch - any chance that it will be updated?

    opened by bitstormFA 5
  • Question. Import model trained under a Python / Torch library.

    Would it be possible, or even advisable, to import a pth or pkl that was trained using FastAI into NimTorch, for the purpose of exposing it in a backend written in Nim (for efficiency and speed)?

    opened by UNIcodeX 17
  • Add "install fragments" to the non-conda installation doc

    Add nimble install fragments to this part of the readme. As a beginner I wasted an hour trying to figure out what I did wrong; it turned out to be just a package error.

    opened by mritunjaymusale 0
  • Error: expression 'step(optimizer)' has no type (or is ambiguous)

    I am trying to use nimtorch (I am new to pytorch as well). I am struggling to run a first example.

    I have installed in Windows 10 like this:

    conda create -n aten -c fragcolor aten=2019.02.16.2841
    nimble install fragments
    nimble install torch@#head
    

    And then I tried to compile the code from here:

    import torch
    import torch/[nn, optim]
    
    let
      inputs = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
      targets = torch.tensor([[0.0], [1.0], [1.0], [0.0]])
    
    let
      fc1 = nn.Linear(2, 4)
      fc2 = nn.Linear(4, 1)
      loss_fn = nn.MSELoss()
      optimizer = optim.SGD(fc1.parameters & fc2.parameters, lr = 0.01, momentum = 0.1)
    
    for i in 0 ..< 50000:
      optimizer.zero_grad()
    
      var predictions = fc1(inputs).relu()
      predictions = fc2(predictions).sigmoid()
    
      let loss = loss_fn(predictions, targets)
      loss.backward()
      discard optimizer.step()
    
      if i mod 5000 == 0:
        print(loss)
    

    I compile by doing:

    c:> conda activate aten
    c:> nim cpp ex01
    ....
    C:\Users\mantielero\Documents\src\torch\ex01.nim(22, 25) Error: expression 'step(optimizer)' has no type (or is ambiguous)
    

    which is the line:

    discard optimizer.step()
    

    What am I doing wrong?

    opened by mantielero 0
  • Nimble installation fails

    It's been a while since the last commit. Please don't tell me nimtorch is dead; it's wonderful, I want to start using it, and libraries like this would pump up nim's popularity. Are you leaving it for a while but planning to come back to it, or is it completely abandoned?

    opened by RecruitMain707 6