Neural Turing Machine (NTM) & Differentiable Neural Computer (DNC) with pytorch & visdom

Overview

  • Sample on-line plotting while training (avg loss) / testing (write/read weights & memory): NTM on the copy task (top 2 rows; the 1st row converges to writing sequentially to lower memory locations, the 2nd row to writing sequentially to upper locations) and DNC on the repeat-copy task (3rd row). Note that the write/read weights shown here are taken after the location focus, so by design they are no longer necessarily normalized within each head:

  • Sample logs while training DNC on the repeat-copy task (we currently use WARNING as the logging level to suppress the INFO printouts from visdom):
[WARNING ] (MainProcess) <===================================>
[WARNING ] (MainProcess) bash$: python -m visdom.server
[WARNING ] (MainProcess) http://localhost:8097/env/daim_17051000
[WARNING ] (MainProcess) <===================================> Agent:
[WARNING ] (MainProcess) <-----------------------------======> Env:
[WARNING ] (MainProcess) Creating {repeat-copy | } w/ Seed: 123
[WARNING ] (MainProcess) Word     {length}:   {4}
[WARNING ] (MainProcess) Words #  {min, max}: {1, 2}
[WARNING ] (MainProcess) Repeats  {min, max}: {1, 2}
[WARNING ] (MainProcess) <-----------------------------======> Circuit:    {Controller, Accessor}
[WARNING ] (MainProcess) <--------------------------------===> Controller:
[WARNING ] (MainProcess) LSTMController (
  (in_2_hid): LSTMCell(70, 64, bias=1)
)
[WARNING ] (MainProcess) <--------------------------------===> Accessor:   {WriteHead, ReadHead, Memory}
[WARNING ] (MainProcess) <-----------------------------------> WriteHeads: {1 heads}
[WARNING ] (MainProcess) DynamicWriteHead (
  (hid_2_key): Linear (64 -> 16)
  (hid_2_beta): Linear (64 -> 1)
  (hid_2_alloc_gate): Linear (64 -> 1)
  (hid_2_write_gate): Linear (64 -> 1)
  (hid_2_erase): Linear (64 -> 16)
  (hid_2_add): Linear (64 -> 16)
)
[WARNING ] (MainProcess) <-----------------------------------> ReadHeads:  {4 heads}
[WARNING ] (MainProcess) DynamicReadHead (
  (hid_2_key): Linear (64 -> 64)
  (hid_2_beta): Linear (64 -> 4)
  (hid_2_free_gate): Linear (64 -> 4)
  (hid_2_read_mode): Linear (64 -> 12)
)
[WARNING ] (MainProcess) <-----------------------------------> Memory:     {16(batch_size) x 16(mem_hei) x 16(mem_wid)}
[WARNING ] (MainProcess) <-----------------------------======> Circuit:    {Overall Architecture}
[WARNING ] (MainProcess) DNCCircuit (
  (controller): LSTMController (
    (in_2_hid): LSTMCell(70, 64, bias=1)
  )
  (accessor): DynamicAccessor (
    (write_heads): DynamicWriteHead (
      (hid_2_key): Linear (64 -> 16)
      (hid_2_beta): Linear (64 -> 1)
      (hid_2_alloc_gate): Linear (64 -> 1)
      (hid_2_write_gate): Linear (64 -> 1)
      (hid_2_erase): Linear (64 -> 16)
      (hid_2_add): Linear (64 -> 16)
    )
    (read_heads): DynamicReadHead (
      (hid_2_key): Linear (64 -> 64)
      (hid_2_beta): Linear (64 -> 4)
      (hid_2_free_gate): Linear (64 -> 4)
      (hid_2_read_mode): Linear (64 -> 12)
    )
  )
  (hid_to_out): Linear (128 -> 5)
)
[WARNING ] (MainProcess) No Pretrained Model. Will Train From Scratch.
[WARNING ] (MainProcess) <===================================> Training ...
[WARNING ] (MainProcess) Reporting       @ Step: 500 | Elapsed Time: 30.609361887
[WARNING ] (MainProcess) Training Stats:   avg_loss:         0.014866309287
[WARNING ] (MainProcess) Evaluating      @ Step: 500
[WARNING ] (MainProcess) Evaluation        Took: 1.6457400322
[WARNING ] (MainProcess) Iteration: 500; loss_avg: 0.0140423600748
[WARNING ] (MainProcess) Saving Model    @ Step: 500: /home/zhang/ws/17_ws/pytorch-dnc/models/daim_17051000.pth ...
[WARNING ] (MainProcess) Saved  Model    @ Step: 500: /home/zhang/ws/17_ws/pytorch-dnc/models/daim_17051000.pth.
[WARNING ] (MainProcess) Resume Training @ Step: 500
...

What is included?

This repo currently contains the following algorithms:

  • Neural Turing Machines (NTM) [1]
  • Differentiable Neural Computers (DNC) [2]

Tasks:

  • copy
  • repeat-copy

Code structure & Naming conventions

NOTE: we follow the same code structure as pytorch-rl so that code can easily be transplanted between the two repos.

  • ./utils/factory.py

We suggest users refer to ./utils/factory.py, where all the integrated Env, Circuit and Agent classes are collected into Dict's. All of the core classes are implemented in ./core/. The factory pattern in ./utils/factory.py keeps the code clean: no matter which type of Circuit you want to train or which type of Env you want to train on, you only need to modify some parameters in ./utils/options.py and ./main.py will do the rest (NOTE: ./main.py itself never needs to be modified).
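
A minimal, self-contained sketch of this factory-dict idea, using stand-in classes (the dict keys match the task/circuit names used in this repo, but the class bodies below are illustrative only, not the repo's actual implementations):

class CopyEnv(object): pass          # stand-in for the copy-task Env
class RepeatCopyEnv(object): pass    # stand-in for the repeat-copy Env
class NTMCircuit(object): pass       # stand-in for the NTM circuit
class DNCCircuit(object): pass       # stand-in for the DNC circuit

EnvDict     = {"copy": CopyEnv, "repeat-copy": RepeatCopyEnv}
CircuitDict = {"ntm": NTMCircuit, "dnc": DNCCircuit}

def build(env_type, circuit_type):
    # look the requested components up by name; main.py only ever needs
    # the string names chosen in ./utils/options.py
    return EnvDict[env_type](), CircuitDict[circuit_type]()

env, circuit = build("repeat-copy", "dnc")
print(type(env).__name__, type(circuit).__name__)  # RepeatCopyEnv DNCCircuit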

  • naming conventions

To make the code cleaner and more readable, we name variables using the following pattern (a short illustrative snippet follows the list):

  • *_vb: torch.autograd.Variable's or a list of such objects
  • *_ts: torch.Tensor's or a list of such objects
  • otherwise: normal python datatypes
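
A hypothetical snippet illustrating the convention (the variable names here are made up for illustration, not taken from the repo):

import torch
from torch.autograd import Variable  # this repo is written against the old Variable API

input_ts = torch.zeros(16, 4)   # *_ts: a torch.Tensor
input_vb = Variable(input_ts)   # *_vb: a torch.autograd.Variable wrapping it
batch_size = 16                 # otherwise: a plain python datatype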

Dependencies

  • pytorch
  • visdom

How to run:

You only need to modify some parameters in ./utils/options.py to train a new configuration.

  • Configure your training in ./utils/options.py:
  • line 12: add an entry into CONFIGS to define your training (agent_type, env_type, game, circuit_type); a hypothetical example entry is sketched below, after the run command
  • line 28: choose the entry you just added
  • line 24-25: fill in your machine/cluster ID (MACHINE) and timestamp (TIMESTAMP) to define your training signature (MACHINE_TIMESTAMP). The model file and the log file of this training will be saved under this signature (./models/MACHINE_TIMESTAMP.pth & ./logs/MACHINE_TIMESTAMP.log respectively). The visdom visualization will also be displayed under this signature (first start the visdom server by typing python -m visdom.server & in bash, then open http://localhost:8097/env/MACHINE_TIMESTAMP in your browser)
  • line 28: to train a model, set mode=1 (training visualization will be under http://localhost:8097/env/MACHINE_TIMESTAMP); to test the model from this training, simply set mode=2 (testing visualization will be under http://localhost:8097/env/MACHINE_TIMESTAMP_test).
  • Run:

python main.py
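
For reference, here is a hypothetical CONFIGS entry following the (agent_type, env_type, game, circuit_type) format described above; the field values and layout are assumptions, so check ./utils/options.py for the actual format:

CONFIGS = [
    # agent_type, env_type,      game, circuit_type
    [ "sl",       "copy",        "",   "ntm" ],   # hypothetical values
    [ "sl",       "repeat-copy", "",   "dnc" ],   # hypothetical values
]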


Implementation Notes:

The difference between NTM & DNC is stated as follows in the DNC paper [2]:

Comparison with the neural Turing machine. The neural Turing machine (NTM) was the predecessor to the DNC described in this work. It used a similar architecture of neural network controller with read–write access to a memory matrix, but differed in the access mechanism used to interface with the memory. In the NTM, content-based addressing was combined with location-based addressing to allow the network to iterate through memory locations in order of their indices (for example, location n followed by n+1 and so on). This allowed the network to store and retrieve temporal sequences in contiguous blocks of memory. However, there were several drawbacks. First, the NTM has no mechanism to ensure that blocks of allocated memory do not overlap and interfere—a basic problem of computer memory management. Interference is not an issue for the dynamic memory allocation used by DNCs, which provides single free locations at a time, irrespective of index, and therefore does not require contiguous blocks. Second, the NTM has no way of freeing locations that have already been written to and, hence, no way of reusing memory when processing long sequences. This problem is addressed in DNCs by the free gates used for de-allocation. Third, sequential information is preserved only as long as the NTM continues to iterate through consecutive locations; as soon as the write head jumps to a different part of the memory (using content-based addressing) the order of writes before and after the jump cannot be recovered by the read head. The temporal link matrix used by DNCs does not suffer from this problem because it tracks the order in which writes were made.

We thus make some effort to put the two together in a combined codebase. The implemented classes have the following hierarchy (a stand-in code sketch of this composition follows the list):

  • Agent
    • Env
    • Circuit
      • Controller
      • Accessor
        • WriteHead
        • ReadHead
        • Memory
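
A stand-in sketch that only mirrors the nesting above (the real classes in ./core/ carry the actual logic):

class Memory(object): pass
class WriteHead(object): pass
class ReadHead(object): pass
class Controller(object): pass
class Env(object): pass

class Accessor(object):
    def __init__(self):
        self.write_head, self.read_head, self.memory = WriteHead(), ReadHead(), Memory()

class Circuit(object):
    def __init__(self):
        self.controller, self.accessor = Controller(), Accessor()

class Agent(object):
    def __init__(self):
        self.env, self.circuit = Env(), Circuit()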

The part where NTM & DNC differ is the Accessor: in the code, NTM uses the StaticAccessor (maybe not the most appropriate name, but we use it to keep the code consistent) and DNC uses the DynamicAccessor. Both Accessor classes use _content_focus() and _location_focus() (the latter may not be an appropriate name for DNC, but again we keep it for consistency). _content_focus() is the same for both classes, while _location_focus() for DNC is much more involved, since it additionally uses dynamic allocation for writes and the temporal link matrix for reads. These focus (or attention) mechanisms are implemented in the Head classes; each focus outputs a weight vector for every head (write/read), and those weight vectors are then used in _access() to interact with the external memory.
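
As an illustration of the shared content focus, here is a minimal sketch of standard cosine-similarity content addressing (the function name, signature and shapes are assumptions, not the repo's actual code; the shapes follow the batch_size/mem_hei/mem_wid naming from the log above):

import torch
import torch.nn.functional as F

def content_focus(key_vb, beta_vb, memory_vb):
    # key_vb:    (batch, num_heads, mem_wid)  keys emitted by the heads
    # beta_vb:   (batch, num_heads, 1)        key strengths (sharpening factors)
    # memory_vb: (batch, mem_hei, mem_wid)    external memory
    # returns:   (batch, num_heads, mem_hei)  one weight vector per head
    key_norm = F.normalize(key_vb, dim=-1)
    mem_norm = F.normalize(memory_vb, dim=-1)
    similarity = torch.bmm(key_norm, mem_norm.transpose(1, 2))
    return F.softmax(beta_vb * similarity, dim=-1)

# e.g. 16(batch_size) x 4(read heads) x 16(mem_wid) keys against a 16 x 16 x 16 memory:
w_vb = content_focus(torch.randn(16, 4, 16), torch.ones(16, 4, 1), torch.randn(16, 16, 16))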

A side note:

The structure of Env might look strange, as this class was originally designed for reinforcement learning settings (as in pytorch-rl); here we use it to provide datasets for supervised learning, so reward, action and terminal are always left blank in this repo.


Repos we referred to during the development of this repo:


The following paper might be interesting to take a look at :)

Neural SLAM: We present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments. To achieve this, we embed procedures mimicking that of traditional Simultaneous Localization and Mapping (SLAM) into the soft attention based addressing of external memory architectures, in which the external memory acts as an internal representation of the environment. This structure encourages the evolution of SLAM-like behaviors inside a completely differentiable deep neural network. We show that this approach can help reinforcement learning agents to successfully explore new environments where long-term memory is essential. We validate our approach in both challenging grid-world environments and preliminary Gazebo experiments. A video of our experiments can be found at: \url{https://goo.gl/RfiSxo}.

@article{zhang2017neural,
  title={Neural SLAM},
  author={Zhang, Jingwei and Tai, Lei and Boedecker, Joschka and Burgard, Wolfram and Liu, Ming},
  journal={arXiv preprint arXiv:1706.09520},
  year={2017}
}


Citation

If you find this library useful and would like to cite it, the following would be appropriate:

@misc{pytorch-dnc,
  author = {Zhang, Jingwei},
  title = {jingweiz/pytorch-dnc},
  url = {https://github.com/jingweiz/pytorch-dnc},
  year = {2017}
}