An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"

RASP

Setup

Mac or Linux

Run ./setup.sh. It will create a python3 virtual environment and install the dependencies for RASP. It will also try to install graphviz (the non-python part) and rlwrap on your machine. If these fail, you will still be able to use RASP; however, the interface will not be as nice without rlwrap, and drawing s-op computation flows will not be possible without graphviz. After having set up, you can run ./rasp.sh to start the RASP read-evaluate-print loop (REPL).

Windows

Follow the instructions given in windows instructions.txt

The REPL

After having set up, on Mac or Linux you can run ./rasp.sh to start the RASP REPL. Otherwise, run python3 RASP_support/REPL.py. Use Ctrl+C to quit a partially entered command, and Ctrl+D to exit the REPL.

Initial Environment

RASP starts with the base s-ops: tokens, indices, and length. It also has the base functions select, aggregate, and selector_width as described in the paper, a selector full_s (created through select(1,1,==)) that attends to all positions in a "full" attention pattern, and several other library functions (check out RASP_support/rasplib.rasp to see them).
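
For instance, querying two of the base s-ops right away should look something like this (a sketch; the exact printout formatting may differ slightly):

>> indices;
     s-op: indices
     Example: indices("hello") = [0, 1, 2, 3, 4] (ints)
>> length;
     s-op: length
     Example: length("hello") = [5]*5 (ints)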

Additionally, the REPL begins with a base example, "hello", on which it shows the output for each created s-op or selector. This example can be changed, and toggled on and off, through commands to the REPL.

All RASP commands end with a semicolon. Commands to the REPL -- such as changing the base example -- do not.
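
For instance, changing (or here, re-stating) the base example is a REPL command and takes no semicolon, while evaluating an s-op expression is a RASP command and does (output shown for illustration):

>> set example "hello"
>> indices+1;
     s-op: out
     Example: out("hello") = [1, 2, 3, 4, 5] (ints)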

Start by following along with the examples -- they are kept at the bottom of this readme.

Note on input types:

RASP expects inputs in one of four forms: strings, integers, floats, or booleans, handled respectively by tokens_str, tokens_int, tokens_float, and tokens_bool. Initially, RASP loads with tokens set to tokens_str; this can be changed by assignment, e.g.: tokens=tokens_int;. When changing the input type, you will also want to change the base example, e.g.: set example [0,1,2].
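
For example, switching to integer inputs might look like this (a sketch; the printed output, and in particular how a list-typed base example is displayed, is illustrative):

>> tokens=tokens_int;
>> set example [0,1,2]
>> tokens;
     s-op: tokens
     Example: tokens([0, 1, 2]) = [0, 1, 2] (ints)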

Note that assignments do not retroactively change the computation trees of existing s-ops!
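
For instance, in the following sketch (outputs illustrative), tens is built from the original value of base, so the later reassignment of base leaves it untouched:

>> base=indices;
>> tens=10*base;
>> base=base+1;
>> tens;
     s-op: tens
     Example: tens("hello") = [0, 10, 20, 30, 40] (ints)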

Writing and Loading RASP files

To keep RASP code in files and load it into the REPL (or into other files), save the files with .rasp as the extension and use the 'load' command without the extension. For example, you can load the examples file paper_examples.rasp in this repository into the REPL as follows:

>> load "paper_examples";

This will make (almost) all values in the file available in the loading environment (whether the REPL, or a different .rasp file): values whose names begin with an underscore remain private to the file they are written in. Loading files in the REPL will also print a list of all loaded values.
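
As a sketch of the privacy rule, a hypothetical file my_lib.rasp containing

_prevs = select(indices, indices, <);
num_prevs = selector_width(_prevs);

would, after load "my_lib";, expose num_prevs to the loading environment while keeping _prevs hidden.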

Syntax Highlighting

For the Sublime Text editor, you can get syntax highlighting for .rasp files as follows:

  1. Install Package Control for Sublime Text (you might already have it: look in the menu [Sublime Text]->[Preferences] and see if it's there. If not, follow the instructions at https://packagecontrol.io/installation).
  2. Install the 'PackageDev' package through Package Control ([Sublime Text]->[Preferences]->[Package Control], then type [install package], then [packagedev])
  3. After installing PackageDev, create a new syntax definition file through [Tools]->[Packages]->[Package Development]->[New Syntax Definition].
  4. Copy the contents of RASP_support/RASP.sublime-syntax into the new syntax definition file, and save it as RASP.sublime-syntax.

[The above basically follows the instructions in http://ilkinulas.github.io/programming/2016/02/05/sublime-text-syntax-highlighting.html , and then copies in the contents of the provided RASP.sublime-syntax file.]

Examples

Play around in the REPL!

Try simple elementwise manipulations of s-ops:

>> threexindices = 3 * indices;
     s-op: threexindices
 	 Example: threexindices("hello") = [0, 3, 6, 9, 12] (ints)
>> indices+indices;
     s-op: out
 	 Example: out("hello") = [0, 2, 4, 6, 8] (ints)

Change the base example, and create a selector that focuses each position on all positions before it:

>> set example "hey"
>> prevs=select(indices,indices,<);
     selector: prevs
 	 Example:
 			     h e y
 			 h |      
 			 e | 1    
 			 y | 1 1  

Check the output of an s-op on your new base example:

>> threexindices;
     s-op: threexindices
 	 Example: threexindices("hey") = [0, 3, 6] (ints)

Or on specific inputs:

>> threexindices(["hi","there"]);
	 =  [0, 3] (ints)
>> threexindices("hiya");
	 =  [0, 3, 6, 9] (ints)

Aggregate with the full selection pattern (loaded automatically with the REPL) to compute the proportion of a letter in your input:

>> full_s;
     selector: full_s
 	 Example:
 			     h e y
 			 h | 1 1 1
 			 e | 1 1 1
 			 y | 1 1 1
>> my_frac=aggregate(full_s,indicator(tokens=="e"));
     s-op: my_frac
 	 Example: my_frac("hey") = [0.333]*3 (floats)

Note: when an s-op's output is identical in all positions, RASP simply prints the output of one position, followed by "*X" (where X is the sequence length) to mark the repetition.

Check if a letter is in your input at all:

>> "e" in tokens;
     s-op: out
 	 Example: out("hey") = [T]*3 (bools)

Alternatively, check in an elementwise fashion whether each of your input tokens belongs to some group:

>> vowels = ["a","e","i","o","u"];
     list: vowels = ['a', 'e', 'i', 'o', 'u']
>> tokens in vowels;
     s-op: out
 	 Example: out("hey") = [F, T, F] (bools)

Draw the computation flow for an s-op you have created, on an input of your choice (this will create a pdf in the subfolder comp_flows of the current directory):

>> draw(my_frac,"abcdeeee");
	 =  [0.5]*8 (floats)

Or simply on the base example:

>> draw(my_frac);
	 =  [0.333]*3 (floats)

If they bother you, turn the examples off, and bring them back when you need them:

>> examples off
>> indices;
     s-op: indices
>> full_s;
     selector: full_s
>> examples on
>> indices;
     s-op: indices
 	 Example: indices("hey") = [0, 1, 2] (ints)

You can also do this selectively, turning only selector or s-op examples on and off, e.g.: selector examples off.
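
For instance (following the pattern above; output shown for illustration):

>> selector examples off
>> prevs;
     selector: prevs
>> indices;
     s-op: indices
     Example: indices("hey") = [0, 1, 2] (ints)
>> selector examples on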

Create a selector that focuses each position on all other positions containing the same token. But first, set the base example to "hello" for a better idea of what's happening:

>> set example "hello"
>> same_token=select(tokens,tokens,==);
     selector: same_token
 	 Example:
 			     h e l l o
 			 h | 1        
 			 e |   1      
 			 l |     1 1  
 			 l |     1 1  
 			 o |         1

Then, use selector_width to compute, for each position, how many other positions the selector same_token focuses it on. This effectively computes an in-place histogram over the input:

>> histogram=selector_width(same_token);
     s-op: histogram
 	 Example: histogram("hello") = [1, 1, 2, 2, 1] (ints)

For more complicated examples, check out paper_examples.rasp!

Experiments on Transformers

The transformers in the paper were trained, and their attention heatmaps visualised, using the code in this repository: https://github.com/tech-srl/RASP-exps
