SILG

This repository contains source code for the Situated Interactive Language Grounding (SILG) benchmark. If you find this work helpful, please consider citing it:

@inproceedings{zhong2021silg,
  title={{SILG}: The Multi-environment Symbolic Interactive Language Grounding Benchmark},
  author={Victor Zhong and Austin W. Hanjie and Karthik Narasimhan and Luke Zettlemoyer},
  booktitle={NeurIPS},
  year={2021}
}

Please also consider citing the individual tasks included in SILG. They are RTFM, Messenger, the NetHack Learning Environment, ALFWorld, and Touchdown.

Asciinema recordings of each environment: RTFM, Messenger, SILGNethack, ALFWorld, and SILGSymTouchdown.

How to install

You have to install the individual environments in order for SILG to work. Please refer to each environment's GitHub repository for its installation instructions.

Our Dockerfile also provides an example of how to install the environments on Ubuntu. You can also try our install_envs.sh script, which has only been tested on Ubuntu and macOS:

bash install_envs.sh

Once you have installed the individual environments, install SILG as follows:

pip install -r requirements.txt
pip install -e .

Some environments require (potentially large) data files. Please download these via:

bash download_env_data.sh  # if you do not want to use VisTouchdown, feel free to comment out its very large feature file

As part of this download, we symlink a ./cache directory to ./mycache. SILG environments will pull data files from this directory. If you are on NFS, you might want to move mycache to local disk and then relink the cache directory to avoid hitting NFS.

Docker

We provide a Docker container for this project. You can build the Docker image via docker build -t vzhong/silg . -f docker/Dockerfile. Alternatively, you can pull my build via docker pull vzhong/silg. The image contains the environments as well as SILG, but does not contain the large data download. You will still have to download the environment data and then mount the cache folder into the container. You may need to pass --platform linux/amd64 to Docker if you are running on an M1 Mac.

Because some of the environments require that you install them before downloading their data files, you should run the download from the Docker container as well. You can do:

docker run --rm --user "$(id -u):$(id -g)" -v $PWD/download_env_data.sh:/opt/silg/download_env_data.sh -v $PWD/mycache:/opt/silg/cache vzhong/silg bash download_env_data.sh

Once you have downloaded the environment data, you can use the container by doing something like

docker run --rm --user "$(id -u):$(id -g)" -it -v $PWD/mycache:/opt/silg/cache vzhong/silg /bin/bash

Visualizing environments

We provide a script to play SILG environments in the terminal. You can access it via

silg_play --env silg:rtfm_train_s1-v0  # use -h to see options

# docker variant
docker run --rm -it -v $PWD/mycache:/opt/silg/cache vzhong/silg silg_play --env silg:rtfm_train_s1-v0

These recordings are shown at the start of this document and are created using asciinema.
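
Besides playing in the terminal, you can interact with the environments programmatically. The sketch below is only an illustration: it assumes the SILG wrappers expose the standard Gym reset/step interface and a discrete action_space, and reuses the environment id from the silg_play example above.

# Minimal sketch of programmatic interaction, assuming the standard Gym
# reset/step interface; substitute a trained policy for the random actions.
import gym

# The "silg:" prefix tells Gym to import the silg module before looking up the id.
env = gym.make("silg:rtfm_train_s1-v0")
obs = env.reset()

done = False
while not done:
    action = env.action_space.sample()   # random action, for illustration only
    obs, reward, done, info = env.step(action)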

How to run experiments

The entry point for experiments is run_exp.py. We provide a Slurm script, launch.py, to launch experiments. These scripts can also run jobs locally (i.e. without Slurm). For example, to run RTFM:

python launch.py --local --envs rtfm

You can also log to WandB with the --wandb option. For more options, use the -h flag.

How to add a new environment

First, create a wrapper class in silg/envs/<your_env>.py. This wrapper will wrap the real environment and provide the APIs used by the baseline models and the training script. silg/envs/rtfm.py contains an example of how to do this for RTFM. Once you have made the wrapper, don't forget to include its file in silg/envs/__init__.py.
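
For illustration, if your wrapper lived in a hypothetical silg/envs/my_env.py, the corresponding addition to silg/envs/__init__.py might look like the sketch below; the existing entries in that file (e.g. for rtfm) are the authoritative pattern to follow.

# silg/envs/__init__.py (hypothetical excerpt): import the new wrapper module so it
# is picked up alongside the existing environments. Mirror how rtfm and the other
# environments are included and registered in this file.
from silg.envs import rtfm      # existing wrapper module
from silg.envs import my_env    # hypothetical new wrapper module, silg/envs/my_env.py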

The wrapper class must subclass silg.envs.base.SILGEnv and implement:

# return the list of text fields in the observation space
def get_text_fields(self):
    ...

# return max number of actions
def get_max_actions(self):
    ...

# return observation space
def get_observation_space(self):
    ...

# resets the environment
def my_reset(self):
    ...

# take a step in the environment
def my_step(self, action):
    ...

Additionally, you may want to implement rendering functions such as render_grid, parse_user_action, and get_user_actions so that your environment can be played with silg_play.

Note: there is an implementation detail right now in that the Torchbeast code considers a "win" to be equivalent to the environment returning a reward >0.8. We hope to change this in the future (likely by adding another tensor field denoting the win state), but please keep this in mind when implementing your environment. You likely want to keep the reward between -1 and +1, with high rewards >0.8 reserved for winning, if you would like to use the training code as-is.
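
Putting these pieces together, the skeleton below sketches what a hypothetical wrapper might look like. The constructor arguments of SILGEnv, the exact observation format, and the underlying environment are placeholders and not part of SILG; silg/envs/rtfm.py is the authoritative example of the return formats the baselines expect.

# Hypothetical wrapper skeleton. Constructor arguments, observation format, and
# the underlying environment are placeholders; see silg/envs/rtfm.py for a real,
# complete implementation.
from silg.envs.base import SILGEnv


class MyEnv(SILGEnv):
    """Wraps a hypothetical underlying environment for use with SILG."""

    def __init__(self, *args, **kwargs):
        # Construct the real environment that this class wraps.
        self.underlying_env = ...  # placeholder
        super().__init__(*args, **kwargs)

    def get_text_fields(self):
        # Names of the text fields in the observation (e.g. the instruction text).
        return ["text"]

    def get_max_actions(self):
        # Maximum number of discrete actions the agent can take.
        return 4

    def get_observation_space(self):
        # Describe the observation tensors; mirror the format used in silg/envs/rtfm.py.
        return {}

    def my_reset(self):
        # Reset the underlying environment and return the initial observation.
        return self.underlying_env.reset()

    def my_step(self, action):
        # Step the underlying environment. Keep rewards in [-1, +1] and reserve
        # rewards > 0.8 for wins, per the note above about the Torchbeast code.
        obs, reward, done, info = self.underlying_env.step(action)
        return obs, reward, done, info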

Changelog

Version 1.0

Initial release.
