Unicorn on Rainbow

Neural models of common sense.

This repository is for the paper: Unicorn on Rainbow: A Universal Commonsense Reasoning Model on a New Multitask Benchmark. Unicorn on Rainbow introduces a new evaluation, the cost equivalent curve, which compares models in terms of their cost-benefit trade-offs. Using cost equivalent curves, we conduct a large-scale empirical study of intermediate-task transfer for common sense on a new benchmark collection of commonsense reasoning datasets, Rainbow. With findings from this study, we create a new state-of-the-art model for commonsense reasoning: Unicorn.

Jump to a section of the readme to accomplish different goals:

  • Rainbow: Read about and download data for Rainbow, our new commonsense reasoning benchmark.
  • Unicorn: Get up and running with Unicorn, our state-of-the-art commonsense reasoning model.
  • Cost Equivalent Curves: Learn how to generate cost equivalent curves for your own predictions.
  • Experimental Results: Download and analyze the results from our hundreds of experiments.
  • Setup: Get set up to run the code in this repository.
  • Quickstart: Run the code in this repo.
  • Citation: Cite the Unicorn on Rainbow paper.
  • Contact: Reach out with questions or comments.

Note: This repository is intended for research. There is no intention for ongoing maintenance.

Rainbow

Rainbow brings together six pre-existing commonsense reasoning benchmarks: aNLI, Cosmos QA, HellaSWAG, Physical IQa, Social IQa, and WinoGrande. These commonsense reasoning benchmarks span both social and physical common sense.

Note: Rainbow pins these datasets to specific versions. To make sure you're using the correct data, please download those versions below.

Getting the Data

Rainbow preprocesses all of the datasets into a text-to-text format for ease of modeling.
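To give a feel for the format, here's a hypothetical sketch of a preprocessed example. The task prefix and field markers below are illustrative assumptions, not necessarily the repository's exact serialization:

# A hypothetical text-to-text example (serialization details are assumed):
# the input string encodes the task, context, and answer choices, and the
# target string names the correct choice.
example = {
    "inputs": "[socialiqa]: <context>...</context> <question>...</question> "
              "<answerA>...</answerA> <answerB>...</answerB> <answerC>...</answerC>",
    "targets": "<answerB>",
}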

Alternatively, you can download the individual tasks and preprocess them yourself.

All checksums are sha256. To compute the checksum with openssl, run:

$ openssl sha256 $FILE_PATH
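If you'd rather verify checksums from Python, here's a minimal equivalent using the standard library's hashlib (a convenience sketch, not part of this repository):

import hashlib
import sys

def sha256sum(path, chunk_size=1 << 20):
    """Compute a file's sha256 hex digest, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256sum(sys.argv[1]))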

Submitting to the Leaderboard

If you develop a model for Rainbow, please feel free to submit to the leaderboard!

Unicorn

Unicorn (a UNIversal COmmonsense Reasoning Model) solves commonsense reasoning tasks in the text-to-text format. In principle, Unicorn may be trained on any NLP task: simply feed it text input and ask it to predict text output. Unicorn derives from T5, supercharging it for commonsense reasoning tasks and achieving state-of-the-art performance across a number of popular benchmarks, including Rainbow and CommonsenseQA.

To try Unicorn on your own data, first download the weights, then fine-tune and evaluate the model.

Downloading the Weights

To run Unicorn, you'll first need to download its weights, either to a local directory or to a Google Cloud Storage path. Using gsutil:

gsutil cp -r \
  gs://ai2-mosaic-public/projects/rainbow/v1.0/unicorns/lr-2e-3_batch-size-32 \
  $DST

Where $DST is the destination directory.
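Once downloaded, you can pass the checkpoint to bin/fine-tune.py via --pretrained-model (see Quickstart below). A sketch, where $MIXTURE and $RESULTS_DIR are placeholders for your task mixture and output location:

./bin/fine-tune.py \
  --pretrained-model $DST/lr-2e-3_batch-size-32 \
  $MIXTURE $RESULTS_DIR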

Reproducing our Results

In Unicorn on Rainbow, we trained multiple Unicorns, each first multitasked on Rainbow with different hyper-parameters. The checkpoint we've made available is the one that most often performed best. If you need the other checkpoints, please email the authors.

Cost Equivalent Curves

Cost equivalent curves compare the cost-benefit trade-offs that different techniques offer. In particular, a cost equivalent curve plots the equivalent costs of the baseline and the new technique, i.e. the costs at which the two achieve the same performance. For example, if cost is measured as the number of training examples and performance as accuracy, then the cost equivalent curve shows how many examples the baseline needs to match the new technique's accuracy.

The plot_cost_equivalent_curves function in bin/create-multi-experiment-figures.py offers example code for how to create cost equivalent curves in Python.
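For intuition, here's a minimal standalone sketch of the underlying computation (ours, not the repository's implementation), assuming performance increases monotonically with cost:

import numpy as np

def cost_equivalent_curve(baseline_costs, baseline_perfs, new_costs, new_perfs):
    """For each cost on the new technique's learning curve, return the
    baseline cost achieving the same performance, by inverting the
    baseline curve with linear interpolation.

    Assumes performances are increasing; outside the baseline's observed
    range, np.interp clips to the endpoints.
    """
    equivalent_baseline_costs = np.interp(new_perfs, baseline_perfs, baseline_costs)
    return np.asarray(new_costs), equivalent_baseline_costs

# Illustrative (made-up) learning curves: the new technique matches the
# baseline's 4,000-example accuracy with only 2,000 examples.
new_cost, equiv_cost = cost_equivalent_curve(
    baseline_costs=[1000, 2000, 4000, 8000],
    baseline_perfs=[0.60, 0.65, 0.70, 0.75],
    new_costs=[1000, 2000, 4000, 8000],
    new_perfs=[0.65, 0.70, 0.75, 0.78],
)
# Plotting new_cost (x) against equiv_cost (y) gives the cost equivalent curve.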

Stay Tuned! We'll soon be releasing an easy-to-use, standalone package for creating cost equivalent curves. Check back here for it in the future.

Experimental Results

For Unicorn on Rainbow, we ran hundreds of experiments. We've made available the results from all those experiments in order to facilitate future research. For example, you may want those thousands of training curves to study hyper-parameter tuning or how loss evolves over training.

Among other things, you'll find:

  • predictions on dev from every checkpoint saved during training
  • training curves (training step vs. loss)
  • learning curves (dataset size vs. accuracy)
  • hyper-parameter tuning
  • all tables and figures from the paper
  • and more...

Our hope is that researchers can reuse this large collection of experiments to derive new practical and research insights.

Downloading the Results

Five collections of results are available, the same artifacts produced by the analysis pipeline below:

  • rainbow-predictions.tar.gz
  • rainbow-experiments.tar.gz
  • rainbow-results.tar.gz
  • rainbow-figures.tar.gz
  • rainbow-latex-tables.tar.gz

All checksums are sha256. To compute the checksum with openssl, run:

$ openssl sha256 $FILE_PATH

NOTE: The learning-curve experiments varied the number of training examples up to 16,000; however, CommonsenseQA has fewer than 16,000 training examples. Thus, for CommonsenseQA, sizes above 9,741 (its full training set) are truncated to that size. This subtlety is handled by the data processing pipeline when the experiments are processed into the results tables, so it only affects rainbow-predictions.tar.gz and rainbow-experiments.tar.gz.
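Concretely, the truncation amounts to the following (a trivial sketch; the function name is ours, not the pipeline's):

def commonsenseqa_effective_size(requested_size):
    # CommonsenseQA has only 9,741 training examples, so larger
    # learning-curve sizes collapse to the full training set.
    return min(requested_size, 9741)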

Replicating Our Analysis Pipeline

All the scripts to replicate our analysis pipeline reside in bin/. In order to run the scripts, you'll need to get set up for development.

The overall pipeline is as follows:

+----------------------------+
| rainbow-predictions.tar.gz |
+----------------------------+
              |
              | (bin/organize-experiments)
              V
+----------------------------+
| rainbow-experiments.tar.gz |
+----------------------------+
              |
              | (bin/generate-tables.py)
              V
  +------------------------+
  | rainbow-results.tar.gz |
  +------------------------+
         |         |
         |         | (bin/generate-latex-tables.py)
         |         V
         |     +-----------------------------+
         |     | rainbow-latex-tables.tar.gz |
         |     +-----------------------------+
         |
         | (bin/create-single-experiment-figures.py)
         | (bin/create-multi-experiment-figures.py)
         V
+------------------------+
| rainbow-figures.tar.gz |
+------------------------+

To run the pipeline, start by downloading rainbow-predictions.tar.gz (see Downloading the Results above).

Use bin/organize-experiments to produce rainbow-experiments.tar.gz:

$ tar -xf rainbow-predictions.tar.gz
$ bin/organize-experiments rainbow-predictions $DST

Where $DST is the desired destination directory (for example the current directory, .).

Use bin/generate-tables.py to produce rainbow-results.tar.gz:

$ bin/generate-tables.py rainbow-experiments rainbow-results

Use bin/create-single-experiment-figures.py and bin/create-multi-experiment-figures.py to create rainbow-figures.tar.gz:

$ bin/create-single-experiment-figures.py rainbow-results rainbow-figures/single-experiment
$ bin/create-multi-experiment-figures.py rainbow-results rainbow-figures/multi-experiment

And use bin/generate-latex-tables.py to produce rainbow-latex-tables.tar.gz:

$ bin/generate-latex-tables.py rainbow-results rainbow-latex-tables

All scripts except bin/organize-experiments are also self-documenting, so pass --help to any of them for more information.

Setup

This project requires Python 3.6 or above.

First, install the project's dependencies:

./bin/install

Next, make sure you have the following environment variables set:

  1. RAINBOW_DATASETS_DIR: The directory for storing all relevant datasets.
  2. RAINBOW_PREPROCESSED_DATASETS_DIR: The directory for storing the preprocessed dataset split files.
  3. RAINBOW_TFDS_DATASETS_DIR: The directory for storing the TFDS (TensorFlow Datasets) datasets.

Training requires TPUs, so for training all directories should point to Google Cloud Storage prefixes. Additionally, you'll need the following environment variables:

  1. PROJECT: Your Google Cloud project's ID.
  2. ZONE: Your Google Cloud virtual machine's zone.
  3. TPU_NAME: Your TPU's name.
  4. TPU_TOPOLOGY: Your TPU's topology.
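For example, a TPU setup might export something like the following (the bucket, project, zone, and TPU values are placeholders, not defaults from this repository):

$ export RAINBOW_DATASETS_DIR=gs://your-bucket/rainbow/datasets
$ export RAINBOW_PREPROCESSED_DATASETS_DIR=gs://your-bucket/rainbow/preprocessed
$ export RAINBOW_TFDS_DATASETS_DIR=gs://your-bucket/rainbow/tfds
$ export PROJECT=your-gcp-project
$ export ZONE=us-central1-a
$ export TPU_NAME=your-tpu
$ export TPU_TOPOLOGY=v3-8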

Then, download and prepare all the datasets for text-to-text modeling:

$ ./bin/prepare.py --help
Usage: prepare.py [OPTIONS]

  Prepare all relevant datasets for text-to-text modeling.

  Download to and read the datasets from --src, transform them into CSVs
  suitable for text-to-text models, then write the results to --dst. Google
  storage paths are supported.

Options:
  --src TEXT        The directory to which to download all the relevant
                    datasets. Defaults to the RAINBOW_DATASETS_DIR environment
                    variable.  [required]
  --dst TEXT        The directory to which to write the preprocessed dataset
                    files. Defaults to the RAINBOW_PREPROCESSED_DATASETS_DIR
                    environment variable.  [required]
  --force-download  Force downloads of all the datasets, otherwise only
                    missing datasets will be downloaded.
  --help            Show this message and exit.
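For example, with the environment variables above set, the defaults suffice:

$ ./bin/prepare.py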

Finally, verify your installation:

./bin/verify

Quickstart

Before following this section, make sure you've done the Setup.

Fine-tuning

To fine-tune the model, use bin/fine-tune.py:

$ ./bin/fine-tune.py --help
Usage: fine-tune.py [OPTIONS] MIXTURE RESULTS_DIR

  Fine-tune the model on MIXTURE, writing results to RESULTS_DIR.

Options:
  --pretrained-model TEXT         The path to or name of the pretrained model.
                                  Defaults to 3B.
  --n-steps INTEGER               The number of gradient updates. Defaults to
                                  25,000.
  --learning-rate FLOAT           The learning rate to use for training.
                                  Defaults to 3e-3.
  --batch-size INTEGER            The batch size to use for training. For
                                  efficient training on the TPU, choose a
                                  multiple of either 8 or 128. Defaults to 16.
  --model-parallelism INTEGER     The degree of model parallelism to use.
                                  Defaults to 8.
  --save-checkpoints-steps INTEGER
                                  The number of steps to take before saving a
                                  checkpoint. Defaults to 5000.
  --n-checkpoints-to-keep INTEGER
                                  The number of checkpoints to keep during
                                  fine-tuning. Defaults to 4.
  --tpu-name TEXT                 The name of the TPU. Defaults to the
                                  TPU_NAME environment variable.  [required]
  --tpu-topology TEXT             The topology of the TPU. Defaults to the
                                  TPU_TOPOLOGY environment variable.
                                  [required]
  --help                          Show this message and exit.
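For example, a fine-tuning run might look like the following, where $MIXTURE and $RESULTS_DIR are placeholders for a registered task mixture and an output path (typically a Google Cloud Storage prefix); the flag values shown are the documented defaults:

./bin/fine-tune.py \
  --n-steps 25000 \
  --learning-rate 3e-3 \
  $MIXTURE $RESULTS_DIR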

Evaluation

To evaluate the model, use bin/evaluate.py:

$ ./bin/evaluate.py --help
Usage: evaluate.py [OPTIONS] MIXTURE RESULTS_DIR

  Evaluate the model located at RESULTS_DIR on MIXTURE.

Options:
  --batch-size INTEGER         The batch size to use for prediction. For
                               efficient prediction on the TPU, choose a
                               multiple of either 8 or 128. Defaults to 64.
  --model-parallelism INTEGER  The degree of model parallelism to use.
                               Defaults to 8.
  --tpu-name TEXT              The name of the TPU. Defaults to the TPU_NAME
                               environment variable.  [required]
  --tpu-topology TEXT          The topology of the TPU. Defaults to the
                               TPU_TOPOLOGY environment variable.  [required]
  --help                       Show this message and exit.
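For example, to evaluate the fine-tuned checkpoints from the run above:

$ ./bin/evaluate.py $MIXTURE $RESULTS_DIR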

Tests and Code Quality

The code is formatted with black. You can run the formatter using the bin/format script:

$ ./bin/format

To run code quality checks, use the bin/verify script:

$ ./bin/verify

For fine-grained control of which tests to run, use pytest directly:

$ pytest

You can also skip slower tests by passing the --skip-slow (-s) flag:

$ pytest --skip-slow

Citation

Unicorn on Rainbow is an AAAI 2021 paper. Please check back here soon for the BibTeX citation.

Contact

For public, non-sensitive questions and concerns, please file an issue on this repository.

For private or sensitive inquiries, contact the Mosaic team through the allenai.org website.
