Macaw

Multi-angle c(q)uestion answering

Introduction

Macaw (Multi-angle c(q)uestion answering) is a ready-to-use model capable of general question answering, showing robustness outside the domains it was trained on. It has been trained in "multi-angle" fashion, which means it can handle a flexible set of input and output "slots" (like question, answer, explanation).

Macaw was built on top of T5 and comes in different sizes: macaw-11b, macaw-3b, and macaw-large, as well as an answer-focused version featured on various leaderboards: macaw-answer-11b (see below).

Examples

Some suggestive examples from the Macaw (11B) model, for different angles:

  • (Q→A) Given a question, what's the answer?
    Q: James went camping in the woods, but forgot to bring a hammer to bang the tent pegs in. What else might he use?
    → A: rocks

  • (QM→A) Given a question and answer choices, what's the answer?
    Q: James went camping in the woods, but forgot to bring a hammer to bang the tent pegs in. What else might he use?
    M: (A) a leaf (B) a log (C) a worm
    → A: a log

  • (Q→AE) Given a question, what's the answer and an explanation?
    Q: Which force pulls objects to the ground?
    → A: gravity
    → E: Gravitational force causes objects that have mass to be pulled down on a planet.

  • (A→QE) Given an answer, what's a plausible question and explanation?
    A: elephant
    → Q: Which animal has the largest ears?
    → E: The ears of an elephant are the largest.

  • (C→QA) Given a context, what's a plausible question and answer?
    C: A car needs a battery to start.
    → Q: What is required for a car to start?
    → A: battery

For many more examples of the basic Q→A angle, see examples.md.

Usage examples

Macaw can easily be used in the Hugging Face transformers library, as shown here for the smallest model (the smallest model is not generally recommended, but it has a much smaller footprint). Given a question, we want to return an answer and suggested multiple-choice answer options.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")
# Output slots are listed bare ("$answer$", "$mcoptions$"); input slots carry a value ("$question$ = ...")
input_string = "$answer$ ; $mcoptions$ ; $question$ = What is the color of a cloudy sky?"
input_ids = tokenizer.encode(input_string, return_tensors="pt")
output = model.generate(input_ids, max_length=200)

>>> tokenizer.batch_decode(output, skip_special_tokens=True)
['$answer$ = gray ; $mcoptions$ = (A) blue (B) white (C) grey (D) white']

(Run pip install -r requirements.txt if any dependencies are missing.) Note there is no guarantee that the different slots are fully coherent, as in the gray/grey mismatch (and the duplicated "white" option) here; this is more pronounced for macaw-large than for the larger models.
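Other angles follow the same string format. As a minimal sketch reusing the tokenizer and model loaded above, here is the Q→AE angle; the $explanation$ marker is assumed to follow the same naming convention as the other slots listed in the table below, rather than being taken verbatim from the repo documentation.

# Sketch of the Q→AE angle, assuming "$explanation$" matches the slot-naming
# convention used by "$answer$" and "$mcoptions$".
input_string = "$answer$ ; $explanation$ ; $question$ = Which force pulls objects to the ground?"
input_ids = tokenizer.encode(input_string, return_tensors="pt")
output = model.generate(input_ids, max_length=200)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
# Expected shape of the output (actual text may differ, especially for macaw-large):
# ['$answer$ = ... ; $explanation$ = ...']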

The code in macaw/utils.py includes some convenience wrappers, such as load_model and run_macaw. Here are some examples loading the macaw-11b model onto two GPUs (around 48GB of total GPU memory is needed for the largest model to work):

from macaw.utils import load_model, run_macaw
model_dict = load_model("allenai/macaw-11b", cuda_devices=[0,1])
res1 = run_macaw("Q: Which force pulls objects to the ground?\nA\nE", model_dict)
# Alternate input syntax
res2 = run_macaw({"Q:":"Which force causes a compass needle to point north?", "A":""}, model_dict)
# Add sampling options for the output
res3 = run_macaw("Q: Which force pulls objects to the ground?\nA\nE", model_dict, {"do_sample": True, "temperature": 2.0})

>>> [print(res["output_slots_list"][0]) for res in [res1, res2, res3]]
{'answer': 'gravity', 'explanation': 'Gravitational force causes objects that have mass to be pulled down on a planet.'}
{'answer': 'magnetism'}
{'answer': 'gravitional force', 'explanation': 'Gravitational force causes objects that have mass to be pulled down on a planet.'}
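The raw-string input to run_macaw appears to use the one-letter slot names: input slots as "X: value" lines and bare letters for requested outputs. As a hedged sketch (the "M:" prefix for mcoptions is assumed by analogy with "Q:" for question, not taken from the repo docs), a multiple-choice question could be posed like this:

# Hedged sketch: QM→A angle via run_macaw, assuming "M:" is the prefix for
# the mcoptions slot (by analogy with "Q:" for question).
res4 = run_macaw(
    "Q: James went camping in the woods, but forgot to bring a hammer to bang the tent pegs in. What else might he use?\n"
    "M: (A) a leaf (B) a log (C) a worm\n"
    "A",
    model_dict)
print(res4["output_slots_list"][0])  # e.g. {'answer': 'a log'}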

For batch evaluation of instances at various angles, see macaw/batch_eval.py for pointers.

Supported slots

Here are the slots available in Macaw, generally applicable for both input and output:

Slot name        Description                              Example
question (Q)     Question text                            What is the color of a cloudy sky?
answer (A)       Answer text                              The sky is blue
mcoptions (M)    Multiple-choice answer options           (A) blue (B) white (C) grey
context (C)      Potentially relevant context (noisy IR)  The sky looks blue to us because...
explanation (E)  Sentences explaining the answer          A cloudy sky is usually gray in color...

An angle is a specific set of input/output slots; for instance, QM→AE is the task of producing an answer and explanation given a question and multiple-choice options. Macaw is trained on a wide variety of angles and handles unseen angles as well; one exception is that the context (C) only appears as an input slot in the training data.
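To make the angle-to-input mapping concrete, here is a minimal hypothetical helper (not part of the Macaw codebase, which has its own utilities in macaw/utils.py) that assembles the $slot$-style input string used in the transformers example above from a dict of input slots and a list of requested output slots:

def build_macaw_input(input_slots, output_slots):
    """Hypothetical helper: build a Macaw input string for a given angle.
    Output slots are listed bare, input slots carry "= value", as in the usage example above."""
    parts = [f"${slot}$" for slot in output_slots]
    parts += [f"${slot}$ = {value}" for slot, value in input_slots.items()]
    return " ; ".join(parts)

print(build_macaw_input({"question": "Which force pulls objects to the ground?"}, ["answer", "explanation"]))
# $answer$ ; $explanation$ ; $question$ = Which force pulls objects to the ground?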

The Challenge300 dataset of probing questions

The Challenge300 dataset of 300 diverse probing examples can be found in challenge300-probes-v1.jsonl. The basic Q→A output from Macaw (at different sizes), as well as outputs from GPT3, Jurassic-1 and alternate T5 models trained on NaturalQuestions, can be seen in examples.md.
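The probes are stored one JSON object per line, so they can be read with the standard json module. The field names inside each record are not assumed here; the sketch below simply loads the records and inspects the keys of the first one.

import json

with open("challenge300-probes-v1.jsonl") as f:
    probes = [json.loads(line) for line in f if line.strip()]

print(len(probes))        # expect 300 probing examples
print(sorted(probes[0]))  # list the available fields of the first record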

Demo

See DEMO.md for instructions and code to host an interactive version of Macaw.

Training data

Macaw was trained in two steps from the text-to-text transformer model T5:

  1. Multi-angle version of UnifiedQA, obtained by fine-tuning T5 on 7 datasets and their associated angles.

  2. Further fine-tuning of Multi-Angle UnifiedQA on multiple-choice and direct-answer elementary science questions, along with (up to 5) explanation sentences from WorldTreeV2:

    • ARC: QMC→AE, AQC→M, QMEC→A, QME→A, QE→A, QMC→A, QC→AE, QM→AE, QMAC→E, QMA→E
    • ARC-DA: QC→AE, Q→AE, QC→A, Q→A, QEC→A, QE→A, AE→Q, AC→Q, QA→E, AQC→E

A specialized answer-focused model, macaw-answer-11b (called "UnifiedQA + ARC MC/DA + IR" on the leaderboards for ARC, ARC-Easy, and ARC-DA), was trained on a smaller set of angles, not including explanations:

    • ARC: QMC→A, QAC→M, QC→A, QM→A, MAC→Q, AC→QM, M→QA
    • ARC-DA: QC→A, Q→A, AC→Q, C→QA

Available models

The Macaw models can be accessed from the Hugging Face model hub: allenai/macaw-11b, allenai/macaw-3b, allenai/macaw-large, and allenai/macaw-answer-11b.

For a sense of the degradation in performance for the smaller sizes, here are baseline scores on the ARC Challenge and ARC Easy multiple-choice development questions. Included are variants with and without IR context from a large science corpus (corresponding to angles QMC→A and QM→A respectively).

Model               ARC Challenge  ARC Challenge (no IR)  ARC Easy  ARC Easy (no IR)
Macaw (11B)         76.9           74.6                   91.2      84.9
Macaw-3B            68.2           67.9                   87.9      77.7
Macaw-large         57.2           50.5                   82.5      63.9
Macaw-answer (11B)  79.9           75.2                   90.5      85.8

Disclaimer

As Macaw is capable of generating free-form text, its output is not guaranteed to be free of offensive material, so appropriate caution is advised when using the model.

Citation

If you use Macaw in your work, please reference the related paper using

@article{Tafjord2021Macaw,
  title={General-Purpose Question-Answering with {M}acaw},
  author={Oyvind Tafjord and Peter Clark},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.02593}
}