Official repository for "PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation"

Overview

Official repository for the paper:

Xinyu Hua and Lu Wang: PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation

If you find our work useful, please cite:

@inproceedings{hua-wang-2020-pair,
    title = "PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation",
    author = "Hua, Xinyu  and
      Wang, Lu",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}

Requirements

  • Python 3.7
  • PyTorch 1.4.0
  • PyTorchLightning 0.9.0
  • transformers 3.3.0
  • numpy
  • tqdm
  • pycorenlp (for preprocessing nytimes data)
  • nltk (for preprocessing nytimes data)
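
A minimal environment setup sketch, assuming pip under Python 3.7; the package names below are the standard PyPI distributions, and pycorenlp/nltk are only needed for preprocessing the NYT data:

pip install torch==1.4.0 pytorch-lightning==0.9.0 transformers==3.3.0 numpy tqdm pycorenlp nltk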

Data

We release the data sets at the following link (1.2G uncompressed). Please download and uncompress the file, and put it under the ./data directory. For the opinion and news domains, The New York Times Annotated Corpus is licensed by the LDC, so we only provide the ids for the train/dev/test splits. Please follow the instructions to generate the dataset.
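
A minimal sketch of placing the data, assuming the download is a gzipped tarball; the archive name below is a placeholder for whatever the link provides, and the target may need adjusting if the archive already contains a top-level data/ folder:

mkdir -p data
# hypothetical archive name; substitute the file actually downloaded from the link above
tar -xzf pair-data.tar.gz -C data/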

Text Planning

To train a BERT planner:

cd planning
python train.py \
    --data-path=../data/ \
    --domain=[arggen,opinion,news] \
    --exp-name=demo \
    --save-interval=1 \
    --max-epoch=30 \
    --lr=5e-4 \
    --warmup-updates=5000 \
    --train-set=train \
    --valid-set=dev \
    --tensorboard-logdir=tboard/ \
    --predict-keyphrase-offset \
    --max-samples=32 \
    [--quiet]

Here --save-interval controls how frequently checkpoints are saved, --max-samples sets the maximum number of samples per batch, and the optional --quiet flag suppresses printing of intermediate information.

The checkpoints will be saved to checkpoints/planning/[domain]/[exp-name]. TensorBoard logs will be available under planning/tboard/.
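
To monitor training, one option is to point TensorBoard at that directory (run from the repository root; assumes the tensorboard command-line tool is installed):

tensorboard --logdir planning/tboard/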

To run inference with a trained model using greedy decoding:

cd planning
python decode.py \
    --data-path=../data/ \
    --domain=arggen \
    --test-set=test \
    --max-samples=32 \
    --predict-keyphrase-offset \
    --exp-name=demo \
    [--quiet]

The results will be saved to planning/output/.

Iterative Refinement

We provide implementations for four different setups:

  • Seq2seq: prompt -> tgt
  • KPSeq2seq: prompt + kp-set -> tgt
  • PAIR-light: prompt + kp-plan + masks -> tgt
  • PAIR-full: prompt + kp-plan + template -> tgt

To train a model:

cd refinement
python train.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo \
    --tensorboard-dir=demo \
    [--quiet]
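
As a concrete illustration, the command below fills in the bracketed options for the argument generation domain with the PAIR-full setup; any other documented domain/setup combination works the same way:

python train.py \
    --domain=arggen \
    --setup=pair-full \
    --train-set=train \
    --valid-set=dev \
    --train-batch-size=10 \
    --valid-batch-size=5 \
    --num-train-epochs=20 \
    --ckpt-dir=../checkpoints/arggen/pair-full/demo \
    --tensorboard-dir=demo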

To run iterative refinement:

cd refinement
python generate.py \
    --domain=[arggen,opinion,news] \
    --setup=[seq2seq,kpseq2seq,pair-light,pair-full] \
    --test-set=test \
    --output-name=test_demo \
    --enforce-template-strategy=flexible \
    --do-sampling \
    --sampling-topk=100 \
    --sampling-topp=0.9 \
    --sample-times=3 \
    --ckpt-dir=../checkpoints/[domain]/[setup]/demo
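
For example, continuing the illustrative arggen / PAIR-full choice from the training example above and pointing --ckpt-dir at that run's checkpoints:

python generate.py \
    --domain=arggen \
    --setup=pair-full \
    --test-set=test \
    --output-name=test_demo \
    --enforce-template-strategy=flexible \
    --do-sampling \
    --sampling-topk=100 \
    --sampling-topp=0.9 \
    --sample-times=3 \
    --ckpt-dir=../checkpoints/arggen/pair-full/demo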

Contact

Xinyu Hua (hua.x [at] northeastern.edu)

License

See the LICENSE file for details.
