Implementation of PersonaGPT Dialog Model

Overview

PersonaGPT

An open-domain conversational agent with many personalities

PersonaGPT is an open-domain conversational agent capable of decoding personalized and controlled responses based on user input. It is built on the pretrained DialoGPT-medium model, which follows the GPT-2 architecture. PersonaGPT is fine-tuned on the Persona-Chat dataset, with special tokens added to better distinguish between conversational history and personality traits in dyadic conversations. In addition, active learning was used to train the model to perform controlled decoding based on certain "action codes" (e.g., talk about work, ask about music interests).
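
As a concrete illustration, the vocabulary extension might look like the sketch below. The exact token strings are assumptions for illustration only; see the repo's tokenizer configuration for the authoritative set.

```python
# Minimal sketch of extending DialoGPT-medium with persona/action tokens.
# The token strings below are illustrative assumptions, not the repo's
# authoritative vocabulary.
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = GPT2LMHeadModel.from_pretrained("microsoft/DialoGPT-medium")

# Tokens separating persona facts, dialog turns, and action codes.
special_tokens = {"additional_special_tokens":
                  ["<|p1|>", "<|p2|>", "<|act|>", "<|sep|>", "<|start|>"]}
tokenizer.add_special_tokens(special_tokens)
model.resize_token_embeddings(len(tokenizer))  # account for the new tokens
```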


Personalized Dialog Generation

Also called persona modeling, personalized dialog generation takes as input a set of personality facts and dialog history to generate responses that are (ideally) sensible and diverse, yet consistent with past responses and the given persona profile.

Example

Persona Facts
1 i am 64 years old but feel quite young.
2 i try to eat healthy but limit mcdonalds to once a week.
3 i regret working as a doctor for the last 20 years.
4 my secret hobby is making self-help youtube videos.
5 i've been to spain many times and wish i could go again.
Human: yo whats good fam?
Bot: hello, how are you doing?
Human: not bad what you doing today?
Bot: eating a lot healthier now that i'm in my 60s
Human: i'm more of a fast food guy
Bot: me too but i'm very healthy so i try to avoid it
Human: what do you do for fun?
Bot: i make videos for fun and eat very healthy
Human: nice what kind of videos?
Bot: i love making videos about self help stuff
Human: i think i need some help
Bot: what kind of help would you like to get
Human: i'm 21 and feel lost in life
Bot: i can relate, i'm only 65
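
A minimal sketch of how such an input might be assembled for decoding, assuming a fine-tuned PersonaGPT checkpoint and the illustrative special tokens from the setup sketch above:

```python
# Sketch of personalized decoding: condition generation on persona facts
# plus dialog history. The input layout is an assumption carried over
# from the setup sketch above.
import torch

persona_facts = [
    "i am 64 years old but feel quite young.",
    "my secret hobby is making self-help youtube videos.",
]
history = ["yo whats good fam?"]

persona = "<|p2|>" + "".join(f + tokenizer.eos_token for f in persona_facts) + "<|sep|>"
context = "".join(turn + tokenizer.eos_token for turn in history)
input_ids = tokenizer.encode(persona + context + "<|start|>", return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(input_ids,
                                max_length=input_ids.shape[-1] + 40,
                                pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
print(response)
```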

Controlled Response Generation

Another feature of PersonaGPT is generating responses that direct the conversation toward certain topics of interest. The following is a self-play episode between two PersonaGPT agents, in which one interviews the other.

Bot 2's Persona Facts
1 i am a quiet engineer.
2 i'm single and am looking for love.
3 sadly, i don't have any relatable hobbies.
4 luckily, however, i am tall and athletic.
5 on friday nights, i watch re-runs of the simpsons alone.
Bot 1 (talk about movies): hello do you like movies?
Bot 2: i love the simpsons. how about you?
Bot 1 (talk about music): i like the walking dead. do you play an instrument?
Bot 2: yes i play the violin.
Bot 1 (ask about marital status): are you married or single?
Bot 2: i am single but i am looking for love.
Bot 1 (talk about work): my job is working in a factory. what's your occupation?
Bot 2: engineer. i'm very quiet so no one hears me.
Bot 1 (talk about hobbies): do you have any hobbies?
Bot 2: i watch reruns of the simpsons.
Bot 1 (ask about favorite food): what's your favorite food?
Bot 2: i love pizza. how about yourself?
Bot 1 (ask about pets): i also love pizza. do you like animals?
Bot 2: i have two dogs. what is your occupation?
Bot 1 (talk about work): i'm a factory worker. what's your dream job?
Bot 2: i'd love to be a writer one day.
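
A rough sketch of how an action code could be injected into the conditioning sequence, continuing the sketches above (the exact token layout used by the repo is an assumption):

```python
# Sketch: steer the next response with a turn-level action code by
# prepending it to the conditioning sequence. Token layout is assumed,
# continuing the earlier sketches.
action = "talk about movies"
prefix = "<|act|>" + action + tokenizer.eos_token + "<|sep|>" + "<|start|>"
input_ids = tokenizer.encode(prefix, return_tensors="pt")
output_ids = model.generate(input_ids,
                            max_length=input_ids.shape[-1] + 40,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```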

Interactive Environment

To interact with PersonaGPT, run interact.py --mode 0 to enter responses manually and see how the bot responds. You will first be prompted to enter a set of persona facts for your conversational partner and then engage in a short conversation.

Use -turns or --num_turns to adjust the length of the conversation (default = 8). To adjust PersonaGPT's decoding parameters, use --help to bring up the available settings. For example, --top_k (default = 10) controls the number of candidate tokens at each decoding step, and --top_p controls the nucleus-sampling threshold. See HuggingFace's how-to-generate guide for the nuances of the GPT decoding process.
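
For reference, the two flags correspond roughly to HuggingFace's sampling arguments as in the sketch below (interact.py's actual generate() call may differ):

```python
# How --top_k and --top_p roughly map onto HuggingFace sampling arguments.
# A sketch; the repo's exact generate() call is an assumption.
output_ids = model.generate(
    input_ids,
    do_sample=True,
    top_k=10,     # --top_k: sample only from the 10 most likely tokens
    top_p=0.92,   # --top_p: nucleus sampling probability-mass cutoff (assumed value)
    max_length=1000,
    pad_token_id=tokenizer.eos_token_id,
)
```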

To engage in controlled response generation, run interact.py --mode 1. At each turn, you will be prompted to select from a set of topics toward which to steer the conversation. Depending on the dialog history up to the current turn, PersonaGPT's response may not always steer the conversation toward the chosen topic.


Training

Below are instructions to reconstruct PersonaGPT from "scratch" (i.e., from pretrained DialoGPT or GPT-2, either of which is a feasible starting pre-trained model).

Requirements:

  • Python 3.6+
  • Pytorch (GPU preferred)
  • transformers
  • dotenv
  • tqdm
  • (optional) apex for fp16 training

It is highly recommended that the pytorch and transformers packages be installed under a virtual environment.

After cloning this repository, follow the directions below to set up the training environment.

Instructions:

  1. Go to the .env file and set save_path to the local directory where model, scheduler, and optimizer checkpoints should be stored. Point data_path to the ~/data folder of the cloned repository. The .env file also contains the hyperparameter configurations:
epochs = 3
learn_rate = 5e-5
gradient_accumulation_steps = 64
batch_size = 1
weight_decay = 0.0
logging_steps = 10
save_steps = 250

Replace epochs, batch_size, gradient_accumulation_steps, and learn_rate with the desired hyperparameters. Keep batch_size = 1 and change gradient_accumulation_steps to adjust the effective training batch size; the current version of this repo does not yet support parallel batching (TODO).
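
Note that the effective training batch size is batch_size × gradient_accumulation_steps (1 × 64 = 64 with the defaults above). A minimal sketch of reading these values with python-dotenv follows; the exact configuration handling in train.py is an assumption.

```python
# Sketch: reading hyperparameters from the .env file with python-dotenv.
# train.py's actual configuration handling may differ.
import os
from dotenv import load_dotenv

load_dotenv()
epochs = int(os.getenv("epochs", 3))
learn_rate = float(os.getenv("learn_rate", 5e-5))
grad_accum = int(os.getenv("gradient_accumulation_steps", 64))
batch_size = int(os.getenv("batch_size", 1))

effective_batch = batch_size * grad_accum  # 1 * 64 = 64 samples per update
```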

  2. Run preprocess_dataset.py to preprocess ~/data/train_both_original_no_cands.txt and ~/data/valid_both_original_no_cands.txt. The original .txt files were obtained from the ConvAI2 Challenge and may no longer be available now that the ConvAI3 challenge has taken place. The ConvAI2 challenge used the Persona-Chat dataset, which is what is provided under the ~/data folder.

  3. Run train.py to train the PersonaGPT model. Results (e.g., pretrain_loss, persona_loss, ctrl_loss) will be saved under [save_path]/samples/. Model checkpoints are saved under [save_path]/checkpoint/model.

Currently there are two training loops, pretrain() and train_loop(). pretrain() first trains the model on the Persona-Chat dataset and saves its performance under pretrain_stats. train_loop() then fine-tunes the model on active-learning data, which consists of examples of using action codes (e.g., "talk about hobbies", "ask about pets") to do controlled response generation. The pretrained model can be used as a stand-alone dialog model for personalized dialog generation without fine-tuning on the actively learned actions.

  • pretrain_loss: tracks the training loss on the Persona-Chat dataset during pretrain().
  • persona_loss: tracks the training loss on Persona-Chat during train_loop().
  • ctrl_loss: tracks the training loss on the actively learned action codes during train_loop().
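
For intuition, one accumulation cycle of such a training loop looks roughly like the sketch below; the internals of pretrain() and train_loop() are assumptions here.

```python
# Sketch of causal-LM training with gradient accumulation, matching
# batch_size = 1 and gradient_accumulation_steps = 64 above. Illustrative
# only; the repo's pretrain()/train_loop() code may differ.
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.0)
model.train()
optimizer.zero_grad()

for step, input_ids in enumerate(dataloader):  # hypothetical dataloader of token-id tensors
    loss = model(input_ids, labels=input_ids).loss
    (loss / 64).backward()                     # scale so accumulated gradients average
    if (step + 1) % 64 == 0:                   # one optimizer update per 64 examples
        optimizer.step()
        optimizer.zero_grad()
```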

Active Learning

Currently, there are 11 possible turn-level goals that can be used for controlled response generation.

Turn-level Goals
1. ask about family.
2. ask about pets.
3. talk about work.
4. talk about traveling.
5. ask about age and gender.
6. talk about hobbies.
7. talk about music.
8. talk about food.
9. talk about movies.
10. talk about politics.
11. ask about marital status.

These turn-level goals are handcrafted based on the Persona-Chat dataset to cover most of the conversational directions at the turn level.

To actively learn new turn-level goals, use the convogym repo.


Evaluation

After training, an evaluation loop will run and print a set of scores, saved under eval_stats. Below is a comparison of PersonaGPT against other baselines on the Persona-Chat dataset using automatic evaluation metrics. Your results should look something like:

Model                  Perplexity   F1 Score
Seq2seq Baseline [3]   29.8         16.2
Wolf et al. [5]        16.3         19.5
GPT-2 baseline         99.5         5.8
DialoGPT baseline      56.6         12.6
DialoGPT finetuned     11.4         22.7
PersonaGPT             10.2         43.4
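
Perplexity here is the exponential of the mean per-token cross-entropy on the validation set; below is a minimal sketch of computing it, assuming a hypothetical valid_loader of token-id batches (the repo's exact evaluation loop is an assumption).

```python
# Sketch: perplexity as exp of the mean per-token negative log-likelihood.
import math
import torch

model.eval()
total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for input_ids in valid_loader:             # hypothetical validation dataloader
        loss = model(input_ids, labels=input_ids).loss
        total_nll += loss.item() * input_ids.numel()
        total_tokens += input_ids.numel()

print(f"perplexity = {math.exp(total_nll / total_tokens):.1f}")
```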

Cite Us

Our full paper is now up on arXiv.

@misc{tang2021persona,
      title={Persona Authentication through Generative Dialogue}, 
      author={Fengyi Tang and Lifan Zeng and Fei Wang and Jiayu Zhou},
      year={2021},
      eprint={2110.12949},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

References

  1. Radford, Alec, et al. "Language models are unsupervised multitask learners." OpenAI Blog 1.8 (2019): 9.

  2. Zhang, Yizhe, et al. "DialoGPT: Large-scale generative pre-training for conversational response generation." arXiv preprint arXiv:1911.00536 (2019).

  3. Zhang, Saizheng, et al. "Personalizing dialogue agents: I have a dog, do you have pets too?." arXiv preprint arXiv:1801.07243 (2018).

  4. Dinan, Emily, et al. "The Second Conversational Intelligence Challenge (ConvAI2)." arXiv preprint arXiv:1902.00098 (2019).

  5. Wolf, Thomas, et al. "TransferTransfo: A transfer learning approach for neural network based conversational agents." arXiv preprint arXiv:1901.08149 (2019).
