💛 Code and Dataset for our EMNLP 2021 paper: "Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes"

Overview

Perspective-taking and Pragmatics for Generating
Empathetic Responses Focused on Emotion Causes


Official PyTorch implementation and EmoCause evaluation set of our EMNLP 2021 paper 💛
Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes. EMNLP, 2021 [Paper coming soon!]

  • TL;DR: To express deeper empathy in dialogues, we argue that responses should focus on the cause of emotions. Inspired by human perspective-taking, we propose a generative emotion estimator (GEE) that recognizes emotion cause words solely from sentence-level emotion labels, without word-level annotations (i.e., weak supervision). To evaluate our approach, we annotate emotion cause words and release the EmoCause evaluation set. We also propose a pragmatics-based method for generating responses focused on targeted words from the context.

Reference

If you use the materials in this repository as part of any published research, we ask you to cite the following paper:

@inproceedings{Kim:2021:empathy,
  title={Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes},
  author={Kim, Hyunwoo and Kim, Byeongchang and Kim, Gunhee},
  booktitle={EMNLP},
  year=2021
}

Implementation

System Requirements

  • Python 3.7.9
  • PyTorch 1.6.0
  • CUDA 10.2 supported GPU with at least 24GB memory
  • See environment.yml for details

Environment setup

Our code is built on the ParlAI framework. We recommend creating a conda environment as follows:

conda env create -f environment.yml

and activate it with

conda activate focused-empathy
python -m spacy download en

EmoCause evaluation set for weakly-supervised emotion cause recognition

EmoCause is a dataset of annotated emotion cause words in emotional situations from the EmpatheticDialogues valid and test sets. The goal is to recognize emotion cause words in sentences by training only on sentence-level emotion labels, without word-level labels (i.e., weakly-supervised emotion cause recognition). EmoCause is motivated by the observation that humans recognize the causes of emotions without supervised learning on word-level cause labels; accordingly, we do not provide a training set.
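For intuition only, here is a minimal sketch of what weakly-supervised cause-word recognition means in this setting: each word is scored by the leave-one-out drop in the labeled emotion's probability under any sentence-level emotion model. The helper emotion_prob is hypothetical, and this is not the repository's GEE implementation.

def rank_cause_words(tokens, emotion, emotion_prob):
    """Conceptual sketch (not the repo's GEE): rank each word by how much
    removing it lowers the probability of the labeled emotion.
    emotion_prob(tokens, emotion) is a hypothetical callable returning
    p(emotion | situation) from any sentence-level emotion model."""
    base = emotion_prob(tokens, emotion)
    scores = []
    for i, word in enumerate(tokens):
        ablated = tokens[:i] + tokens[i + 1:]  # leave-one-out ablation
        scores.append((word, base - emotion_prob(ablated, emotion)))
    # Words whose removal hurts the emotion likelihood most are the most
    # likely cause words; the top-k can be compared against EmoCause labels.
    return sorted(scores, key=lambda kv: kv[1], reverse=True)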


You can download the EmoCause eval set [here].
Note: the dataset is downloaded automatically when you run the experiment command below.

Data statistics and structure

            #Emotion    Label type    #Label/Utterance    #Utterance
EmoCause    32          Word          2.3                 4.6K
{
  "original_situation": the original situations in the EmpatheticDialogues,
  "tokenized_situation": tokenized situation utterances using spacy,
  "emotion": emotion labels,
  "conv_id": id for each corresponding conversation in EmpatheticDialogues,
  "annotation": list of tuples: (emotion cause word, index),
  "labels": list of strings containing the emotion cause words
}
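
A minimal loading sketch, assuming the eval set is a JSON list of entries with the fields above; the file path below is hypothetical and depends on where the download places the data.

import json

# Hypothetical path; point this to wherever the EmoCause file is downloaded.
with open("data/emocause/emocause_valid.json") as f:
    examples = json.load(f)

def topk_recall(predicted_words, gold_labels, k=3):
    # Fraction of gold emotion-cause words covered by the top-k predictions.
    gold = set(gold_labels)
    return len(set(predicted_words[:k]) & gold) / len(gold) if gold else 0.0

ex = examples[0]
print(ex["emotion"], ex["labels"])
print(topk_recall(ex["tokenized_situation"][:3], ex["labels"]))  # naive baseline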

Running Experiments

All corresponding models will be downloaded automatically when running the following commands.
We also provide manual download links: [GEE] [Finetuned Blender]

Weakly-supervised emotion cause word recognition with GEE on EmoCause

You can evaluate our proposed Generative Emotion Estimator (GEE) on the EmoCause eval set.

python eval_emocause.py --model agents.gee_agent:GeeCauseInferenceAgent --fp16 False

Focused empathetic response generation with finetuned Blender on EmpatheticDialogues

You can evaluate our approach for generating focused empathetic responses on top of a finetuned Blender (Not familiar with Blender? See here!).

python eval_empatheticdialogues.py --model agents.empathetic_gee_blender:EmpatheticBlenderAgent --model_file data/models/finetuned_blender90m/model --fp16 False --empathy-score False

Adding the --alpha 0 flag will run the Blender without pragmatics. You can also try the random distractor (Plain S1) by adding --distractor-type random.
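
For intuition, the pragmatic step follows a rational-speech-acts style reranking: candidate responses from the base speaker are rescored by how well a listener could recover the targeted emotion-cause words (against distractors), with alpha weighting that term, so alpha = 0 falls back to plain Blender. The sketch below uses hypothetical scoring functions and is not the code path of eval_empatheticdialogues.py.

def rerank_pragmatic(candidates, base_logprob, listener_logprob, targets, alpha=1.0):
    """Generic RSA-style speaker reranking (a sketch, not the repo's code).
    base_logprob(r): hypothetical S0 log-probability of response r given the context.
    listener_logprob(targets, r): hypothetical log-probability that a listener
    recovers the targeted cause words from r rather than from distractors."""
    scored = [(r, base_logprob(r) + alpha * listener_logprob(targets, r))
              for r in candidates]
    # With alpha = 0 this reduces to the base speaker's own ranking.
    return max(scored, key=lambda kv: kv[1])[0]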

💡 To also measure the Interpretation and Exploration scores, set --empathy-score to True. This will automatically download the RoBERTa models finetuned on EmpatheticDialogues. For more details on the empathy scores, visit the original repo.

Acknowledgements

We thank the anonymous reviewers for their helpful comments on this work.

This research was supported by the Samsung Research Funding Center of Samsung Electronics under project number SRFCIT210101. The compute resources and human study were supported by the Brain Research Program of the National Research Foundation of Korea (NRF) (2017M3C7A1047860).

Have any question?

Please contact Hyunwoo Kim at hyunw.kim at vl dot snu dot ac dot kr.

License

This repository is MIT licensed. See the LICENSE file for details.
