💛 Code and Dataset for our EMNLP 2021 paper: "Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes"

Overview

Perspective-taking and Pragmatics for Generating
Empathetic Responses Focused on Emotion Causes


Official PyTorch implementation and EmoCause evaluation set of our EMNLP 2021 paper 💛
Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes. EMNLP, 2021 [Paper coming soon!]

  • TL;DR: In order to express deeper empathy in dialogues, we argue that responses should focus on the cause of emotions. Inspired by human perspective-taking, we propose a generative emotion estimator (GEE) that recognizes emotion cause words using only sentence-level emotion labels, without word-level annotations (i.e., weak supervision). To evaluate our approach, we annotate emotion cause words and release the EmoCause evaluation set. We also propose a pragmatics-based method for generating responses that focus on targeted words from the context.

Reference

If you use the materials in this repository as part of any published research, we ask you to cite the following paper:

@inproceedings{Kim:2021:empathy,
  title={Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes},
  author={Kim, Hyunwoo and Kim, Byeongchang and Kim, Gunhee},
  booktitle={EMNLP},
  year=2021
}

Implementation

System Requirements

  • Python 3.7.9
  • PyTorch 1.6.0
  • GPU with CUDA 10.2 support and at least 24GB of memory
  • See environment.yml for details

Environment setup

Our code is built on the ParlAI framework. We recommend creating a conda environment as follows:

conda env create -f environment.yml

and activate it with

conda activate focused-empathy
python -m spacy download en

EmoCause evaluation set for weakly-supervised emotion cause recognition

EmoCause is a dataset of annotated emotion cause words in emotional situations from the EmpatheticDialogues valid and test sets. The goal is to recognize emotion cause words in sentences by training only on sentence-level emotion labels, without word-level labels (i.e., weakly-supervised emotion cause recognition). This setting reflects the fact that humans recognize emotion causes without supervised learning on word-level cause labels; accordingly, we do not provide a training set.


You can download the EmoCause eval set [here].
Note that the dataset will be downloaded automatically when you run the experiment command below.

Data statistics and structure

           #Emotions   Label type   #Labels/utterance (avg.)   #Utterances
EmoCause   32          Word         2.3                        4.6K
{
  "original_situation": the original situation utterance from EmpatheticDialogues,
  "tokenized_situation": the situation utterance tokenized with spaCy,
  "emotion": the emotion label,
  "conv_id": ID of the corresponding conversation in EmpatheticDialogues,
  "annotation": list of (emotion cause word, word index) tuples,
  "labels": list of strings containing the emotion cause words
}
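
For orientation, here is a minimal sketch of iterating over the eval set once it is downloaded. The file path and the assumption that the file stores a JSON list are illustrative (check where the download step actually places the data); only the field names follow the structure above.

import json

# Hypothetical path; adjust to wherever the EmoCause file is downloaded.
with open("data/emocause/valid.json", encoding="utf-8") as f:
    examples = json.load(f)  # assumed to be a list of entries as documented above

for ex in examples[:3]:
    situation = ex["tokenized_situation"]
    if isinstance(situation, list):      # tokens may be stored as a list
        situation = " ".join(situation)
    print("emotion:    ", ex["emotion"])
    print("situation:  ", situation)
    print("cause words:", ex["labels"])
    # "annotation" pairs each emotion cause word with its token index
    for word, idx in ex["annotation"]:
        print(f"  cause '{word}' at token position {idx}")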

Running Experiments

All corresponding models will be downloaded automatically when running the following commands.
We also provide manual download links: [GEE] [Finetuned Blender]

Weakly-supervised emotion cause word recognition with GEE on EmoCause

You can evaluate our proposed Generative Emotion Estimator (GEE) on the EmoCause eval set.

python eval_emocause.py --model agents.gee_agent:GeeCauseInferenceAgent --fp16 False
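
The eval script reports its own metrics; for intuition, the sketch below shows one common way to score weakly-supervised cause word recognition, top-k recall against the gold "labels" field. The helper and its inputs are hypothetical and not code from eval_emocause.py.

# Illustrative top-k recall for cause word recognition.
# scored_words: list of (word, score) pairs from a model such as GEE for one situation
# gold_causes:  the "labels" field of the corresponding EmoCause entry
def top_k_recall(scored_words, gold_causes, k=5):
    ranked = sorted(scored_words, key=lambda pair: pair[1], reverse=True)
    top_k = {word for word, _ in ranked[:k]}
    gold = set(gold_causes)
    return len(top_k & gold) / len(gold) if gold else 0.0

# Example: both gold words appear in the top 2 predictions -> recall 1.0
# top_k_recall([("exam", 2.1), ("failed", 1.7), ("today", 0.2)], ["failed", "exam"], k=2)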

Focused empathetic response generation with finetuned Blender on EmpatheticDialogues

You can evaluate our approach for generating focused empathetic responses on top of a finetuned Blender (Not familiar with Blender? See here!).

python eval_empatheticdialogues.py --model agents.empathetic_gee_blender:EmpatheticBlenderAgent --model_file data/models/finetuned_blender90m/model --fp16 False --empathy-score False

Adding the --alpha 0 flag runs Blender without pragmatics. You can also try the random distractor (Plain S1) by adding --distractor-type random.
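
For intuition, pragmatics here means preferring responses that a listener could attribute to the target (emotion-cause) context rather than to a distractor. The sketch below is a simplified, sequence-level illustration of that reweighting with hypothetical score inputs; the actual agent applies pragmatics at decoding time, so this is not the implementation in agents.empathetic_gee_blender.

import math

def _log_sum_exp(a, b):
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def pragmatic_rerank(candidates, log_p_target, log_p_distractor, alpha=1.0):
    """Rerank candidate responses, preferring ones a listener would attribute
    to the target context rather than the distractor.

    candidates:       list of response strings
    log_p_target:     dict response -> log S0(response | target context)
    log_p_distractor: dict response -> log S0(response | distractor context)
    alpha:            pragmatics weight; alpha=0 recovers the base speaker S0
    """
    def score(response):
        lt = log_p_target[response]
        ld = log_p_distractor[response]
        # Listener posterior over {target, distractor}, assuming a uniform prior.
        log_listener = lt - _log_sum_exp(lt, ld)
        return lt + alpha * log_listener

    return sorted(candidates, key=score, reverse=True)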

💡 To also measure the Interpretation and Exploration scores, set --empathy-score to True. This will automatically download the RoBERTa models finetuned on EmpatheticDialogues. For more details on the empathy scores, visit the original repo.

Acknowledgements

We thank the anonymous reviewers for their helpful comments on this work.

This research was supported by the Samsung Research Funding Center of Samsung Electronics under project number SRFCIT210101. The compute resources and human study were supported by the Brain Research Program of the National Research Foundation of Korea (NRF) (2017M3C7A1047860).

Have any questions?

Please contact Hyunwoo Kim at hyunw.kim at vl dot snu dot ac dot kr.

License

This repository is MIT licensed. See the LICENSE file for details.

