Th2En & Th2Zh: The large-scale datasets for Thai text cross-lingual summarization

Overview


📥 Download Datasets
📥 Download Trained Models

INTRODUCTION

TH2ZH (Thai-to-Simplified Chinese) and TH2EN (Thai-to-English) are cross-lingual summarization (CLS) datasets. Their source articles come from TR-TPBS, a monolingual Thai text summarization dataset. To create CLS datasets from TR-TPBS, we used a neural machine translation service to translate articles into the target languages. For certain reasons, we were strongly advised not to name the service we used 🥺. We will refer to it as the ‘main translation service’.

Cross-lingual summarization (cross-sum) is the task of summarizing a document written in one language into a short summary in another language.

(Figure: cross-lingual summarization overview)

Traditional cross-sum approaches are based on two techniques, namely early translation and late translation. Early translation is simply the translate-then-summarize method; late translation, conversely, is the summarize-then-translate method.
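The two pipelines differ only in the order in which the steps are composed. A minimal sketch, using stand-in `translate()`/`summarize()` functions (hypothetical placeholders, not a real MT or summarization API):

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT system: just tags the text with the language pair."""
    return f"[{src}->{tgt}] {text}"

def summarize(text: str) -> str:
    """Placeholder monolingual summarizer: keeps the first sentence."""
    return text.split(". ")[0]

def early_translation(doc_th: str) -> str:
    # translate-then-summarize: MT runs first, then summarization
    return summarize(translate(doc_th, "th", "en"))

def late_translation(doc_th: str) -> str:
    # summarize-then-translate: summarization runs first, then MT
    return translate(summarize(doc_th), "th", "en")
```

Either way, an error made by the first stage is passed on to the second, which is exactly the error-propagation problem the end-to-end models try to avoid.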

However, classical cross-sum methods tend to propagate errors from the monolingual summarization or translation step into the final cross-language summary. Several end-to-end approaches have been proposed to tackle the problems of the traditional ones. A couple of end-to-end models are available to download as well.

DATASET CONSTRUCTION

💡 Important Note: In contrast to Zhu et al., in our experiment we found that filtering out articles using the RTT technique significantly worsens the overall performance of the end-to-end models. Therefore, the full datasets are highly recommended.

We used TR-TPBS as the source documents for creating the cross-lingual summarization datasets. In the same way as Zhu et al., we constructed Th2En and Th2Zh by translating the summary references into the target languages using the translation service and filtering out poorly translated summaries with the round-trip translation (RTT) technique. The overview of the dataset construction is presented in the figure below. Please refer to the corresponding paper for more details on RTT.

(Figure: CLS dataset construction overview)

In our experiment, we set 𝑇1 and 𝑇2 to 0.45 and 0.2 respectively; the RTT filter removed 27.98% of the documents from Th2En and 56.79% from Th2Zh.

python3 src/tools/cls_dataset_construction.py \
--dataset th2en \
--input_csv path/to/full_dataset.csv \
--output_csv path/to/save/filtered_csv \
--r1 0.45 \
--r2 0.2
  • --dataset can be {th2en, th2zh}.
  • --r1 and --r2 set the ROUGE score thresholds (r1 and r2 represent ROUGE-1 and ROUGE-2 respectively) for filtering out presumably poorly translated articles.
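The filtering step can be sketched as below. This is an illustrative reimplementation (a simple n-gram ROUGE F1, not the repository's cls_dataset_construction.py), and it assumes the comparison is between a tokenized original summary and its tokenized back-translation:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(ref_tokens, hyp_tokens, n):
    """ROUGE-N F1 via n-gram multiset overlap."""
    ref, hyp = ngrams(ref_tokens, n), ngrams(hyp_tokens, n)
    overlap = sum((ref & hyp).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(hyp.values())
    r = overlap / sum(ref.values())
    return 2 * p * r / (p + r)

def keep_pair(orig_tokens, backtrans_tokens, r1=0.45, r2=0.2):
    """Keep a document only if both ROUGE thresholds are met."""
    return (rouge_n_f1(orig_tokens, backtrans_tokens, 1) >= r1 and
            rouge_n_f1(orig_tokens, backtrans_tokens, 2) >= r2)
```

With the defaults above matching 𝑇1 = 0.45 and 𝑇2 = 0.2, a back-translation that diverges heavily from the original fails both checks and the pair is dropped.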

Dataset Statistics

Click the file name to download.

File Number of Articles Size
th2en_full.csv 310,926 2.96 GB
th2zh_full.csv 310,926 2.81 GB
testset.csv 3,000 44 MB
validation.csv 3,000 43 MB

Data Fields

Please refer to th2enzh_data_exploration.ipynb for more details.

Column Description
th_body Original Thai body text
th_sum Original Thai summary
th_title Original Thai article headline
{en/zh}_body Translated body text
{en/zh}_sum Translated summary
{en/zh}_title Translated article's headline
{en/zh}2th Back translation of {en/zh}_body
{en/zh}_gg_sum Translated summary (by Google Translation)
url URL to original article’s webpage
  • {th/en/zh}_title are only available in test set.
  • {en/zh}_gg_sum are also only available in the test set. We (at the time this experiment took place) assumed that Google translation was better than the main translation service we were using. We intended to use these Google-translated summaries as a kind of alternative summary reference, but in the end they were never used. We decided to make them available in the test set anyway, in case others find them useful.
  • {en/zh}_body was not presented to the end-to-end models during training. It was used only in the early translation methods.
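As a sketch of how these fields might be consumed with the standard library (the two-row CSV below is fabricated; the real files are the CSVs listed above, and end-to-end training uses only the th_body → {en/zh}_sum pairs):

```python
import csv
import io

# Fabricated stand-in for th2en_full.csv with a subset of the columns above.
fake_csv = io.StringIO(
    "th_body,th_sum,en_sum,url\n"
    "thai article text,thai summary,english summary,https://example.com/1\n"
)

rows = list(csv.DictReader(fake_csv))
# Build the source/target pairs an end-to-end Th2En model would train on.
pairs = [(row["th_body"], row["en_sum"]) for row in rows]
```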

AVAILABLE TRAINED MODELS

Model Corresponding Paper Thai -> English Thai -> Simplified Chinese
Full Filtered Full Filtered
TNCLS Zhu et al., 2019 - Available - -
CLS+MS Zhu et al., 2019 Available - - -
CLS+MT Zhu et al., 2019 Available - Available -
XLS – RL-ROUGE Dou et al., 2020 Available - Available -

To evaluate these trained models, please refer to xls_model_evaluation.ipynb and ncls_model_evaluation.ipynb.

If you wish to evaluate the models with our test sets, you can use the script below to create test files for the XLS and NCLS models.

python3 src/tools/create_cls_test_manifest.py \
--test_csv_path path/to/testset.csv \
--output_dir path/to/save/testset_files \
--use_google_sum {true/false} \
--max_tokens 500 \
--create_ms_ref {true/false}
  • output_dir is the path to the directory where the test set files will be saved.
  • use_google_sum can be {true/false}. If true, the summary references are taken from the {en/zh}_gg_sum columns. Default is false.
  • max_tokens is the maximum number of words in input articles. Default is 500 words. Articles that are too short or too long can significantly worsen the models' performance.
  • create_ms_ref sets whether to create a Thai summary reference file for evaluating the MS task in the NCLS:CLS+MS model.

This script produces three files, namely test.CLS.source.thai.txt, test.CLS.target.en.txt, and test.CLS.target.zh.txt. test.CLS.source.thai.txt is the test input for the CLS task. test.CLS.target.{en/zh}.txt are the cross-lingual summary references for English and Chinese, used to compute ROUGE and BertScore. Each line corresponds to the article on the same line of test.CLS.source.thai.txt.
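Because each reference line must correspond to a source line, a quick sanity check before evaluation is to verify that the line counts match. A small sketch (file names as produced above; the check_alignment helper and the throwaway files are our own):

```python
import tempfile
from pathlib import Path

def check_alignment(src_path, tgt_path):
    """Verify source and reference files have the same number of lines."""
    src = Path(src_path).read_text(encoding="utf-8").splitlines()
    tgt = Path(tgt_path).read_text(encoding="utf-8").splitlines()
    assert len(src) == len(tgt), f"{len(src)} source lines vs {len(tgt)} references"
    return len(src)

# Demo with throwaway files standing in for the generated manifests.
with tempfile.TemporaryDirectory() as d:
    src = Path(d, "test.CLS.source.thai.txt")
    tgt = Path(d, "test.CLS.target.en.txt")
    src.write_text("doc1\ndoc2\n", encoding="utf-8")
    tgt.write_text("sum1\nsum2\n", encoding="utf-8")
    n_examples = check_alignment(src, tgt)
```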

🥳 We also evaluated the MT task in the XLS and NCLS:CLS+MT models. Please refer to xls_model_evaluation.ipynb and ncls_model_evaluation.ipynb for the BLEU score results. For the test sets we used to evaluate the MT task, please refer to data/README.md.

EXPERIMENT RESULTS

🔆 It has to be noted that all end-to-end models reported in this section were trained on the filtered datasets, NOT the full datasets. For all end-to-end models, only `th_body` and `{en/zh}_sum` were present during training. We trained the end-to-end models for 1,000,000 steps and selected the checkpoints that yielded the highest overall ROUGE scores to report the experiment.

In this experiment, we used two automatic evaluation metrics, namely ROUGE and BertScore, to assess the performance of the CLS models. We evaluated ROUGE on Chinese text at the word level, NOT the character level.
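The granularity matters because the two levels generally give different scores. A toy illustration (hand-segmented tokens for clarity, not Stanza output; rouge1_f1 is a minimal reimplementation, not the scorer used in the paper):

```python
from collections import Counter

def rouge1_f1(ref, hyp):
    """ROUGE-1 F1 over whatever units the inputs are tokenized into."""
    ref_c, hyp_c = Counter(ref), Counter(hyp)
    overlap = sum((ref_c & hyp_c).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r)

ref_words = ["机器", "翻译", "很", "有用"]   # "machine translation is useful"
hyp_words = ["机器", "学习", "很", "有用"]   # "machine learning is useful"

# Word level: units are segmented words.
word_level = rouge1_f1(ref_words, hyp_words)
# Character level: units are individual characters of the same strings.
char_level = rouge1_f1(list("".join(ref_words)), list("".join(hyp_words)))
```

Here the character level partially credits 学/习 against 翻/译 via shared characters elsewhere, so the two scores diverge; reporting which level was used is therefore essential for comparability.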

We report BertScore only for the abstractive summarization models. To compute BertScore we used weights from the ‘roberta-large’ and ‘bert-base-chinese’ pretrained models for Th2En and Th2Zh respectively.

Model Thai to English Thai to Chinese
ROUGE BertScore ROUGE BertScore
R1 R2 RL F1 R1 R2 RL F1
Traditional Approaches
Translated Headline 23.44 6.99 21.49 - 21.55 4.66 18.58 -
ETrans → LEAD2 51.96 42.15 50.01 - 44.18 18.83 43.84 -
ETrans → BertSumExt 51.85 38.09 49.50 - 34.58 14.98 34.84 -
ETrans → BertSumExtAbs 52.63 32.19 48.14 88.18 35.63 16.02 35.36 70.42
BertSumExt → LTrans 42.33 27.33 34.85 - 28.11 18.85 27.46 -
End-to-End Training Approaches
TNCLS 26.48 6.65 21.66 85.03 27.09 6.69 21.99 63.72
CLS+MS 32.28 15.21 34.68 87.22 34.34 12.23 28.80 67.39
CLS+MT 42.85 19.47 39.48 88.06 42.48 19.10 37.73 71.01
XLS – RL-ROUGE 42.82 19.62 39.53 88.03 43.20 19.19 38.52 72.19

LICENSE

The Thai cross-lingual summarization datasets, including TH2EN, TH2ZH, and the test and validation sets, are licensed under the MIT License.

ACKNOWLEDGEMENT

  • These cross-lingual datasets and experiments are part of Nakhun Chumpolsathien’s master’s thesis at the School of Computer Science, Beijing Institute of Technology. Great appreciation therefore also goes to his supervisor, Assoc. Prof. Gao Yang.
  • Shout out to Tanachat Arayachutinan for the initial data processing and for introducing me to 麻辣烫 and 黄焖鸡.
  • We would like to thank Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications for providing computing resources to conduct the experiment.
  • In this experiment, we used PyThaiNLP v. 2.2.4 to tokenize (on both word & sentence levels) Thai texts. For Chinese and English segmentation, we used Stanza.