A PyTorch implementation of the paper "Learning Shared Semantic Space for Speech-to-Text Translation", ACL (Findings) 2021

Overview

Chimera: Learning Shared Semantic Space for Speech-to-Text Translation


This is a PyTorch implementation for the "Chimera" paper Learning Shared Semantic Space for Speech-to-Text Translation https://arxiv.org/abs/2105.03095 (accepted by ACL Findings 2021), which bridges the modality gap by unifying the tasks of MT (textual Machine Translation) and ST (Speech-to-Text Translation). By utilizing an external MT corpus, it achieves new SOTA performance on all 8 language pairs of the MuST-C benchmark.


This repository is currently a nightly version and may be bug-prone due to ongoing code refactoring. It has also not yet been fully tested on configurations other than the authors' working environment. Nevertheless, we encourage you to first have a look at the results and the model code to get a general impression of what this project is about.

The code base was forked from the FairSeq repository https://github.com/pytorch/fairseq.git (without an actual forking operation) in September 2020. It therefore lags behind later FairSeq updates, and neither the code nor the checkpoints are compatible with the current FairSeq version. You will need to modify the model code for checkpoint configurations if you want to follow the newer FairSeq code.

CONTRIBUTION: You are also more than welcome to test our code on your machines and report feedback on results, bugs, and performance!



Results

Our model (Chimera) achieves new state-of-the-art results on all 8 language pairs of MuST-C:

Direction EN-DE EN-FR EN-RU EN-ES EN-IT EN-RO EN-PT EN-NL
BLEU 26.3 35.6 17.4 30.6 25.0 24.0 30.2 29.2

Chimera introduces M distinct "memories" to store specific types of semantic information from both audio and text inputs. Shown below is a visualization of the memories learned by Chimera-16, a variant with M = 16. Each learned cluster represents an individual type of information, while each marker is a sentence sample: "+" and "." denote text and audio samples, respectively.

We can see more clearly below (left) that the memories learn a well-clustered semantic space, forming a "semantic" alignment (rather than a spatial one) between audio and text inputs while ignoring modality differences.

On the right, we zoom in on one specific cluster, and it can be easily observed that the vectors are well structured as well: inputs that share (probably one of) similar semantic features lie close to each other in space.

We can even focus on one instance of translation and see how the memories work. The figure below visualizes the alignment between audio attention and text attention, which gathers tightly around the diagonal line. Different colors represent different memories, which attend to different semantic segments of the sentence / audio as shown in the figure.



Trained Checkpoints

Our trained checkpoints are available at:

Translation Direction filename External url
English-to-Deutsch Chimera_EN2DE.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2DE.pt
English-to-French Chimera_EN2FR.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2FR.pt
English-to-Russian Chimera_EN2RU.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RU.pt
English-to-Espanol Chimera_EN2ES.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2ES.pt
English-to-Italiano Chimera_EN2IT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2IT.pt
English-to-Romanian Chimera_EN2RO.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2RO.pt
English-to-Portuguese Chimera_EN2PT.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2PT.pt
English-to-Dutch Chimera_EN2NL.pt http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2NL.pt
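For example, a minimal way to fetch the EN-DE checkpoint to a local path (any HTTP client works equally well; the local directory name here is just an illustration):

mkdir -p checkpoints
wget -O checkpoints/Chimera_EN2DE.pt \
    http://sf3-ttcdn-tos.pstatp.com/obj/nlp-opensource/acl2021/chimera/Chimera_EN2DE.pt
export CHECKPOINT=checkpoints/Chimera_EN2DE.pt

The $CHECKPOINT variable set here can be reused directly by the interactive translation and evaluation commands below.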



Interactive Translation

You can download any of the checkpoints above and translate local audio (only .wav files are supported) into another language! To do this, you only need to run the model in interactive mode. For example, to translate from English to Deutsch (DE) with an already trained checkpoint at $CHECKPOINT:

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target de --checkpoint $CHECKPOINT

The program will prompt you for an input file name like this:

2021-04-02 10:00:00 | INFO | fairseq_cli.interactive | Type the input sentence and press return:

After entering the file name, the program will print translation outputs like:

H-0     -1.0      ▁Nach ▁dem ...
D-0     -1.0      Nach dem ...
P-0     -1.0000 -1.0000 ...

NOTE: Do not input a file that is too large. Normally the model can translate 1~5 normal-length sentences at a time. If the input is too long, the program may crash.

To exit the interactive mode, simply input an invalid file name.

To translate to other languages, remember to replace de with their language codes (in lower case):

Language Code
Deutsch (German) DE / de
French FR / fr
Espanol (Spanish) ES / es
Russian RU / ru
Italiano (Italian) IT / it
Romanian RO / ro
Portuguese PT / pt
Dutch (Netherlands) NL / nl
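For instance, to translate into French instead of German (assuming $CHECKPOINT now points to the downloaded Chimera_EN2FR.pt):

bash run.sh --script chimera/scripts/interactive-en2any-ST.sh \
    --target fr --checkpoint $CHECKPOINT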



Training a Model on MuST-C

Let's first take a look at training an English-to-Deutsch model as an example.

Data Preparation

  0. Prerequisites and Configuration. First check that the pip requirements in requirements.txt and the apt requirements in apt-requirements.txt are met. Some items in the two files may be redundant, but we have not yet had time to check and eliminate them.

For configuration, please set the global variables $WMT_ROOT, $MUSTC_ROOT, and $SAVE_ROOT. These determine where the datasets and checkpoints will be stored. For example:

export MUSTC_ROOT="speech_data/mustc"
export WMT_ROOT="wmt_data"
export SAVE_ROOT="checkpoints"
export target=de
mkdir -p $MUSTC_ROOT $WMT_ROOT $SAVE_ROOT

NOTE: This simple configuration is a prerequisite for most of the following steps. Here export target=de means the translation direction is English to Deutsch.

  1. Download and uncompress the EN-to-DE MuST-C dataset to $MUSTC_ROOT/en-$target. TIP: to speed up uncompressing a large file, you can replace tar xzvf with pigz -dc $TARFILE | tar xvf - , as sketched below.
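A minimal sketch of this tip, assuming the archive was downloaded as MUSTC_v1.0_en-de.tar.gz (the archive name is an assumption; adjust it to whatever file you actually downloaded):

cd $MUSTC_ROOT
TARFILE=MUSTC_v1.0_en-de.tar.gz   # assumed archive name
pigz -dc $TARFILE | tar xvf -     # parallel decompression instead of tar xzvf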

  2. Download the WMT data to $WMT_ROOT/orig via:

bash chimera/prepare_data/download-wmt.sh --wmt14 --data-dir $WMT_ROOT --target $target

This may sometimes be slow, as the connection to statmt.org is not stable in some places. In that case, you can turn to other, faster download sources if possible.

  3. Append the MuST-C text data to $WMT_ROOT, prepare the datasets, and produce a joint spm dictionary:
bash chimera/prepare_data/prepare-wmt-en2any.sh \
    --data-dir $WMT_ROOT --wmt14 --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

NOTE: if the first command executes correctly, you will see this line in the output:

Existing spm dictionary chimera/resources/wmt14-en-de-spm detected. Copying...

If not, the program will still produce a dictionary on the fly and report No existing spm detected. Learning unigram spm on wmt14_en_de/tmp/train.de-en ... This is okay in most cases; the only risk is a potential mismatch with the already trained checkpoints we provide.
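If you want to check beforehand whether the pre-learned spm dictionary ships with your copy of the repository, a small sanity check (assuming the resource path shown in the message above) is:

ls chimera/resources/ | grep "wmt14-en-$target-spm" \
    && echo "pre-learned spm dictionary found" \
    || echo "spm will be learned from scratch"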

Training

To reproduce the results in the last row of Figure 1 in the paper, you can directly use the training scripts as follows.

  4. Pre-training on MT data:
bash run.sh --script chimera/scripts/train-en2any-MT.sh \
    --target $target --dataset wmt14 --max_updates 500000

If you like, you can specify arguments other than the default values. The default setting is --seed 1 --num-gpus 8, which makes the command look like bash run.sh --script chimera/scripts/train-en2$target-MT.sh --seed 1 --num-gpus 8. The value for --num-gpus is recommended to be a power of 2 no larger than 8, e.g. {1, 2, 4, 8}.
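For example, a pre-training run on 4 GPUs with a different seed would look like this (a sketch assuming the arguments above can be combined freely):

bash run.sh --script chimera/scripts/train-en2any-MT.sh \
    --target $target --dataset wmt14 --max_updates 500000 \
    --seed 2 --num-gpus 4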

  5. Fine-tuning on MuST-C data:
bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000

This script moves the MT-pre-trained model from ${MT_SAVE_DIR}/checkpoint_best.pt to ${ST_SAVE_DIR} as an initialization for ST fine-tuning.

Optionally, if you need to resume an ST training run, you can add the --resume argument to the command to avoid overwriting the existing ${ST_SAVE_DIR}/checkpoint_last.pt.
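A resumed fine-tuning run would then look like this (a sketch; it simply appends --resume to the command of step 5):

bash run.sh --script chimera/scripts/train-en2any-ST.sh \
    --target $target --dataset wmt14 --max_updates 150000 \
    --resume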

The scripts in steps 4 and 5 fork a separate background evaluation process while running. The process monitors $MT_SAVE_ROOT or $ST_SAVE_ROOT and evaluates any new checkpoints. Don't worry: it will be killed automatically after training finishes. The exception is when the script is Ctrl-C'ed, in which case you can manually raise the suicide flag with touch chimera/tools/auto-generate-suicide.code to kill the background generation process.

Note that this automatic process only evaluates a single checkpoint (with no averaging), and with a low beam width.

  6. Averaging Checkpoints and Evaluating Them

Suppose the best ST checkpoint is at epoch $BEST_EPOCH, and we want to average 7 checkpoints around it.

python3 chimera/tools/eval-average-checkpoint.py \
    --ckpt-dir $ST_SAVE_ROOT --number-of-ckpts 7 \
    --center-of-ckpts $BEST_EPOCH

Other Language Pairs

For the language pairs English-to-{French, Russian, Espanol}, you only need to replace export target=de with {fr, ru, es} in step 0, and then run steps 1~5.
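For example, for English-to-French the configuration of step 0 becomes:

export MUSTC_ROOT="speech_data/mustc"
export WMT_ROOT="wmt_data"
export SAVE_ROOT="checkpoints"
export target=fr   # use ru or es analogously for Russian or Espanol
mkdir -p $MUSTC_ROOT $WMT_ROOT $SAVE_ROOT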

For the language pairs English-to-{Italiano, Portuguese, Dutch, Romanian}, the MT data is different, so we need to modify Steps 2 and 3. All other steps remain unchanged.

English to Romanian

For Romanian, we use WMT16 corpora in our paper.

Step 2 changes to:

bash chimera/prepare_data/download-wmt.sh --wmt16 --data-dir $WMT_ROOT --target ro

Step 3 remains unchanged.

English to {Italiano, Portuguese, Dutch}

These language pairs use OPUS100 as the external MT corpus.

Step 2 changes to:

bash chimera/prepare_data/download-opus100.sh --data-dir $WMT_ROOT

Step 3 changes to

bash chimera/prepare_data/prepare-opus100-en2any.sh \
    --data-dir $WMT_ROOT --original-dev \
    --external mustc --target $target --subword spm
python3 chimera/prepare_data/prep_mustc_data.py \
    --data-root $MUSTC_ROOT --task wave \
    --ignore_fbank80 --joint_spm wmt14-en-$target-spm \
    --languages $target --vocab-type unigram --vocab-size 10000

Actually, only the first command of Step 3 changes.

Evaluating a Checkpoint

You can also manually evaluate the performance of any checkpoint on the MuST-C test set. Suppose the path to your checkpoint is $CHECKPOINT:

target=de bash chimera/generate/generate-mustc-final.sh $CHECKPOINT
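For example, to score the released EN-DE checkpoint from the table above (assuming the MuST-C EN-DE data has already been prepared as in the training section, and the checkpoint was downloaded as in the earlier example):

export CHECKPOINT=checkpoints/Chimera_EN2DE.pt   # path used in the download example above
target=de bash chimera/generate/generate-mustc-final.sh $CHECKPOINT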



License

Part of the code (especially code outside chimera/) is adapted from the FairSeq code base and therefore carries the MIT License of the original code. See NOTICE.md for more details.

Citation

Please cite as:

@article{han2021learning,
  title={Learning Shared Semantic Space for Speech-to-Text Translation},
  author={Han, Chi and Wang, Mingxuan and Ji, Heng and Li, Lei},
  journal={arXiv preprint arXiv:2105.03095},
  year={2021}
}