A Chinese-to-English Neural Machine Translation Project

Overview

ZH-EN NMT: Chinese-to-English Neural Machine Translation

This project is inspired by Stanford's CS224N NMT Project

Dataset used in this project: News Commentary v14

Intro

This is primarily a learning project to familiarize myself with PyTorch, machine translation, and NLP model training.

To investigate how various setups of the recurrent layer affect final performance, I compared the training efficiency and effectiveness of different encoder RNN configurations, changing one feature at a time while holding all other parameters fixed:

  • RNN types

    • GRU
    • LSTM
  • Activation Functions on Output Layer

    • Tanh
    • ReLU
    • LeakyReLU
  • Number of layers

    • single layer
    • double layer

Code Files

_/
├─ utils.py             # utilities
├─ vocab.py             # generate vocab
├─ model_embeddings.py  # embedding layer
├─ nmt_model.py         # NMT model definition
└─ run.py               # training and testing

Good Translation Examples

  • source: 相反,这意味着合作的基础应当是共同的长期战略利益,而不是共同的价值观。

    • target: Instead, it means that cooperation must be anchored not in shared values, but in shared long-term strategic interests.
    • translation: On the contrary, that means cooperation should be a common long-term strategic interests, rather than shared values.
  • source: 但这个问题其实很简单: 谁来承受这些用以降低预算赤字的紧缩措施的冲击。

    • target: But the issue is actually simple: Who will bear the brunt of measures to reduce the budget deficit?
    • translation: But the question is simple: Who is to bear the impact of austerity measures to reduce budget deficits?
  • source: 上述合作对打击恐怖主义、贩卖人口和移民可能发挥至关重要的作用。

    • target: Such cooperation is essential to combat terrorism, human trafficking, and migration.
    • translation: Such cooperation is essential to fighting terrorism, trafficking, and migration.
  • source: 与此同时, 政治危机妨碍着政府追求艰难的改革。

    • target: At the same time, political crisis is impeding the government’s pursuit of difficult reforms.
    • translation: Meanwhile, political crises hamper the government’s pursuit of difficult reforms.

Preprocessing

Preprocessing Colab notebook

  • using jieba to separate Chinese words by spaces
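
    A minimal sketch of this step (jieba's default cut mode; the sample sentence is taken from the examples above):

    ```python
    import jieba

    # Segment a Chinese sentence into words, then re-join with spaces so the
    # corpus looks like space-delimited text for the downstream tooling.
    line = "与此同时,政治危机妨碍着政府追求艰难的改革。"
    spaced = " ".join(jieba.cut(line))
    print(spaced)  # e.g. "与此同时 , 政治危机 妨碍 着 政府 追求 艰难 的 改革 。"
    ```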

Generate Vocab From Training Data

  • Input: training data of Chinese and English

  • Output: a vocab file containing mappings from Chinese and English (sub)words to ids -- a limited-size vocabulary is selected using SentencePiece (essentially Byte Pair Encoding over character n-grams) to cover around 99.95% of the training data
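
    A sketch of how the SentencePiece step might look; the paths and vocab_size below are illustrative (the real settings live in vocab.py), while character_coverage mirrors the ~99.95% coverage target:

    ```python
    import sentencepiece as spm

    # Train a BPE model over the training text; the vocabulary is capped at a
    # fixed size while covering ~99.95% of the corpus.
    spm.SentencePieceTrainer.train(
        input="train.zh",            # hypothetical path to the Chinese training side
        model_prefix="src_bpe",
        model_type="bpe",
        vocab_size=21000,            # illustrative cap on vocabulary size
        character_coverage=0.9995,
    )

    sp = spm.SentencePieceProcessor(model_file="src_bpe.model")
    print(sp.encode("政治 危机 妨碍 着 政府", out_type=str))  # subword pieces
    print(sp.encode("政治 危机 妨碍 着 政府", out_type=int))  # their ids
    ```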

Model Definition

  • a Seq2Seq model with attention

    (Image: a Seq2Seq model with attention, from the book Dive into Deep Learning)

    • Encoder
      • A Recurrent Layer
    • Decoder
      • LSTMCell (hidden_size=512)
    • Attention
      • Multiplicative Attention
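
    For reference, here is a minimal sketch of multiplicative (Luong-style "general") attention, assuming a bidirectional encoder output of width 2*hidden_size and a decoder LSTMCell state of width hidden_size; the names are illustrative rather than the exact ones in nmt_model.py:

    ```python
    import torch
    import torch.nn as nn

    class MultiplicativeAttention(nn.Module):
        def __init__(self, enc_dim: int, dec_dim: int):
            super().__init__()
            self.W = nn.Linear(enc_dim, dec_dim, bias=False)  # projects encoder states

        def forward(self, enc_hiddens, dec_state, enc_masks=None):
            # score[b, t] = dec_state[b] · (W enc_hiddens[b, t])
            proj = self.W(enc_hiddens)                                   # (batch, src_len, dec_dim)
            scores = torch.bmm(proj, dec_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
            if enc_masks is not None:
                scores = scores.masked_fill(enc_masks, float("-inf"))    # ignore <pad> positions
            alpha = torch.softmax(scores, dim=1)                         # attention weights
            context = torch.bmm(alpha.unsqueeze(1), enc_hiddens).squeeze(1)  # (batch, enc_dim)
            return context, alpha

    # toy shapes: batch 32, src_len 20, bidirectional encoder 2*512, decoder 512
    attn = MultiplicativeAttention(enc_dim=1024, dec_dim=512)
    context, alpha = attn(torch.randn(32, 20, 1024), torch.randn(32, 512))
    ```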

Training And Testing Results

Training Colab notebook

  • Hyperparameters:
    • Embedding Size & Hidden Size: 512
    • Dropout Rate: 0.25
    • Starting Learning Rate: 5e-4
    • Batch Size: 32
    • Beam Size for Beam Search: 10
  • NOTE: The BLEU scores here are calculated on the test set, so they can only be used to compare the relative effectiveness of models trained on this data
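
A corpus-level BLEU score of the kind reported below can be computed along these lines; using sacrebleu is an assumption here, and the project may well use its own BLEU script:

```python
import sacrebleu

# One hypothesis per test sentence; refs holds one reference stream,
# aligned index-by-index with the hypotheses.
hyps = ["Such cooperation is essential to fighting terrorism, trafficking, and migration."]
refs = [["Such cooperation is essential to combat terrorism, human trafficking, and migration."]]
print(sacrebleu.corpus_bleu(hyps, refs).score)
```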

For Experiment

  • Dataset: the dataset is split randomly into a training set (~260,000 pairs), a validation set (~20,000), and a test set (~20,000); the splits are identical for each experiment group (a minimal split sketch follows this list)
  • Max Number of Iterations: 50000
  • NOTE: I've tried a vanilla RNN (nn.RNN) in various configurations, but its BLEU score turned out to be extremely low (the absence of residual connections might be the issue)
    • I decided not to include it in the comparison until the issue is resolved
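
A minimal sketch of such a split, assuming one tab-separated sentence pair per line; the file name is illustrative, and the fixed seed keeps the splits identical across experiment groups:

```python
import random

# Shuffle the sentence pairs once, then carve off the test and validation
# sets; the remaining ~260k pairs form the training set.
random.seed(0)
with open("news-commentary-v14.en-zh.tsv", encoding="utf-8") as f:
    pairs = f.readlines()
random.shuffle(pairs)
test, valid, train = pairs[:20000], pairs[20000:40000], pairs[40000:]
print(len(train), len(valid), len(test))
```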
Model                                                    Training Time (sec)   BLEU Score on Test Set
A. Bidirectional 1-Layer GRU with Tanh                   5158.99               14.26
B. Bidirectional 1-Layer LSTM with Tanh                  5150.31               16.20
C. Bidirectional 2-Layer LSTM with Tanh                  6197.58               16.38
D. Bidirectional 1-Layer LSTM with ReLU                  5275.12               14.01
E. Bidirectional 1-Layer LSTM with LeakyReLU(slope=0.1)  5292.58               14.87

(Training and validation perplexity curves for each model were shown as plots in the original table.)

Current Best Version

Bidirectional 2-Layer LSTM with Tanh, 1024 embed_size & hidden_size, trained for 11517.19 sec (44000 iterations), BLEU score 17.95

Model        Training Time (sec)   BLEU Score on Test Set
Best Model   11517.19              17.95

Analysis

  • LSTM tends to perform better than GRU (it has an extra set of parameters)
  • Tanh tends to be better, since less information is lost through the activation
  • Making the LSTM deeper (more layers) can improve performance, but it costs more time to train
  • Surprisingly, the training times for A, B, and D are roughly the same
    • the dataset may not be large enough, or the cloud service I used to train the models may not perform consistently

Bad Examples & Case Analysis

  • source: 全球目击组织(Global Witness)的报告记录, 光是2015年就有16个国家的185人被杀。
    • target: A Global Witness report documented 185 killings across 16 countries in 2015 alone.
    • translation: According to the Global eye, the World Health Organization reported that 185 people were killed in 2015.
    • problems:
      • Information Loss: 16 countries
      • Unknown Proper Noun: Global Witness
  • source: 大自然给了足以满足每个人需要的东西, 但无法满足每个人的贪婪
    • target: Nature provides enough for everyone’s needs, but not for everyone’s greed.
    • translation: Nature provides enough to satisfy everyone.
    • problems:
      • Huge Information Loss
  • source: 我衷心希望全球经济危机和巴拉克·奥巴马当选总统能对新冷战的荒唐理念进行正确的评估。
    • target: It is my hope that the global economic crisis and Barack Obama’s presidency will put the farcical idea of a new Cold War into proper perspective.
    • translation: I do hope that the global economic crisis and President Barack Obama will be corrected for a new Cold War.
    • problems:
      • Action Sender And Receiver Exchanged
      • Failed To Translate Complex Sentence
  • source: 人们纷纷猜测欧元区将崩溃。
    • target: Speculation about a possible breakup was widespread.
    • translation: The eurozone would collapse.
    • problems:
      • Significant Information Loss

Means to Improve the NMT Model

  • Dataset
    • The dataset is fairly small, and the model is not trained thoroughly on all of the data
    • Being a native Chinese speaker, I could not understand what some of the source sentences were saying
    • Some target sentences are not informationally complete; they need context to be understood (e.g. the target sentence in the last "Bad Examples" entry)
    • Even for a human, some of the source sentences are too hard to translate
  • Model Architecture
    • CNN & Transformer
    • character-based model
    • Make the model even larger & deeper (... I need GPUs)
  • Tricks that might help
    • Add a proper-noun dictionary to translate unknown proper nouns word-by-word (phrase-by-phrase); see the sketch after this list
    • Initialize the (sub)word embeddings with pretrained embeddings
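
A toy sketch of the proper-noun dictionary idea, seeded with phrases from the bad examples above; a real version would need a much larger dictionary and a more careful matching strategy:

```python
# Map known Chinese proper nouns to fixed English renderings and substitute
# them as a pre-/post-processing step around the NMT model.
PROPER_NOUNS = {
    "全球目击组织": "Global Witness",
    "巴拉克·奥巴马": "Barack Obama",
}

def replace_proper_nouns(sentence: str) -> str:
    for zh, en in PROPER_NOUNS.items():
        sentence = sentence.replace(zh, en)
    return sentence

print(replace_proper_nouns("全球目击组织的报告记录"))  # -> "Global Witness的报告记录"
```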

How To Run

  • Download the dataset you desire, and change every "./zh_en_data" in run.sh to the path where your data is stored
  • To run locally on a CPU (mostly for sanity checks; a CPU cannot train the model in a reasonable time)
    • set up the environment using conda/miniconda: conda env create --file local_env.yml
  • To run on a GPU
    • set up the environment and the training process by following the Colab notebook

Contact

If you have any questions or run into trouble with the code, feel free to contact me via email.
