VQMIVC - Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion

Overview

VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion (Interspeech 2021)


Run VQMIVC on Replicate

Integrated into Hugging Face Spaces with Gradio. See the Gradio Web Demo.

Pre-trained models: google-drive or here | Paper demo

This paper proposes a speech representation disentanglement framework for one-shot/any-to-any voice conversion, which performs conversion across arbitrary speakers with only a single target-speaker utterance as reference. Vector quantization with contrastive predictive coding (VQCPC) is used for content encoding, and mutual information (MI) is introduced as a correlation metric during training. Reducing the inter-dependencies of the content, speaker, and pitch representations in an unsupervised manner yields proper disentanglement of the three.
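
For a concrete picture of how the MI terms enter training: the repo credits CLUB (see Acknowledgements) for MI estimation, so each pairwise MI term can be read as a CLUB-style variational upper bound that the encoders minimize. Below is a minimal PyTorch sketch under that assumption; the class name, layer sizes, and hidden dimension are illustrative, not the repo's actual implementation.

    import torch
    import torch.nn as nn

    class CLUBEstimator(nn.Module):
        """Sketch of a CLUB-style MI upper bound between representations x and y."""
        def __init__(self, x_dim, y_dim, hidden=256):
            super().__init__()
            # Variational net q(y|x): Gaussian with mean / log-variance heads.
            self.mu = nn.Sequential(
                nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
            self.logvar = nn.Sequential(
                nn.Linear(x_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, y_dim), nn.Tanh())

        def loglik(self, x, y):
            # Maximized w.r.t. this estimator so q(y|x) tracks the conditional.
            mu, logvar = self.mu(x), self.logvar(x)
            return (-(y - mu) ** 2 / logvar.exp() - logvar).sum(dim=1).mean()

        def mi_bound(self, x, y):
            # Minimized w.r.t. the encoders: matched pairs minus shuffled pairs.
            mu, logvar = self.mu(x), self.logvar(x)
            positive = (-(y - mu) ** 2 / logvar.exp()).sum(dim=1)          # (N,)
            pairwise = (-(y.unsqueeze(0) - mu.unsqueeze(1)) ** 2
                        / logvar.exp().unsqueeze(1)).sum(dim=2)            # (N, N)
            return (positive - pairwise.mean(dim=1)).mean()

Training would then alternate: a few steps maximizing loglik to update the estimators, and one step adding a weighted mi_bound per representation pair (content-speaker, content-pitch, pitch-speaker) to the main VC losses.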

📢 Update

Many thanks to ericguizzo & AK391!

  1. A Replicate demo is available online, so you can play with our pre-trained models there. Have fun!
  2. VQMIVC can now be trained and tested inside a Docker environment via Cog.
  3. A Gradio Web Demo is also available, another way to try the model online!

TODO

  • Add more details on how to use Cog for development

Requirements

Python 3.6 is used. Optionally, install apex to speed up training; the other requirements are listed in 'requirements.txt':

pip install -r requirements.txt

Quick start with pre-trained models

ParallelWaveGAN is used as the vocoder, so please install ParallelWaveGAN first to try the pre-trained models:
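
For example, installing it from PyPI should work (one of the routes its README describes; see the ParallelWaveGAN repo for alternatives):

    pip install parallel_wavegan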

python convert_example.py -s {source-wav} -r {reference-wav} -c {converted-wavs-save-path} -m {model-path} 

For example:

python convert_example.py -s test_wavs/p225_038.wav -r test_wavs/p334_047.wav -c converted -m checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt 

The converted wav is saved in the 'converted' directory.
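
To convert several (source, reference) pairs in one go, a small wrapper around the same script works. This is a convenience sketch, not a script shipped with the repo; the pair list is hypothetical.

    # batch_convert.py - hypothetical helper, not part of the repo
    import subprocess
    from pathlib import Path

    CKPT = "checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt"
    PAIRS = [  # (source wav, reference wav)
        ("test_wavs/p225_038.wav", "test_wavs/p334_047.wav"),
    ]

    out_dir = Path("converted")
    out_dir.mkdir(exist_ok=True)
    for src, ref in PAIRS:
        # Same CLI as shown above; check=True stops on the first failure.
        subprocess.run(
            ["python", "convert_example.py", "-s", src, "-r", ref,
             "-c", str(out_dir), "-m", CKPT],
            check=True,
        )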

Training and inference:

  • Step1. Data preparation & preprocessing
  1. Put VCTK corpus under directory: 'Dataset/'

  2. Training/testing speaker split & feature (mel+lf0) extraction (a feature-extraction sketch follows this step list):

     python preprocess.py
    
  • Step2. Model training:
  1. Training with mutual information minimization (MIM):

     python train.py use_CSMI=True use_CPMI=True use_PSMI=True
    
  2. Training without MIM:

     python train.py use_CSMI=False use_CPMI=False use_PSMI=False 
    
  • Step3. Model testing:
  1. Put PWG vocoder under directory: 'vocoder/'

  2. Inference with model trained with MIM:

     python convert.py checkpoint=checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/model.ckpt-500.pt
    
  3. Inference with model trained without MIM:

     python convert.py checkpoint=checkpoints/useCSMIFalse_useCPMIFalse_usePSMIFalse_useAmpTrue/model.ckpt-500.pt
    

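As promised in Step 1, here is roughly what the mel+lf0 feature extraction looks like. This is a hedged sketch built on librosa and pyworld (the repo's extraction is based on espnet and pyworld); the sampling rate, FFT/hop sizes, and mel-bin count below are assumptions, not necessarily preprocess.py's exact settings.

    import numpy as np
    import librosa
    import pyworld

    SR, N_FFT, HOP = 16000, 400, 160  # assumed: 25 ms window, 10 ms hop

    wav, _ = librosa.load("test_wavs/p225_038.wav", sr=SR)

    # Log-mel spectrogram, shape (frames, 80).
    mel = librosa.feature.melspectrogram(
        y=wav, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=80)
    logmel = np.log(np.maximum(mel, 1e-10)).T

    # F0 via pyworld (DIO + StoneMask refinement), matched to the hop size.
    x = wav.astype(np.float64)
    f0, t = pyworld.dio(x, SR, frame_period=HOP / SR * 1000)
    f0 = pyworld.stonemask(x, f0, t, SR)

    # Log-F0: log-scale the voiced frames, leave unvoiced frames at 0.
    lf0 = np.zeros_like(f0)
    lf0[f0 > 0] = np.log(f0[f0 > 0])
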
Citation

If the code is used in your research, please Star our repo and cite our paper:

@inproceedings{wang21n_interspeech,
  author={Disong Wang and Liqun Deng and Yu Ting Yeung and Xiao Chen and Xunying Liu and Helen Meng},
  title={{VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-Shot Voice Conversion}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={1344--1348},
  doi={10.21437/Interspeech.2021-283}
}

Acknowledgements:

  • The content encoder is borrowed from VectorQuantizedCPC, which also inspired the within-utterance negative sampling for CPC;
  • The speaker encoder is borrowed from AdaIN-VC;
  • The decoder is modified from AutoVC;
  • Estimation of mutual information is modified from CLUB;
  • Speech feature extraction is based on espnet and pyworld.
Comments
  • The issue of vocoder during inference

    Hi Sir,

    First of all, thank you for sharing.

    Now I am running into an issue with inference, as shown below:

    Traceback (most recent call last):
      File "convert.py", line 201, in <module>
        convert(config)
      File "convert.py", line 194, in convert
        subprocess.call(cmd)
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 287, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 729, in __init__
        restore_signals, start_new_session)
      File "/home/tts/xxxx/softWare/miniConda/miniconda3/envs/ft_tts/lib/python3.6/subprocess.py", line 1364, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    PermissionError: [Errno 13] Permission denied: 'parallel-wavegan-decode'

    What can I do to solve this problem? I have already put the pre-trained vocoder in the vocoder directory.

    (tts) [[email protected] VQMIVC]$ ll vocoder/
    total 4
    lrwxrwxrwx 1 xxxx xxxx 53 Jun 25 10:50 checkpoint-3000000steps.pkl -> ../pretrain_model/vocoder/checkpoint-3000000steps.pkl
    lrwxrwxrwx 1 xxxx xxxx 36 Jun 25 10:50 config.yml -> ../pretrain_model/vocoder/config.yml
    -rw-r--r-- 1 xxxx xxxx 39 Jun 24 17:53 README.md
    lrwxrwxrwx 1 xxxx xxxx 34 Jun 25 10:50 stats.h5 -> ../pretrain_model/vocoder/stats.h5

    opened by TaoTaoFu 14
  • How to solve this problem?

    Dear PhD Wang: when I run convert.py, I hit the problem below and cannot solve it. Can you give me some suggestions? Thank you very much! Error:

    Traceback (most recent call last):
      File "convert.py", line 168, in convert
        '--feats-scp', f'{str(out_dir)}/feats.1.scp', '--outdir', str(out_dir)])
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 287, in call
        with Popen(*popenargs, **kwargs) as p:
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 729, in __init__
        restore_signals, start_new_session)
      File "/home/liyp/anaconda3/envs/xll/lib/python3.6/subprocess.py", line 1364, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'parallel-wavegan-decode': 'parallel-wavegan-decode'

    opened by Hu-chengyang 9
  • preprocess issue

    After downloading the VCTK corpus and copying the files under 'Dataset/' (and creating a directory 'Dataset/VCTK-Corpus/' that includes the file speaker-info.txt), I ran preprocess.py and got the following result. How can I fix this?

    (voice-clone) C:\Python\VQMIVC>python preprocess.py
    all_spks: ['257', '294', '304', '297', '226', '282', '247', '330', '361', '252', '293', '306', '340', '231', '268', '283', '243', '334', '315', '269', '285', '310', '230', '311', '374', '307', '286', '323', '245', '227', '239', '240', '363', '284', '251', '318', '246', '265', '244', '228', '333', '276', '255', '225', '308', '260', '339', '312', '336', '347', '345', '258', '335', '270', '376', '237', '316', '326', '364', '273', '263', '259', '267', '292', '232', '229', '254', '264', '287', '278', '236', '317', '272', '233', '234', '248', '249', '305', '299', '281', '302', '329', '262', '351', '288', '298', '250', '343', '256', '300', '275', '341', '279', '277', '271', '241', '303', '274', '313', '266', '301', '253', '261', '314', '295', '360', '362', '238']
    len(spk_wavs): 0
    len(spk_wavs): 0
    len(spk_wavs): 0
    . . .
    len(spk_wavs): 0
    len(spk_wavs): 0
    len(spk_wavs): 0
    0 0 0
    extract log-mel...
    0it [00:00, ?it/s]
    normalize log-mel...
    Traceback (most recent call last):
      File "preprocess.py", line 141, in <module>
        mels = np.concatenate(mels, 0)
      File "<__array_function__ internals>", line 6, in concatenate
    ValueError: need at least one array to concatenate

    opened by Chuk101 8
  • Question About Batch Size, number of Epochs and Learning Rate

    Hi @Wendison, I've already trained some models (with VCTK subsets and external speakers) and noticed that a bigger batch size doesn't necessarily result in better audio quality for the same 500 epochs; in some cases, audio quality can be worse (for male references). My question is:

    Do you have any report or experiments with different batch sizes, numbers of epochs (why 500 and not 600 or more?), and learning rates for different batch sizes?

    If not, what advice could you provide regarding the batch size and the number of epochs? The bigger the better?

    For complex data like this there should be an improvement with bigger batches, but the learning rate or number of epochs would need to be tuned.

    Thank You.

    opened by jlmarrugom 6
  • about the model question

    I tried to train the model again after I finished the preprocessing. I used the model I trained myself for voice conversion, but I got nothing. Could you give me some advice? I have done everything following the README.

    opened by Mike66666 4
  • Add Docker environment & web demo

    Hey @Wendison! 👋

    I really liked your implementation and it works very well with any kind of voice! Really fun :)

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! View it here: https://replicate.ai/wendison/vqmivc

    Claim your page here so you can edit it, and we'll feature it on our website and tweet about it too.

    In case you're wondering who I am, I'm from Replicate, where we're trying to make machine learning reproducible. We got frustrated that we couldn't run all the really interesting ML work being done. So, we're going round implementing models we like. 😊

    opened by ericguizzo 4
  • The CPCLoss

    I read the related papers, but I still do not understand the CPC loss computation.

        # The positive (true future) frame is scored at class index 0 of f,
        # with the negatives at the remaining indices, so every target is 0.
        labels = torch.zeros(
            self.n_speakers_per_batch * self.n_utterances_per_speaker, length,
            dtype=torch.long, device=z.device
        )

        # InfoNCE then reduces to cross-entropy against these all-zero targets.
        loss = F.cross_entropy(f, labels)

    Can someone explain it to me? Why are all-zero labels and cross_entropy used here?
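
    One way to see it (a sketch, not the repo's code; the shapes are hypothetical): if the score tensor stacks the positive candidate at class index 0 followed by the negatives, InfoNCE is exactly cross-entropy with all-zero targets:

        import torch
        import torch.nn.functional as F

        # 8 segments, 1 positive + 16 negatives = 17 classes, 128 prediction steps.
        f = torch.randn(8, 17, 128)                     # similarity scores
        labels = torch.zeros(8, 128, dtype=torch.long)  # class 0 = the positive
        loss = F.cross_entropy(f, labels)               # -log softmax(f)[positive]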

    opened by Liujingxiu23 4
  • NameError: name 'amp' is not defined (train.py, line 407, in train_model)

    I am getting the error below:

      File "train.py", line 407, in train_model
        optimizer, optimizer_cs_mi_net, optimizer_ps_mi_net, optimizer_cp_mi_net, scheduler, amp, epoch, checkpoint_dir, cfg)
    NameError: name 'amp' is not defined

    opened by geni120 3
  • Improper converted audio when source = reference

    Hi, I tried using python convert_example.py -s test_wavs/jane3.wav -r test_wavs/jane3.wav -c converted -m checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt to check what the results are like when the source and reference audio are the same. But the output is mostly silent. Am I missing something? To reproduce the results, the audio files and vocoder are uploaded here.

    Source and reference: https://drive.google.com/file/d/1bPAQ9UaKJF1gNNCtkeDmySxLv_uXW1HN/view?usp=sharing
    Converted: https://drive.google.com/file/d/1TmxjpHx3WY3nKRwy5lz04LWfKAo69qwW/view?usp=sharing
    CC: @Wendison

    opened by vishalbhavani 3
  • What is the "parallel-wavegan-decode" in cmd = ['parallel-wavegan-decode', '--checkpoint', ...]? Is it a folder?

    Thanks for your code, but I have a problem. In the code cmd = ['parallel-wavegan-decode', '--checkpoint', ...]: is 'parallel-wavegan-decode' a folder? If so, what does this folder contain? My system told me it couldn't be found.

    opened by DIO385 2
  • Where can I get the silence-trimmed VCTK corpus?

    Hi,

    Thank you for sharing your code! I wonder where I can get the silence-trimmed VCTK corpus. The VCTK dataset I have only contains *.wav files, while your preprocess.py script seems to expect all audio files in *.flac format, so I cannot run the script.

    opened by Aria-K-Alethia 2
  • Voice conversion does not happen after fine-tuning the pretrained model

    Hi @Wendison

    Thank you so much for this great work.

    I fine-tuned (resumed) the pretrained model (use_CSMI=True use_CPMI=True use_PSMI=True) on the indicTTS dataset (20 speakers, each with 1 hour of audio).

    The model was trained for 1000 epochs.

    Quality gets better for the target speaker, but the source speaker's modulation is not converted.

    Can you please give your suggestions?

    Thanks

    opened by MuruganR96 0
  • Training for Indian Multi-Speaker/Multi-lingual VC

    Hi @Wendison, thank you so much for your excellent work. Very nice paper.

    Seeing your replies on the issues below helped motivate me to go further.

    https://github.com/Wendison/VQMIVC/issues/14#issuecomment-937900528

    https://github.com/Wendison/VQMIVC/issues/17#issuecomment-971136691

    I am trying Common Voice Indian English multi-speaker data together with VCTK for training. I need a few suggestions from you.

    Steps:

    1. I add Common Voice Indian English multi-speakers (40 speakers, each with 30 minutes of data) along with the 109 VCTK speakers, and start training with use_CSMI=True use_CPMI=True use_PSMI=True.

    2. After the model is trained to good accuracy, I will fine-tune on other Indian regional languages from Common Voice (Tamil, Hindi, Urdu, etc.).

    Is this approach good?

    @Wendison, I kindly request your suggestions. Thanks.

    opened by MuruganR96 0
  • What do z_dim and c_dim stand for?

    Dear PhD: Could you tell me what z_dim: 64 and c_dim: 256 in config/model/default stand for? And what does n_embeddings: 512 in config/model/default stand for? Thank you very much.

    opened by Hu-chengyang 4
  • Training Loss Abnormal

    @andreasjansson @Wendison Hello, sorry to interrupt you! I'm new to voice models. I trained the model on the VCTK-Corpus-0.92.zip dataset with "python3 train.py use_CSMI=True use_CPMI=True use_PSMI=True" on an NVIDIA V100S. But after 65 epochs, the training losses are as follows: [training-loss image]. Could you give me some advice? Thank you very much!

    opened by Haoyanlong 3
  • lf0 question about convert phase

    Hi, I wonder why you normalize the F0 series before feeding it to the f0encoder in convert.py, while this kind of F0 normalization isn't used in the preprocessing phase.

    opened by powei-C 3
  • How to solve this problem?

    Dear PhD: I am trying to train a vocoder. I have installed ParallelWaveGAN, and when I run the command run.sh it fails with the traceback below:

    Traceback (most recent call last):
      File "/home/liyp/anaconda3/envs/xll/bin/parallel-wavegan-preprocess", line 11, in <module>
        load_entry_point('parallel-wavegan', 'console_scripts', 'parallel-wavegan-preprocess')()
      File "/data2/hcy/VQMIVC-main/vocoder/ParallelWaveGAN/parallel_wavegan/bin/preprocess.py", line 186, in main
        ), f"{utt_id} seems to have a different sampling rate."

    I find that the sampling rate is 24000 Hz, while the sampling rate for VQMIVC is 16000. Could you tell me how to modify the sampling rate?

    opened by Hu-chengyang 3