TTS is a library for advanced Text-to-Speech generation.

Overview

TTS: Text-to-Speech for all.

TTS is a library for advanced Text-to-Speech generation. It's built on the latest research and was designed to achieve the best trade-off among ease of training, speed, and quality. TTS comes with pretrained models and tools for measuring dataset quality, and is already used in 20+ languages for products and research projects.


English Voice Samples and SoundCloud playlist

TTS training recipes

Text-to-Speech paper collection

Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly, so that more people can benefit from it.

Bug Reports: GitHub Issue Tracker
FAQ: TTS/Wiki
Feature Requests & Ideas: GitHub Issue Tracker
Usage Questions: Discourse Forum
General Discussion: Discourse Forum and Matrix Channel

Links and Resources

Installation: TTS/README.md
Tutorials and Examples: TTS/Wiki
Released Models: TTS/Wiki
Docker Image: Repository by @synesthesiam
Demo Server: TTS/server
Running TTS on Terminal: TTS/README.md
How to contribute: TTS/README.md

TTS Performance

"Mozilla*" and "Judy*" are our models. Details...

Features

  • High performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on console and Tensorboard.
  • Support for multi-speaker TTS.
  • Efficient Multi-GPUs training.
  • Ability to convert PyTorch models to Tensorflow 2.0 and TFLite for inference.
  • Released models in PyTorch, Tensorflow and TFLite.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Demo server for model testing.
  • Notebooks for extensive model benchmarking.
  • Modular (but not too much) code base enabling easy testing for new ideas.

Implemented Models

Text-to-Spectrogram

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog

Speaker Encoder

Vocoders

You can also help us implement more models. Some TTS related work can be found here.

Install TTS

TTS supports python >= 3.6, <3.9.

If you are only interested in synthesizing speech with the released TTS models, installing from PyPI is the easiest option.

pip install TTS

If you plan to code or train models, clone TTS and install it locally.

git clone https://github.com/mozilla/TTS
pip install -e .

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- distribute.py              (train your TTS model using Multiple GPUs.)
      |- compute_statistics.py      (compute dataset statistics for normalization.)
      |- convert*.py                (convert target torch model to TF.)
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- tf/              (Tensorflow 2 utilities and model implementations)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)

Sample Model Output

Below you can see the Tacotron model output after 16K iterations with batch size 32 on the LJSpeech dataset.

"Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning."

Audio examples: soundcloud


Datasets and Data-Loading

TTS provides a generic dataloader that is easy to use with your custom dataset. You just need to write a simple function to format the dataset. Check datasets/preprocess.py to see some examples. After that, you need to set the dataset fields in config.json.
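
As a rough illustration, a hypothetical formatter could look like the sketch below. The function signature and the [text, wav_path, speaker_name] item layout are assumptions modeled on the LJSpeech-style formatters; treat datasets/preprocess.py as the authoritative reference.

import os

def my_dataset(root_path, meta_file):
    # Hypothetical formatter: reads "wav_id|transcription" lines from the
    # metadata file and returns [text, wav_path, speaker_name] items.
    items = []
    speaker_name = "my_speaker"
    with open(os.path.join(root_path, meta_file), "r", encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            wav_id, text = line.strip().split("|", 1)
            wav_path = os.path.join(root_path, "wavs", wav_id + ".wav")
            items.append([text, wav_path, speaker_name])
    return items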

Some of the public datasets to which we have successfully applied TTS:

Example: Synthesizing Speech on Terminal Using the Released Models.

After the installation, TTS provides a CLI interface for synthesizing speech using pre-trained models. You can either use your own model or the released models under the TTS project.

Listing released TTS models.

tts --list_models

Run a tts and a vocoder model from the released model list. (Simply copy and paste the full model names from the list as arguments for the command below.)

tts --text "Text for TTS" \
    --model_name "<type>/<language>/<dataset>/<model_name>" \
    --vocoder_name "<type>/<language>/<dataset>/<model_name>" \
    --out_path folder/to/save/output/
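
For example, with the LJSpeech Tacotron2-DDC model and HiFi-GAN vocoder (these names are taken from the logs quoted later on this page; run tts --list_models to see what your installed version actually provides):

tts --text "Text for TTS" \
    --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
    --vocoder_name "vocoder_models/en/ljspeech/hifigan_v2" \
    --out_path folder/to/save/output/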

Run your own TTS model (Using Griffin-Lim Vocoder)

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav

Run your own TTS and Vocoder models

tts --text "Text for TTS" \
    --model_path path/to/model.pth.tar \
    --config_path path/to/config.json \
    --out_path output/path/speech.wav \
    --vocoder_path path/to/vocoder.pth.tar \
    --vocoder_config_path path/to/vocoder_config.json

Note: You can use ./TTS/bin/synthesize.py if you prefer running tts from the TTS project folder.

Example: Training and Fine-tuning LJ-Speech Dataset

Here you can find a Colab notebook for a hands-on example of training LJSpeech. Or you can manually follow the guidelines below.

To start with, split metadata.csv into train and validation subsets, metadata_train.csv and metadata_val.csv respectively. Note that for text-to-speech, validation performance can be misleading, since the loss value does not directly measure voice quality to the human ear and it also does not measure the attention module's performance. Therefore, running the model with new sentences and listening to the results is the best way to go.

shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv

To train a new model, you need to define your own config.json with the model details, training configuration and more (check the examples). Then call the corresponding train script.
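
As a hedged sketch, the dataset-related fields in config.json typically look roughly like the following (key names are from memory of this generation of the repo and may differ between versions, so start from the shipped example config):

"datasets": [
    {
        "name": "ljspeech",
        "path": "/path/to/LJSpeech-1.1/",
        "meta_file_train": "metadata_train.csv",
        "meta_file_val": "metadata_val.csv"
    }
]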

For instance, in order to train a tacotron or tacotron2 model on LJSpeech dataset, follow these steps.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json

To fine-tune a model, use --restore_path.

python TTS/bin/train_tacotron.py --config_path TTS/tts/configs/config.json --restore_path /path/to/your/model.pth.tar

To continue an old training run, use --continue_path.

python TTS/bin/train_tacotron.py --continue_path /path/to/your/run_folder/

For multi-GPU training, call distribute.py. It runs any provided train script in a multi-GPU setting.

CUDA_VISIBLE_DEVICES="0,1,4" python TTS/bin/distribute.py --script train_tacotron.py --config_path TTS/tts/configs/config.json

Each run creates a new output folder containing the used config.json, model checkpoints and Tensorboard logs.

In case of an error or interrupted execution, if there is no checkpoint yet under the output folder, the whole folder is removed.

You can also use Tensorboard by pointing its --logdir argument to the experiment folder.
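
For example, assuming Tensorboard is installed and using the run folder path from above:

tensorboard --logdir /path/to/your/run_folder/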

Contribution Guidelines

This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the Mozilla Community Participation Guidelines.

  1. Create a new branch.
  2. Implement your changes.
  3. (if applicable) Add Google Style docstrings (see the example after this list).
  4. (if applicable) Implement a test case under the tests folder.
  5. (Optional but preferred) Run the tests.
./run_tests.sh
  6. Run the linter.
pip install pylint cardboardlint
cardboardlinter --refspec master
  7. Send a PR to the dev branch and explain what the change is about.
  8. Let us discuss until we make it perfect :).
  9. We merge it to the dev branch once things look good.
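
For reference, a minimal Google-style docstring looks like this (a generic illustration with hypothetical names, not code from this repo):

def mel_to_wav(mel, sample_rate=22050):
    """Convert a mel spectrogram to a waveform.

    Args:
        mel (np.ndarray): Mel spectrogram of shape [num_mels, num_frames].
        sample_rate (int): Target sampling rate in Hz.

    Returns:
        np.ndarray: The synthesized waveform.
    """
    ...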

Feel free to ping us at any step you need help using our communication channels.

Collaborative Experimentation Guide

If you'd like to use TTS to try a new idea and want to share your experiments with the community, we urge you to follow the guidelines below for better collaboration. (If you have an idea for better collaboration, let us know.)

  • Create a new branch.
  • Open an issue pointing your branch.
  • Explain your idea and experiment.
  • Share your results regularly. (Tensorboard log files, audio results, visuals etc.)

Major TODOs

Acknowledgement

Comments
  • Tacotron2 + WaveRNN experiments


    Tacotron2: https://arxiv.org/pdf/1712.05884.pdf
    WaveRNN: https://github.com/erogol/WaveRNN (forked from https://github.com/fatchord/WaveRNN)

    The idea is to add Tacotron2 as another alternative, if it is really more useful than the current model.

    • [x] Code boilerplate Tacotron2 architecture.
    • [x] Train Tacotron2 and compare results (baseline).
    • [x] Train the current TTS model at a comparable size to T2. (The current TTS model has 7M parameters and Tacotron2 has 28M.)
    • [x] Add TTS-specific architectural changes to T2 and compare with the baseline.
    • [x] Train a WaveRNN vocoder on generated spectrograms.
    • [x] Train a better stopnet. Stopnet sometimes misses the prediction, which leads to unstable outputs. Maybe it is better to use an RNN as in the previous TTS version.
    • [x] Release LJspeech Tacotron 2 model. (soon)
    • [x] Release LJSpeech WaveRNN model. (https://github.com/erogol/WaveRNN)

    Best result so far: https://soundcloud.com/user-565970875/ljspeech-logistic-wavernn

    Some findings:

    • Adding an entropy loss for the attention seems to improve cases where the alignment is hard to learn. It forces the network to learn sparser and noise-free alignment weights.
    # assumes `import torch` and `import numpy as np`; `alignments` are the
    # attention weights and `loss` is the running training loss
    entropy = torch.distributions.Categorical(probs=alignments).entropy()
    entropy_loss = (entropy / np.log(alignments.shape[1])).mean()
    loss += 1e-4 * entropy_loss
    

    Here is the alignment with entropy loss. However, if you keep the loss weight high, then it degrades the model's generalization for new words.

    • Replacing the Prenet with a BatchNorm version enhances the performance quite a lot.
    • A network with a BN Prenet has a harder time learning the attention. It looks like the network needs a level of noise on the autoregressive connection to relate the encoder output to the network output. Otherwise, in teacher forcing mode, the network does not need the encoder output, since it finds the previous prediction frame enough to generate the next frame.
    • Forward attention seems more robust to longer sequences and faster to align. (https://arxiv.org/abs/1807.06736)
    improvement experiment 
    opened by erogol 80
  • Train a better Speaker Encoder


    Our current speaker encoder is trained with only the LibriTTS (100, 360) datasets. However, we can improve its performance using other available datasets (VoxCeleb, LibriTTS-500, Common Voice etc.). It will also increase the performance of our multi-speaker model and make it easier to adapt to new voices.

    I can't really work on this alone due to the recent changes and the amount of work needed, therefore I need some hands here to work together.

    So I can list the TODO as follows and feel free to contribute to any part of it or suggest changes;

    • [x] decide target datasets
    • [x] download and preprocess the datasets
    • [x] write preprocessors for new datasets
    • [x] increase the efficiency of the speaker encoder data-loader.
    • [x] training a model only using Eng datasets.
    • [x] training a model with all the available datasets.
    improvement help wanted discussion 
    opened by erogol 79
  • [Discussion] WaveGrad


    This is not an issue and is more of a discussion. I read the WaveGrad paper today (which may be found here) and listened to the samples here, which sound very good. There seems to be an open source implementation already here with great progress. Has anyone read the paper or used this implementation?

    wontfix discussion 
    opened by george-roussos 76
  • Multi Speaker Embeddings


    Hi @erogol, I've been a bit off the radar for the past month because of vacation and other projects, but now I am back and ready for action! I am looking into how to do multi speaker embeddings, and here's my current plan of action:

    1. Have all preprocessors output items that also have a speaker ID to be used down the line. Formats that do not have explicit speaker ids, i.e. all current preprocessors, would use a uniform ID. This speaker ID must then be passed down by the dataset through the collate function and into the forward pass of the model.

    2. Add speaker embeddings to the model. An additional embedding with configurable number of speakers and embedding dimensionality. The embedding vector is retrieved based on speaker id and then replicated and concatenated to each encoder output. The result is passed to the decoder as before. Here we could also easily ignore speaker embeddings if we only deal with a single speaker. (A minimal sketch of this step follows the list below.)

    3. It might make sense to let speaker embeddings put some constraints on the train/dev/test split, i.e. every speaker in the dev/test set should at least have some examples in the train set, otherwise their embeddings are never learned. I could implement a check for that and issue a warning if this isn't the case.
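
    A minimal sketch of point 2 above, assuming PyTorch and using hypothetical names (an illustration, not code from this repo):

    import torch
    import torch.nn as nn

    class SpeakerConditioning(nn.Module):
        # Hypothetical module: look up a speaker embedding by id and concatenate
        # it to every encoder timestep before passing the result to the decoder.
        def __init__(self, num_speakers, embedding_dim):
            super().__init__()
            self.speaker_embedding = nn.Embedding(num_speakers, embedding_dim)

        def forward(self, encoder_outputs, speaker_ids):
            # encoder_outputs: [batch, time, channels]; speaker_ids: [batch]
            emb = self.speaker_embedding(speaker_ids)                 # [batch, embedding_dim]
            emb = emb.unsqueeze(1).expand(-1, encoder_outputs.size(1), -1)
            return torch.cat([encoder_outputs, emb], dim=-1)          # [batch, time, channels + embedding_dim]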

    Any thoughts or additional hints on this?

    wontfix 
    opened by twerkmeister 51
  • [New-Model] Implement Multilingual Speech Synthesis


    I was wondering if anyone else would be interested by the implementation of this paper in the mozilla/TTS repo : "Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning"

    I think that having the possibility of using code-switching is a huge plus for non-English models, since English is used in everyday life, and not being able to pronounce English words in French, for example, limits the usability of the model (my model can't say "parking").

    Furthermore, I hope that combining this with the new encoder we have trained would maybe allow for voice cloning in language with low resources (or at least have more voices available).

    I'm a beginner when it comes to pytorch but I would love to help implementing this paper although I'm not sure I can do it alone.

    What do you think? Would it be interesting to have that in the repo? Would it be hard to implement? Who would be willing to help?

    Thanks for reading

    wontfix new-model 
    opened by WeberJulian 43
  • [Poll] Should we include WaveRNN in Mozilla TTS ?


    I see a lot of people still use WaveRNN although we released new faster vocoders.

    I am not willing to invest time in it given the much faster alternatives, but you can let us know if you'd like to see WaveRNN as a part of the Mozilla TTS repo.

    Please give a thumbs up or down to this post to have a poll.

    You can also state your comment or reason to have WaveRNN below.

    help wanted poll 
    opened by erogol 40
  • Model Release: Tacotron2 with Discrete Graves Attention - LJSpeech


    Model Link: https://drive.google.com/drive/folders/12Ct0ztVWHpL7SrEbUammGMmDopOKL9X_?usp=sharing

    This model is trained with discrete Graves attention and a BatchNorm prenet. It produces good examples with robust attention alignment without any inference-time tricks. You can even hear breathing effects with this model in between pauses.

    You can also use this TTS model with PWGAN or WaveRNN vocoders. PWGAN provides real-time voice synthesis, and WaveRNN is slower but provides better quality.

    https://github.com/erogol/ParallelWaveGAN https://github.com/erogol/WaveRNN

    (Ignore the small jiggle on the figures caused by TB.)


    model-release 
    opened by erogol 36
  • Parallel_wavegan tensorboard results weird


    I used the dev branch for training PWGAN, then I looked into the Tensorboard results, and it seems that the spectrograms look weird. May I ask whether I did something wrong or missed something?

    I used the original parallel_wavegan_config.json.


    wontfix 
    opened by PPGGG 33
  • Introduce github action for CI


    It seemed to me like Travis-CI checks are not working anymore. I'm aware of the new pricing policy they introduced recently and suspected it might be due to that.

    The CI last ran somewhere mid-october.

    Since this project is hosted on GitHub, I believe their actions feature might be a good fit for the time being. So I started to port the travis tests to the best of my understanding. I hope that is alright.

    You can look at the current state over here: https://github.com/mweinelt/TTS/actions/runs/363907718


    There is currently the following issue, that was introduced in 39c71ee8a98bcbfea242e6b203556150ee64205b:

     ======================================================================
    ERROR: Test if all layers are updated in a basic training cycle
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/runner/work/TTS/TTS/tests/test_wavegrad_train.py", line 36, in test_train_step
        model.compute_noise_level(1000, 1e-6, 1e-2)
    TypeError: compute_noise_level() takes 2 positional arguments but 4 were given
    
    ----------------------------------------------------------------------
    

    I'll happily rebase once this fix has hit the dev branch, so we can check if this works.

    opened by mweinelt 32
  • prenet dropout


    I was using another repo previously, and now I am switching to mozilla TTS;

    according to my experience, the dropout in the decoder prenet is also used in inference; without dropout in inference, the quality is bad (Tacotron 2), which is hard to understand.

    Do you have a similar experience, and why?

    experiment 
    opened by xinqipony 32
  • Multi-speaker Tacotron model training from scratch


    Hi,

    I'm trying to train a multi-speaker Tacotron model from scratch using VCTK + LibriTTS databases. The model trains fine until about 50K global steps but after that I start running into "CUDA out of memory", "NaN loss with key=decoder_coarse_loss", or "NaN loss with key=decoder_loss" errors. I tried reducing batch sizes, limiting input sequence lengths, and/or reducing learning rate but those didn't seem to help. I also tried training from scratch using VCTK only and ended up with similar errors. I'm training on a single Titan X GPU with 12GB memory. I didn't want to try multi-gpu training yet so I wonder if I should be setting some parameters differently in the config file. Any suggestions? Also, can someone explain the following parameters and how they should be set for single GPU training? Or, should I simply avoid single GPU training?

    "num_loader_workers": 4,        // number of training data loader processes. Don't set it too big. 4-8 are good values.                                                                                 
    "num_val_loader_workers": 4,    // number of evaluation data loader processes.                                                                                                                          
    "batch_group_size": 4,  //Number of batches to shuffle after bucketing. 
    

    Thanks!

    Additional info:

    • My branch is based on commit ea976b0543c7fa97628c41c4a936e3113896d18a
    • Config file attached
    • Tensorboard loss plots, attention alignments, output spectrograms, Griffin-Lim synthesized audio look/sound as expected before running into these errors
    • As far as I can tell, the errors occur pretty randomly. It could continue training a couple of thousands steps after 50K steps or fail after 500 steps. I don't also see any specific input files triggering these errors in a consistent manner.
    opened by oytunturk 25
  • error in --list_speaker_idxs


    Hello. I've installed tts via pip

    tts --list_speaker_idxs generates the following error:

     > Available speaker ids: (Set --speaker_idx flag to one of these values to use the multi-speaker model.
    Traceback (most recent call last):
      File "/home/user/.local/bin/tts", line 8, in <module>
        sys.exit(main())
      File "/home/user/.local/lib/python3.10/site-packages/TTS/bin/synthesize.py", line 333, in main
        print(synthesizer.tts_model.speaker_manager.name_to_id)
    AttributeError: 'NoneType' object has no attribute 'name_to_id'
    
    opened by 0x199x 0
  • Error in conversion from Torch to TF model


    Hi, I have been using convert_tacotron2_torch_to_tf.py for conversion of a downloaded Tacotron model to the TF version, but I ran into an error:

    AssertionError: [!] weight shapes does not match: decoder/while/attention/query_layer/linear_layer/kernel:0 vs decoder.attention.query_layer.weight --> (1024, 128) vs (128, 1024)

    I think it is a bug in the conversion code. Would you please help me to solve the issue?

    Neda

    opened by nfaraji2002 0
  • short word with server no finish


    Sometimes, if the input is short, for example "i'm ironman", the wave file is not short and the result is: "i'm ironmannneeeueueneeueueeneeuheuuueuehhahahhhhhahhhhaahhhhahhahanuuuuuuuuhhh", finishing in an imitation of a motorcycle.

    opened by greatAznur 0
  • Tacotron (2?) based models appear to be limited to rather short input


    Running tts --text on some meaningful sentences results in the following output:

    $ tts --text "An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future."
     > tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
     > vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
     > Using model: Tacotron2
     > Model's reduction rate `r` is set to: 1
     > Vocoder Model: hifigan
     > Generator Model: hifigan_generator
     > Discriminator Model: hifigan_discriminator
    Removing weight norm...
     > Text: An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4). The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.
     > Text splitted to sentences.
    ['An important event is the scheduling that periodically raises or lowers the CPU priority for each process in the system based on that process’s recent CPU usage (see Section 4.4).', 'The rescheduling calculation is done once per second.', 'The scheduler is started at boot time, and each time that it runs, it requests that it be invoked again 1 second in the future.']
       > Decoder stopped with `max_decoder_steps` 500
       > Decoder stopped with `max_decoder_steps` 500
     > Processing time: 52.66666388511658
     > Real-time factor: 3.1740607061125763
     > Saving output to tts_output.wav
    

    The audio file is truncated with respect to the text. If I hack the config file at TTS/tts/configs/tacotron_config.py to have a larger max_decoder_steps value, the output does seem to successfully get longer, but I'm not sure how safe this is.

    Are there any better solutions? Should I use a different model?

    opened by deliciouslytyped 10
Releases: v0.0.9