BERT score for text generation


Automatic Evaluation Metric described in the paper BERTScore: Evaluating Text Generation with BERT (ICLR 2020).

News:

  • Features to appear in the next version (currently in the master branch):

    • Support 3 ByT5 models
  • Updated to version 0.3.10

    • Support 8 SimCSE models
    • Fix the support of scibert (to be compatible with transformers >= 4.0.0)
    • Add scripts for reproducing some results in our paper (See this folder)
    • Support fast tokenizers in huggingface transformers with --use_fast_tokenizer. Notably, you will get different scores because of the difference in the tokenizer implementations (#106).
    • Fix non-zero recall problem for empty candidate strings (#107).
    • Add Turkish BERT support (#108).
  • Updated to version 0.3.9

    • Support 3 BigBird models
    • Fix bugs for mBART and T5
    • Support 4 mT5 models as requested (#93)
  • Updated to version 0.3.8

    • Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, Blenderbot, etc. Among them, DeBERTa achieves a higher correlation with human scores than RoBERTa (our default) on the WMT16 dataset. The correlations are presented in this Google sheet.
    • Please consider using --model_type microsoft/deberta-xlarge-mnli or --model_type microsoft/deberta-large-mnli (faster) if you want the scores to correlate better with human scores.
    • Add baseline files for DeBERTa models.
    • Add example code to generate baseline files (please see the details).
  • Updated to version 0.3.7

    • Compatible with Huggingface's transformers version >=4.0.0. Thanks to public contributors (#84, #85, #86).
  • See #22 if you want to replicate our experiments on the COCO Captioning dataset.

  • For people in China, downloading pre-trained weights can be very slow. We provide copies of a few models on Baidu Pan.

  • Huggingface's datasets library includes BERTScore in their metric collection.
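
    For example, a minimal sketch through the datasets metric API (newer setups expose the same metric via the evaluate library):

    from datasets import load_metric

    bertscore = load_metric("bertscore")
    results = bertscore.compute(
        predictions=["The cat sat on the mat."],
        references=["A cat was sitting on the mat."],
        lang="en",
    )
    print(results["f1"])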

Previous updates

  • Updated to version 0.3.6
    • Support custom baseline files #74
    • The option --rescale-with-baseline is changed to --rescale_with_baseline so that it is consistent with other options.
  • Updated to version 0.3.5
    • Compatible with Huggingface's transformers >=v3.0.0 and minor fixes (#58, #66, #68)
    • Several improvements related to efficiency (#67, #69)
  • Updated to version 0.3.4
    • Compatible with transformers v2.11.0 now (#58)
  • Updated to version 0.3.3
    • Fixing the bug with empty strings issue #47.
    • Supporting 6 ELECTRA models and 24 smaller BERT models.
    • A new Google sheet for keeping the performance (i.e., Pearson correlation with human judgment) of different models on WMT16 to-English.
    • Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the details).
  • Updated to version 0.3.2
    • Bug fixed: fixing the bug in v0.3.1 when having multiple reference sentences.
    • Supporting multiple reference sentences with our command line tool.
  • Updated to version 0.3.1
    • A new BERTScorer object that caches the model to avoid re-loading it multiple times. Please see our jupyter notebook example for the usage.
    • Supporting multiple reference sentences for each example. The score function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.

Please see release logs for older updates.

Authors: Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi

*: Equal Contribution

Overview

BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.

For an illustration, BERTScore recall matches each reference token to the most similar candidate token and averages the similarities. With reference x = <x_1, ..., x_k>, candidate x̂ = <x̂_1, ..., x̂_l>, and pre-normalized contextual embeddings (so the inner product is a cosine similarity), it can be computed as
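
$$ R_{\mathrm{BERT}} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} x_i^{\top} \hat{x}_j $$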

If you find this repo useful, please cite:

@inproceedings{bert-score,
  title={BERTScore: Evaluating Text Generation with BERT},
  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=SkeHuCVFDr}
}

Installation

  • Python version >= 3.6
  • PyTorch version >= 1.0.0

Install from PyPI with pip:

pip install bert-score

Install the latest unstable version from the master branch on GitHub:

pip install git+https://github.com/Tiiiger/bert_score

Install from source:

git clone https://github.com/Tiiiger/bert_score
cd bert_score
pip install .

and you may test your installation by:

python -m unittest discover

Usage

Python Function

At a high level, we provide a Python function bert_score.score and a Python object bert_score.BERTScorer. The function exposes all supported features, while the scorer object caches the BERT model to facilitate multiple evaluations. Check our demo to see how to use these two interfaces. Please refer to bert_score/score.py for implementation details.
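
A minimal sketch of both interfaces (the toy sentences here are placeholders):

import bert_score

cands = ["The cat sat on the mat."]
refs = ["A cat was sitting on the mat."]

# Function interface: loads the model on each call.
P, R, F1 = bert_score.score(cands, refs, lang="en", verbose=True)
print(F1.mean().item())

# Object interface: loads the model once and reuses it across evaluations.
scorer = bert_score.BERTScorer(lang="en")
P, R, F1 = scorer.score(cands, refs)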

Running BERTScore can be computationally intensive (because it uses BERT :p). Therefore, a GPU is usually necessary. If you don't have access to a GPU, you can try our demo on Google Colab.

Command Line Interface (CLI)

We provide a command-line interface (CLI) for BERTScore in addition to the Python module. You can use the CLI as follows:

  1. To evaluate English text files:

We provide example inputs under ./example.

bert-score -r example/refs.txt -c example/hyps.txt --lang en

You will get the following output at the end:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333

where "roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)" is the hash code.

Starting from version 0.3.0, we support rescaling the scores with pre-computed baseline scores:

bert-score -r example/refs.txt -c example/hyps.txt --lang en --rescale_with_baseline

You will get:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled P: 0.747044 R: 0.770484 F1: 0.759045

This makes the range of the scores larger and more human-readable. Please see this post for details.
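
Concretely, the rescaling described in that post is a linear transformation against a pre-computed baseline b (roughly, the expected score of unrelated sentence pairs for the chosen model and language): a raw score s becomes

$$ \hat{s} = \frac{s - b}{1 - b} $$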

When you have multiple reference sentences, please use

bert-score -r example/refs.txt example/refs2.txt -c example/hyps.txt --lang en

where the -r argument supports an arbitrary number of reference files. Each reference file should have the same number of lines as your candidate/hypothesis file. The i-th line in each reference file corresponds to the i-th line in the candidate file.
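
The Python function accepts the same multi-reference setup as a list of lists of strings; a sketch:

from bert_score import score

cands = ["I like lemons."]
refs = [["I am proud of you.", "I love lemons.", "Go go go."]]

# Each candidate is scored against the closest reference in its inner list.
P, R, F1 = score(cands, refs, lang="en")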

  2. To evaluate text files in other languages:

We currently support the 104 languages in multilingual BERT (full list).

Please specify the two-letter abbreviation of the language. For instance, use --lang zh for Chinese text.

See more options by bert-score -h.

  3. To load your own custom model: please specify the path to the model and the number of layers to use with --model and --num_layers.

bert-score -r example/refs.txt -c example/hyps.txt --model path_to_my_bert --num_layers 9

  4. To visualize matching scores:

bert-score-show --lang en -r "There are two bananas on the table." -c "On the table are two apples." -f out.png

The figure will be saved to out.png.
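
The same visualization is available from Python; a sketch assuming the plot_example helper exported by the package (and its fname argument):

from bert_score import plot_example

cand = "On the table are two apples."
ref = "There are two bananas on the table."

# Plots the token-level cosine-similarity matrix and saves it to out.png.
plot_example(cand, ref, lang="en", fname="out.png")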

Practical Tips

  • Report the hash code (e.g., roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled) in your paper so that people know what setting you use. This is inspired by sacreBLEU. Changes in huggingface's transformers version may also affect the score (See issue #46).
  • Unlike BERT, RoBERTa uses a GPT-2-style tokenizer, which creates additional " " tokens when multiple spaces appear together. It is recommended to collapse the extra spaces with sent = re.sub(r' +', ' ', sent) or sent = re.sub(r'\s+', ' ', sent).
  • Using inverse document frequency (idf) on the reference sentences to weigh word importance may correlate better with human judgment. However, when the set of reference sentences becomes too small, the idf scores become inaccurate/invalid. We therefore make it optional. To use idf, set --idf when using the CLI tool or idf=True when calling the bert_score.score function (see the sketch after this list).
  • When you are low on GPU memory, consider setting a smaller batch_size when calling the bert_score.score function.
  • To use a particular model, set -m MODEL_TYPE when using the CLI tool or model_type=MODEL_TYPE when calling the bert_score.score function.
  • We tune the layer to use based on the WMT16 metric evaluation dataset. You may use a different layer by setting -l LAYER or num_layers=LAYER. To tune the best layer for your custom model, please follow the instructions in the tune_layers folder.
  • Limitation: Because BERT, RoBERTa, and XLM with learned positional embeddings are pre-trained on sentences with a maximum length of 512, BERTScore is undefined between sentences longer than 510 tokens (512 after adding the [CLS] and [SEP] tokens). Longer sentences are truncated. Please consider using XLNet, which supports much longer inputs.
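
A sketch combining the options above; model_type, num_layers, idf, batch_size, and rescale_with_baseline are all keyword arguments of bert_score.score (the layer number below is illustrative; see the Google sheet for tuned values):

from bert_score import score

cands = ["The quick brown fox jumps over the lazy dog."]
refs = ["A quick brown fox jumped over the lazy dog."]

P, R, F1 = score(
    cands,
    refs,
    model_type="microsoft/deberta-xlarge-mnli",  # pick a specific model
    num_layers=40,                               # which layer's embeddings to use (illustrative)
    idf=True,                                    # idf-weight the reference tokens
    batch_size=16,                               # lower this when GPU memory is tight
    lang="en",                                   # used to look up the baseline file
    rescale_with_baseline=True,
)
print(F1.mean().item())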

Default Behavior

Default Model

Language   Default model
en         roberta-large
en-sci     allenai/scibert_scivocab_uncased
zh         bert-base-chinese
tr         dbmdz/bert-base-turkish-cased
others     bert-base-multilingual-cased

Default Layers

Please see this Google sheet for the supported models and their performance.

Acknowledgement

This repo wouldn't be possible without the awesome bert, fairseq, and transformers.

Comments
  • AssertionError: Different number of candidates and references

    Hi,

    I am trying to compute semantic similarity using BERTScore between system and reference summaries that have different numbers of sentences, and I am getting this error: AssertionError: Different number of candidates and references

    I understand that the developer has asserted the condition.

    Is there a way out?

    Thanks Alka

    opened by alkakhurana 13
  • rescale and specify certain model

    Hi, Thank you for making your code available. I used your score before the last update (before multi-refs were possible and before the scorer). I used to get the hash of the model to make sure I always get the same results. With the new update, I'm struggling to find how to set a specific model and also rescale.

    For example, I would like to do this: out, hash_code = score(preds, golds, model_type="roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.5.0)", rescale_with_baseline=True, return_hash=True)

    roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.5.0) is the hash I got from my earlier runs a couple of months ago.

    Appreciate your help Areej

    opened by areejokaili 11
  • Evaluation Corpora

    Thank you for your great work!

    Can you guys share the Machine Translation and Image Captioning corpora that contain references, candidates and human scores along with the BLEU and Bert Score?

    Thank you!

    opened by gmihaila 10
  • [Question] Cross-lingual Score

    Assuming that the embeddings have learned joint language representations (so that cat is close to katze or chat, and hence a sentence like I love eating will be close to Ich esse gerne, as happens in the MUSE or LASER models), would it be possible to evaluate BERTScore between sentences in two different languages?

    opened by loretoparisi 10
  • Running slowly

    Hi, thank you for your nice work and released code. I have tried to run the code, but it is very slow; maybe I should change some settings? Could you give me some advice? Thank you very much~

    opened by HqWu-HITCS 10
  • request for the data in Image Captioning experiment

    Hi, would you mind releasing the data used in the Image Captioning experiment (human judgments of the twelve submission entries from the COCO 2015 Captioning Challenge) in your paper? Thanks a lot!

    opened by ChrisRBXiong 9
  • Version update changes the scores

    Hi there,

    Going from v0.3.4 (transformers=2.11) to v0.3.6 (transformers=3.3.1) shifts the scores abruptly for a given system:

    roberta-large_L17_no-idf_version=0.3.6(hug_trans=3.3.1)-rescaled P: 0.294713 R: 0.474827 F1: 0.376573
    roberta-large_L17_no-idf_version=0.3.4(hug_trans=2.11.0)-rescaled P: 0.459102 R: 0.514197 F1: 0.479780
    
    opened by ozancaglayan 8
  • The download speed is very slow in China

    After I execute bert_score to run the examples, it begins to download files. The first file downloads quickly, but the second file, of size 1.43G, which I suppose is the BERT model file, is very slow, and I am based in China. Is there any way to work around this problem? For example, can I download the BERT models elsewhere first and then use bert_score to run them?

    opened by chncwang 8
  • Create my own Model for Sentence Similarity/Automated Scoring

    I have a Wikipedia dump file as my corpus (which is in Indonesian; I've extracted it and converted it to .txt). How can I fine-tune bert multilingual cased on my corpus with BERTScore so that I can have my own model for a specific task such as sentence similarity or automated short-answer scoring?

    Or maybe I should do this with the original BERT? Thank you so much in advance.

    opened by dhimasyoga16 7
  • Use fine tuned model

    I fine-tuned the BERT model on my PC, then ran bert-score -r comments.txt -c all_tags.txt --model my_model --num_layers 9 to use the fine-tuned model with bert-score, but this error happened.

    opened by seyyedjavadrazavi 6
  • Slow tokenizer is used by default

    First of all, thanks for a useful metric along with the code!

    I've used it to evaluate a big set of internal documents and it takes a while. Upon reviewing your code, I noticed that the slow (Python) tokenizer is used by default (use_fast=False). It would be great either to switch the default to the fast tokenizers (written in Rust) or at least to parametrize it, to avoid hacking the lib code to make it work for a massive set of documents (currently it's a bottleneck, especially on GPU-powered machines).

    Thanks!

    opened by nikitajz 6
  • get_wmt18_seg_result question

    Hello, when I run the get_wmt18_seg_result.py file I get the following results, which do not match the absolute Pearson correlation reported in your paper. Could you explain the results produced by get_wmt18_seg_result.py? How can I obtain the absolute Pearson correlation results reported in the paper?

    (scores rounded to three decimal places)

    model_type                       | metric | cs-en | de-en | et-en | fi-en | ru-en | tr-en | zh-en | avg
    roberta-large                    | P      | 0.383 | 0.546 | 0.395 | 0.294 | 0.351 | 0.293 | 0.249 | 0.359
    roberta-large                    | R      | 0.406 | 0.550 | 0.398 | 0.318 | 0.359 | 0.297 | 0.258 | 0.369
    roberta-large                    | F      | 0.414 | 0.555 | 0.403 | 0.306 | 0.354 | 0.302 | 0.265 | 0.371
    allenai/scibert_scivocab_uncased | P      | 0.323 | 0.506 | 0.348 | 0.219 | 0.308 | 0.222 | 0.220 | 0.307
    allenai/scibert_scivocab_uncased | R      | 0.356 | 0.511 | 0.356 | 0.251 | 0.311 | 0.231 | 0.236 | 0.322
    allenai/scibert_scivocab_uncased | F      | 0.348 | 0.518 | 0.357 | 0.248 | 0.314 | 0.230 | 0.233 | 0.321
    bert-base-chinese                | P      | 0.225 | 0.431 | 0.306 | 0.174 | 0.233 | 0.194 | 0.184 | 0.250
    bert-base-chinese                | R      | 0.280 | 0.450 | 0.324 | 0.214 | 0.281 | 0.204 | 0.195 | 0.278
    bert-base-chinese                | F      | 0.259 | 0.451 | 0.326 | 0.207 | 0.269 | 0.210 | 0.197 | 0.274
    dbmdz/bert-base-turkish-cased    | P      | 0.294 | 0.473 | 0.329 | 0.189 | 0.275 | 0.213 | 0.210 | 0.283
    dbmdz/bert-base-turkish-cased    | R      | 0.323 | 0.484 | 0.335 | 0.231 | 0.288 | 0.213 | 0.217 | 0.299
    dbmdz/bert-base-turkish-cased    | F      | 0.319 | 0.486 | 0.340 | 0.221 | 0.293 | 0.220 | 0.220 | 0.300
    bert-base-multilingual-cased     | P      | 0.338 | 0.506 | 0.359 | 0.230 | 0.308 | 0.230 | 0.229 | 0.314
    bert-base-multilingual-cased     | R      | 0.357 | 0.513 | 0.368 | 0.256 | 0.324 | 0.230 | 0.241 | 0.327
    bert-base-multilingual-cased     | F      | 0.359 | 0.516 | 0.367 | 0.251 | 0.317 | 0.243 | 0.243 | 0.328

    opened by Hou-jing 3
Releases (v0.3.12)
  • v0.3.12(Oct 14, 2022)

  • v0.3.11(Dec 10, 2021)

  • v0.3.10(Aug 5, 2021)

    • Updated to version 0.3.10
      • Support 8 SimCSE models
      • Fix the support of scibert (to be compatible with transformers >= 4.0.0)
      • Add scripts for reproducing some results in our paper (See this folder)
      • Support fast tokenizers in huggingface transformers with --use_fast_tokenizer. Notably, you will get different scores because of the difference in the tokenizer implementations (#106).
      • Fix non-zero recall problem for empty candidate strings (#107).
      • Add Turkish BERT support (#108).
  • v0.3.9(Apr 17, 2021)

  • v0.3.8(Mar 3, 2021)

    • Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, mT5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, Blenderbot, etc. Among them, DeBERTa achieves a higher correlation with human scores than RoBERTa (our default) on the WMT16 dataset. The correlations are presented in this Google sheet.
    • Please consider using --model_type microsoft/deberta-xlarge-mnli or --model_type microsoft/deberta-large-mnli (faster) if you want the scores to correlate better with human scores.
    • Add baseline files for DeBERTa models.
    • Add example code to generate baseline files (please see the details).
  • v0.3.7(Dec 6, 2020)

  • v0.3.6(Sep 3, 2020)

    Updated to version 0.3.6

    • Support custom baseline files #74
    • The option --rescale-with-baseline is changed to --rescale_with_baseline so that it is consistent with other options.
  • v0.3.5(Jul 17, 2020)

  • v0.3.4(Jun 10, 2020)

  • v0.3.3(May 10, 2020)

    • Fixing the bug with empty strings issue #47.
    • Supporting 6 ELECTRA models and 24 smaller BERT models.
    • A new Google sheet for keeping the performance (i.e., Pearson correlation with human judgment) of different models on WMT16 to-English.
    • Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the details).
  • v0.3.2(Apr 18, 2020)

    • Bug fixed: fixing the bug in v0.3.1 when having multiple reference sentences.
    • Supporting multiple reference sentences with our command-line tool
  • v0.3.1(Apr 18, 2020)

    • A new BERTScorer object that caches the model to avoid re-loading it multiple times. Please see our jupyter notebook example for the usage.
    • Supporting multiple reference sentences for each example. The score function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.
  • v0.3.0(Apr 18, 2020)

    • Supporting Baseline Rescaling: we apply a simple linear transformation to enhance the readability of BERTScore using pre-computed "baselines". It has been pointed out (e.g., in #20 and #23) that the numerical range of BERTScore is exceedingly small when computed with RoBERTa models. In other words, although BERTScore correctly distinguishes examples through ranking, the numerical scores of good and bad examples are very similar. We detail our approach in a separate post.
  • v0.2.3(Apr 18, 2020)

    • Supporting DistilBERT (Sanh et al.), ALBERT (Lan et al.), and XLM-R (Conneau et al.) models.
    • Including the version of huggingface's transformers in the hash code for reproducibility
  • v0.2.2(Apr 18, 2020)

    • Bug fixed: when using RoBERTaTokenizer, we now set add_prefix_space=True which was the default setting in huggingface's pytorch_transformers (when we ran the experiments in the paper) before they migrated it to transformers. This breaking change in transformers leads to a lower correlation with human evaluation. To reproduce our RoBERTa results in the paper, please use version 0.2.2.
    • The best number of layers for DistilRoBERTa is included
    • Supporting loading a custom model
  • v0.2.1(Apr 18, 2020)

  • v0.2.0(Apr 18, 2020)

    • Supporting BERT, XLM, XLNet, and RoBERTa models using huggingface's Transformers library
    • Automatically picking the best model for a given language
    • Automatically picking the layer based on a model
    • IDF weighting is not enabled by default, as we show in the new version that the improvement brought by importance weighting is not consistent