
Overview

Jury


A simple tool/toolkit for evaluating NLG (Natural Language Generation) systems, offering various automated metrics. Jury offers a smooth and easy-to-use interface. It uses datasets for the underlying metric computation, and hence adding a custom metric is as easy as adopting datasets.Metric.

The main advantages that Jury offers are:

  • Easy to use for any NLG system.
  • Calculate many metrics at once.
  • Metric calculations are handled concurrently to save processing time.
  • It supports evaluating multiple predictions.

To see more, check the official Jury blog post.

Installation

Through pip,

pip install jury

or build from source,

git clone https://github.com/obss/jury.git
cd jury
python setup.py install

Usage

API Usage

It takes only two lines of code to evaluate generated outputs.

from jury import Jury

jury = Jury()

# Microsoft translator translation for "Yurtta sulh, cihanda sulh." (16.07.2021)
predictions = ["Peace in the dormitory, peace in the world."]
references = ["Peace at home, peace in the world."]
scores = jury.evaluate(predictions, references)
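
Jury also supports multiple candidates and multiple references per item, passed as nested lists. A minimal sketch (the sentences are illustrative; predictions and references are passed as keyword arguments, as required since 2.0.0):

predictions = [
    ["Peace in the dormitory, peace in the world."],
    ["There is a cat playing on the mat.", "A cat plays on the mat."],
]
references = [
    ["Peace at home, peace in the world."],
    ["The cat is playing on the mat.", "The cat plays on the mat."],
]
scores = jury.evaluate(predictions=predictions, references=references)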

Specify the metrics you want to use at instantiation.

jury = Jury(metrics=["bleu", "meteor"])
scores = jury.evaluate(predictions, references)

CLI Usage

You can specify the paths of the predictions file and the references file and get the resulting scores. The lines in the two files should be paired.

jury eval --predictions /path/to/predictions.txt --references /path/to/references.txt --reduce_fn max

If you want to specify metrics and do not want to use the defaults, specify them under the metrics key in a JSON config file.

{
  "predictions": "/path/to/predictions.txt",
  "references": "/path/to/references.txt",
  "reduce_fn": "max",
  "metrics": [
    "bleu",
    "meteor"
  ]
}

Then, you can call jury eval with the --config argument.

jury eval --config path/to/config.json

Custom Metrics

You can use custom metrics by inheriting jury.metrics.Metric; you can see the current metrics on datasets/metrics. The code snippet below gives a brief illustration.

from jury.metrics import Metric

class CustomMetric(Metric):
    def compute(self, predictions, references):
        # Implement the metric computation here and return the score(s).
        pass
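
Metric objects, whether custom ones like the class above or metrics loaded via jury.metrics.load_metric, can be passed to Jury together with metric names. A minimal sketch (meteor is only an example; per the 1.0.0 release notes, metric names and Metric objects can be mixed):

from jury import Jury
from jury.metrics import load_metric

# Mix metric names with Metric objects; a custom Metric instance
# could be included in this list the same way.
metrics = ["bleu", load_metric("meteor")]
jury = Jury(metrics=metrics)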

Contributing

PRs are always welcome :)

Installation

git clone https://github.com/obss/jury.git
cd jury
pip install -e .[develop]

Tests

To run the tests, simply run:

python tests/run_tests.py

Code Style

To check code style,

python tests/run_code_style.py check

To format the codebase,

python tests/run_code_style.py format

License

Licensed under the MIT License.

Comments
  • Facing datasets error

    Hello, after downloading the contents from git and instantiating the object, I get this error:

    /content/image-captioning-bottom-up-top-down
    Traceback (most recent call last):
      File "eval.py", line 11, in <module>
       from jury import Jury 
      File "/usr/local/lib/python3.7/dist-packages/jury/__init__.py", line 1, in <module>
        from jury.core import Jury
      File "/usr/local/lib/python3.7/dist-packages/jury/core.py", line 6, in <module>
        from jury.metrics import EvaluationInstance, Metric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/__init__.py", line 1, in <module>
        from jury.metrics._core import (
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/__init__.py", line 1, in <module>
        from jury.metrics._core.auto import AutoMetric, load_metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/auto.py", line 23, in <module>
        from jury.metrics._core.base import Metric
      File "/usr/local/lib/python3.7/dist-packages/jury/metrics/_core/base.py", line 28, in <module>
        from datasets.utils.logging import get_logger
    ModuleNotFoundError: No module named 'datasets.utils'; 'datasets' is not a package
    

    Can you please check what could be the issue?

    opened by amit0623 8
  • CLI Implementation

    CLI implementation for the package that reads from txt files.

    Draft Usage: jury evaluate --predictions predictions.txt --references references.txt

    NLGEval uses a single prediction and multiple references in a way that you specify multiple references.txt files for multiple references, and similarly on the API.

    My idea is to have a single prediction and reference file including multiple predictions or multiple references. In a single txt file, maybe we can use some sort of special separator like "<sep>" instead of a special char like [",", ";", ":", "\t"]; maybe tab separated would be OK. Wdyt? @fcakyon @cemilcengiz

    help wanted discussion 
    opened by devrimcavusoglu 5
  • BLEU: ndarray reshape error

    Hey, when computing the BLEU score (snippet below), I am facing a reshape error in _compute_single_pred_single_ref.

    Could you assist with the same?

    from jury import Jury
    
    scorer = Jury()
    
    # [2, 5/5]
    p = [
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text'],
            ['dummy text', 'dummy text', 'dummy text', 'dummy text', 'dummy text']
        ]
    
    # [2, 4/2]
    r = [['be looking for a certain office in the building ',
          ' ask the elevator operator for directions ',
          ' be a trained detective ',
          ' be at the scene of a crime'],
         ['leave the room ',
          ' transport the notebook']]
    
    scores = scorer(predictions=p, references=r)
    

    Output:

    Traceback (most recent call last):
      File "/home/axe/Projects/VisComSense/del.py", line 22, in <module>
        scores = scorer(predictions=p, references=r)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 78, in __call__
        score = self._compute_single_score(inputs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/core.py", line 137, in _compute_single_score
        score = metric.compute(predictions=predictions, references=references, reduce_fn=reduce_fn)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/datasets/metric.py", line 404, in compute
        output = self._compute(predictions=predictions, references=references, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/_core/base.py", line 325, in _compute
        result = self.evaluate(predictions=predictions, references=references, reduce_fn=reduce_fn, **eval_params)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 241, in evaluate
        return eval_fn(predictions=predictions, references=references, reduce_fn=reduce_fn, **kwargs)
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 195, in _compute_multi_pred_multi_ref
        score = self._compute_single_pred_multi_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 176, in _compute_single_pred_multi_ref
        return self._compute_single_pred_single_ref(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/metrics/bleu/bleu_for_language_generation.py", line 165, in _compute_single_pred_single_ref
        predictions = predictions.reshape(
      File "/home/axe/VirtualEnvs/pyenv3_8/lib/python3.8/site-packages/jury/collator.py", line 35, in reshape
        return Collator(_seq.reshape(args).tolist(), keep=True)
    ValueError: cannot reshape array of size 20 into shape (10,)
    
    Process finished with exit code 1
    
    bug 
    opened by Axe-- 4
  • Understanding BLEU Score ('bleu_n')

    Hey, how are the different BLEU scores calculated?

    For the given snippet, why are all bleu_n scores identical? And how does this relate to nltk's sentence_bleu (weights)?

    from jury import Jury
    
    scorer = Jury()
    predictions = [
        ["the cat is on the mat", "There is cat playing on the mat"], 
        ["Look!    a wonderful day."]
    ]
    references = [
        ["the cat is playing on the mat.", "The cat plays on the mat."], 
        ["Today is a wonderful day", "The weather outside is wonderful."]
    ]
    scores = scorer(predictions=predictions, references=references)
    
    

    Output:

    {'empty_predictions': 0,
     'total_items': 2,
     'bleu_1': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_2': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_3': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'bleu_4': {'score': 0.42370250917168295,
      'precisions': [0.8823529411764706,
       0.6428571428571429,
       0.45454545454545453,
       0.125],
      'brevity_penalty': 1.0,
      'length_ratio': 1.0,
      'translation_length': 11,
      'reference_length': 11},
     'meteor': {'score': 0.5420511682934044},
     'rouge': {'rouge1': 0.7783882783882783,
      'rouge2': 0.5925324675324675,
      'rougeL': 0.7426739926739926,
      'rougeLsum': 0.7426739926739926}}
    
    
    bug 
    opened by Axe-- 4
  • Computing BLEU more than once

    Hey, why does computing the BLEU score more than once affect the keys of the score dict, e.g. 'bleu_1', 'bleu_1_1', 'bleu_1_1_1'?

    Overall I find the library quite user-friendly, but I am unsure about this behavior.

    opened by Axe-- 4
  • New metrics structure completed.

    The new metrics structure allows users to create metrics and define their params as desired. The current metric classes in metrics/ can be extended, or a completely new custom metric can be defined by inheriting jury.metrics.Metric.

    patch 
    opened by devrimcavusoglu 3
  • Fixed warning message in BLEURT default initialization

    The Jury constructor accepts metrics as a string, an object of the Metric class, or a list of metric configurations inside a dict. In addition, the BLEURT metric checks for the config_name key instead of the checkpoint key. Thus, this warning message is misleading if the default model is not used.

    Here is an example of an incorrect initialization and the resulting warning message, where the checkpoint is ignored (screenshots omitted).

    opened by zafercavdar 1
  • Fix Reference Structure for Basic BLEU calculation

    The wrapped function expects a slightly different reference structure than the one we give in the Single Ref-Pred method. A small structure change fixes the issue.

    Fixes #72

    opened by Sophylax 1
  • Bug: Metric object and string cannot be used together in input.

    Currently, jury allows the metrics passed in Jury(metrics=metrics) to be either a list of jury.metrics.Metric or a list of str, but it does not allow using both str and Metric objects together, as

    from jury import Jury
    from jury.metrics import load_metric
    
    metrics = ["bleu", load_metric("meteor")]
    jury = Jury(metrics=metrics)
    

    raises an error, since the metrics parameter expects a NestedSingleType object which is either list<str> or list<jury.metrics.Metric>.

    opened by devrimcavusoglu 1
  • BLEURT is failing to produce results

    I was trying to check the same example mentioned in the readme file for BLEURT. It fails by throwing an error. Please let me know the issue.

    Error :

    ImportError                               Traceback (most recent call last)
    <ipython-input-16-ed14e2ab4c7e> in <module>
    ----> 1 bleurt = Bleurt.construct()
          2 score = bleurt.compute(predictions=predictions, references=references)
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\auxiliary.py in construct(cls, task, resulting_name, compute_kwargs, **kwargs)
         99         subclass = cls._get_subclass()
        100         resulting_name = resulting_name or cls._get_path()
    --> 101         return subclass._construct(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        102 
        103     @classmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in _construct(cls, resulting_name, compute_kwargs, **kwargs)
        235         cls, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs
        236     ):
    --> 237         return cls(resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        238 
        239     @staticmethod
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, resulting_name, compute_kwargs, **kwargs)
        220     def __init__(self, resulting_name: Optional[str] = None, compute_kwargs: Optional[Dict[str, Any]] = None, **kwargs):
        221         compute_kwargs = self._validate_compute_kwargs(compute_kwargs)
    --> 222         super().__init__(task=self._task, resulting_name=resulting_name, compute_kwargs=compute_kwargs, **kwargs)
        223 
        224     def _validate_compute_kwargs(self, compute_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    
    ~\anaconda3\lib\site-packages\jury\metrics\_core\base.py in __init__(self, task, resulting_name, compute_kwargs, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, max_concurrent_cache_files, timeout, **kwargs)
        100         self.resulting_name = resulting_name if resulting_name is not None else self.name
        101         self.compute_kwargs = compute_kwargs or {}
    --> 102         self.download_and_prepare()
        103 
        104     @abstractmethod
    
    ~\anaconda3\lib\site-packages\evaluate\module.py in download_and_prepare(self, download_config, dl_manager)
        649             )
        650 
    --> 651         self._download_and_prepare(dl_manager)
        652 
        653     def _download_and_prepare(self, dl_manager):
    
    ~\anaconda3\lib\site-packages\jury\metrics\bleurt\bleurt_for_language_generation.py in _download_and_prepare(self, dl_manager)
        120         global bleurt
        121         try:
    --> 122             from bleurt import score
        123         except ModuleNotFoundError:
        124             raise ModuleNotFoundError(
    
    ImportError: cannot import name 'score' from 'bleurt' (unknown location)
    
    opened by Santhanreddy71 4
  • Prism support for use_cuda option

    Referring to this issue https://github.com/thompsonb/prism/issues/13: since it seems like no active maintenance is going on, we can add this support on a public fork.

    enhancement 
    opened by devrimcavusoglu 0
  • Add support for custom tokenizer for BLEU

    Due to the nature of the Jury API, all input strings must be whole (not tokenized); the current implementation of the BLEU score tokenizes by whitespace. However, one might want results for smaller tokens, morphemes, or even the character level rather than a word-level BLEU score. Thus, it'd be great to support this by adding support for a custom tokenizer in the BLEU score computation.

    enhancement help wanted 
    opened by devrimcavusoglu 0
Releases(2.2.3)
  • 2.2.3(Dec 26, 2022)

    What's Changed

    • flake8 error on python3.7 by @devrimcavusoglu in https://github.com/obss/jury/pull/118
    • Seqeval typo fix by @devrimcavusoglu in https://github.com/obss/jury/pull/117
    • Refactored requirements (sklearn). by @devrimcavusoglu in https://github.com/obss/jury/pull/121

    Full Changelog: https://github.com/obss/jury/compare/2.2.2...2.2.3

  • 2.2.2(Sep 30, 2022)

    What's Changed

    • Migrating to evaluate package (from datasets). by @devrimcavusoglu in https://github.com/obss/jury/pull/116

    Full Changelog: https://github.com/obss/jury/compare/2.2.1...2.2.2

  • 2.2.1(Sep 21, 2022)

    What's Changed

    • Fixed warning message in BLEURT default initialization by @zafercavdar in https://github.com/obss/jury/pull/110
    • ZeroDivisionError on precision and recall values. by @devrimcavusoglu in https://github.com/obss/jury/pull/112
    • validators added to the requirements. by @devrimcavusoglu in https://github.com/obss/jury/pull/113
    • Intermediate patch, fixes, updates. by @devrimcavusoglu in https://github.com/obss/jury/pull/114

    New Contributors

    • @zafercavdar made their first contribution in https://github.com/obss/jury/pull/110

    Full Changelog: https://github.com/obss/jury/compare/2.2...2.2.1

  • 2.2(Mar 29, 2022)

    What's Changed

    • Fix Reference Structure for Basic BLEU calculation by @Sophylax in https://github.com/obss/jury/pull/74
    • Added BLEURT. by @devrimcavusoglu in https://github.com/obss/jury/pull/78
    • README.md updated with doi badge and citation information. by @devrimcavusoglu in https://github.com/obss/jury/pull/81
    • Add VSCode Folder to Gitignore by @Sophylax in https://github.com/obss/jury/pull/82
    • Change one BERTScore test Device to CPU by @Sophylax in https://github.com/obss/jury/pull/84
    • Add Prism metric by @devrimcavusoglu in https://github.com/obss/jury/pull/79
    • Update issue templates by @devrimcavusoglu in https://github.com/obss/jury/pull/85
    • Dl manager rework by @devrimcavusoglu in https://github.com/obss/jury/pull/86
    • Nltk upgrade by @devrimcavusoglu in https://github.com/obss/jury/pull/88
    • CER metric implementation. by @devrimcavusoglu in https://github.com/obss/jury/pull/90
    • Prism checkpoint URL updated. by @devrimcavusoglu in https://github.com/obss/jury/pull/92
    • Test cases refactored. by @devrimcavusoglu in https://github.com/obss/jury/pull/96
    • Added BARTScore by @Sophylax in https://github.com/obss/jury/pull/89
    • License information added for prism and bleurt. by @devrimcavusoglu in https://github.com/obss/jury/pull/97
    • Remove Unused Imports by @Sophylax in https://github.com/obss/jury/pull/98
    • Added WER metric. by @devrimcavusoglu in https://github.com/obss/jury/pull/103
    • Add TER metric by @devrimcavusoglu in https://github.com/obss/jury/pull/104
    • CHRF metric added. by @devrimcavusoglu in https://github.com/obss/jury/pull/105
    • Add comet by @devrimcavusoglu in https://github.com/obss/jury/pull/107
    • Doc refactor by @devrimcavusoglu in https://github.com/obss/jury/pull/108
    • Pypi fix by @devrimcavusoglu in https://github.com/obss/jury/pull/109

    New Contributors

    • @Sophylax made their first contribution in https://github.com/obss/jury/pull/74

    Full Changelog: https://github.com/obss/jury/compare/2.1.5...2.2

  • 2.1.5(Dec 23, 2021)

    What's Changed

    • Bug fix: Typo corrected in _remove_empty() in core.py. by @devrimcavusoglu in https://github.com/obss/jury/pull/67
    • Metric name path bug fix. by @devrimcavusoglu in https://github.com/obss/jury/pull/69

    Full Changelog: https://github.com/obss/jury/compare/2.1.4...2.1.5

  • 2.1.4(Dec 6, 2021)

    What's Changed

    • Handle for empty predictions & references on Jury (skipping empty). by @devrimcavusoglu in https://github.com/obss/jury/pull/65

    Full Changelog: https://github.com/obss/jury/compare/2.1.3...2.1.4

  • 2.1.3(Dec 1, 2021)

    What's Changed

    • Bug fix: Bleu reshape error fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/63

    Full Changelog: https://github.com/obss/jury/compare/2.1.2...2.1.3

  • 2.1.2(Nov 14, 2021)

    What's Changed

    • Bug fix: bleu returning same score with different max_order is fixed. by @devrimcavusoglu in https://github.com/obss/jury/pull/59
    • nltk version upgraded as >=3.6.4 (from >=3.6.2). by @devrimcavusoglu in https://github.com/obss/jury/pull/61

    Full Changelog: https://github.com/obss/jury/compare/2.1.1...2.1.2

  • 2.1.1(Nov 10, 2021)

    What's Changed

    • Seqeval: json normalization added. by @devrimcavusoglu in https://github.com/obss/jury/pull/55
    • Read support from folders by @devrimcavusoglu in https://github.com/obss/jury/pull/57

    Full Changelog: https://github.com/obss/jury/compare/2.1.0...2.1.1

  • 2.1.0(Oct 25, 2021)

    What's New 🚀

    Tasks 📝

    We added a new task-based metric system which allows evaluating different types of inputs, rather than the old system which could only evaluate strings (generated text) for language generation tasks. Hence, jury is now able to support a broader set of metrics that work with different types of input.

    With this, the jury.Jury API keeps the consistency of the given set of tasks under control: Jury will raise an error if any pair of metrics is inconsistent with each other in terms of task (evaluation input).

    AutoMetric ✨

    • AutoMetric is introduced as the main factory class for automatically loading metrics. As a side note, load_metric is still available for backward compatibility and is preferred (it uses AutoMetric under the hood).
    • Tasks are now distinguished within metrics. For example, precision can be used for a language-generation or a sequence-classification task, where one evaluates from strings (generated text) while the other evaluates from integers (class labels).
    • In the configuration file, metrics can now be stated with HuggingFace datasets' metric initialization parameters. The keyword arguments used for metric computation are now separated under the "compute_kwargs" key (see the sketch after this list).
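
    A minimal sketch of loading a metric with computation-time parameters, assuming load_metric forwards a compute_kwargs dict to the metric as described above (the exact signature may differ between versions; max_order is the BLEU n-gram order mentioned in the 2.1.2 notes above):

    from jury import Jury
    from jury.metrics import load_metric

    # Keyword arguments used during computation go under compute_kwargs.
    bleu_2 = load_metric("bleu", compute_kwargs={"max_order": 2})
    jury = Jury(metrics=[bleu_2])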

    Full Changelog: https://github.com/obss/jury/compare/2.0.0...2.1.0

  • 2.0.0(Oct 11, 2021)

    Jury 2.0.0 is out 🎉🥳

    New Metric System

    • The datasets package's Metric implementation is adopted (and extended) to provide high performance 💯 and a more unified interface 🤗.
    • The custom metric implementation changed accordingly (it now requires 3 abstract methods to be implemented).
    • The Jury class is now callable (it implements the __call__() method), though the evaluate() method is still available for backward compatibility; see the sketch after this list.
    • In Jury's evaluate(), the predictions and references parameters are restricted to being passed as keyword arguments to prevent confusion/wrong computations (like datasets' metrics).
    • MetricCollator is removed; the metric-related methods are attached directly to the Jury class. Metric addition and removal can now be performed on a Jury instance directly.
    • Jury now supports reading metrics from strings, lists and dictionaries; it is more flexible regarding the input type of the metrics given along with their parameters.
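
    A minimal sketch of the callable usage, passing predictions and references as keyword arguments (the sentences are illustrative):

    from jury import Jury

    # Calling the Jury instance directly is equivalent to evaluate().
    jury = Jury(metrics=["bleu", "meteor"])
    scores = jury(
        predictions=["the cat is on the mat"],
        references=["the cat is playing on the mat"],
    )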

    New metrics

    • Accuracy, F1, Precision, Recall are added to Jury metrics.
    • All metrics in the datasets package are still available in jury through jury.load_metric().

    Development

    • Test cases are improved with fixtures, and the test structure is enhanced.
    • Expected outputs for tests are now required as a JSON file with a proper name.
  • 1.1.2(Sep 15, 2021)

  • 1.1.1(Aug 15, 2021)

    • Malfunctioning multiple prediction calculation caused by multiple reference input for BLEU and SacreBLEU is fixed.
    • CLI Implementation is completed. 🎉
  • 1.0.1(Aug 13, 2021)

  • 1.0.0(Aug 9, 2021)

    Release Notes

    • New metric structure is completed.
      • Custom metric support is improved; extending datasets.Metric is no longer required, and jury.metrics.Metric is used instead.
      • Metric usage is unified with compute, preprocess and postprocess functions, of which compute is the only implementation required for a custom metric.
      • Both string and Metric objects can now be passed to Jury(metrics=metrics) in a mixed fashion.
      • The load_metric function was rearranged to capture end score results, and several metrics were added accordingly (e.g. load_metric("squad_f1") will load the squad metric, which returns the F1 score).
    • An example notebook has been added to the examples.
      • MT and QA tasks are illustrated.
      • Custom metric creation is added as an example.

    Acknowledgments

    @fcakyon @cemilcengiz @devrimcavusoglu

  • 0.0.3(Jul 26, 2021)

  • 0.0.2(Jul 14, 2021)

Owner
Open Business Software Solutions
Open Source for Open Business