Parrot

1. What is Parrot?

Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model.

2. Why Parrot?

Huggingface lists 12 paraphrase models, RapidAPI lists 7 freemium and commercial paraphrasers like QuillBot, Rasa has discussed an experimental paraphraser for augmenting text data, Sentence-Transformers offers a paraphrase mining utility, and NLPAug offers word-level augmentation with PPDB (a multi-million-paraphrase database). While these attempts at paraphrasing are great, there are still some gaps, and paraphrasing is NOT yet a mainstream option for text augmentation when building NLU models. Parrot is a humble attempt to fill some of these gaps.

What is a good paraphrase? Almost all conditioned text generation models are validated on two factors: (1) whether the generated text conveys the same meaning as the original context (Adequacy), and (2) whether the text is fluent / grammatically correct English (Fluency). For instance, Neural Machine Translation outputs are tested for Adequacy and Fluency. But a good paraphrase should be adequate and fluent while being as different as possible in its surface lexical form. With respect to this definition, the three key metrics that measure the quality of paraphrases are:

  • Adequacy (Is the meaning preserved adequately?)
  • Fluency (Is the paraphrase fluent English?)
  • Diversity (Lexical / Phrasal / Syntactical) (How much has the paraphrase changed the original sentence?)

Parrot offers knobs to control Adequacy, Fluency and Diversity as per your needs.

What makes a paraphraser a good augmentor? For training an NLU model, we don't just need a lot of utterances; we need utterances annotated with intents and slots/entities. The typical flow would be (a minimal sketch follows this list):

  • Given an input utterance + input annotations, a good augmentor spits out N output paraphrases while preserving the intent and slots.
  • The output paraphrases are then converted into annotated data using the input annotations we got in step 1.
  • The annotated data created out of the output paraphrases then becomes the training dataset for your NLU model.
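
A minimal sketch of this flow, where everything except parrot.augment is illustrative (the utterance, the annotation dict, and the verbatim slot check are assumptions, not Parrot's API):

from parrot import Parrot

parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=False)

# Hypothetical input utterance with its intent/slot annotations
utterance = "book a table at an upscale restaurant in rome"
annotations = {"intent": "book_restaurant", "slots": {"city": "rome"}}

# Step 1: generate paraphrases (augment may return None when nothing
# clears its internal filters, hence the `or []`)
paraphrases = parrot.augment(input_phrase=utterance) or []

# Steps 2-3: keep a paraphrase only if every slot value survived
# verbatim, then transfer the input annotations onto it
training_rows = []
for para_phrase in paraphrases:
    para_phrase = str(para_phrase)
    if all(v in para_phrase for v in annotations["slots"].values()):
        training_rows.append({"text": para_phrase,
                              "intent": annotations["intent"],
                              "slots": annotations["slots"]})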

But paraphrasers, being generative models, don't in general guarantee that the slots/entities are preserved. So the ability to generate high-quality paraphrases in a constrained fashion, without trading off the intents and slots for lexical dissimilarity, is what makes a paraphraser a good augmentor. More on this in section 4 below.

Installation

pip install git+https://github.com/PrithivirajDamodaran/Parrot.git

Quickstart

from parrot import Parrot
import torch
import warnings
warnings.filterwarnings("ignore")

''' 
uncomment to get reproducible paraphrase generations
def random_state(seed):
  torch.manual_seed(seed)
  if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)

random_state(1234)
'''

# Init models (make sure you init ONLY once if you integrate this into your code)
parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=False)

phrases = ["Can you recommend some upscale restaurants in Rome?",
           "What are the famous places we should not miss in Russia?"
]

for phrase in phrases:
  print("-"*100)
  print("Input_phrase: ", phrase)
  print("-"*100)
  para_phrases = parrot.augment(input_phrase=phrase)
  for para_phrase in para_phrases:
   print(para_phrase)
----------------------------------------------------------------------
Input_phrase: Can you recommend some upscale restaurants in Rome?
----------------------------------------------------------------------
"which upscale restaurants are recommended in rome?"
"which are the best restaurants in rome?"
"are there any upscale restaurants near rome?"
"can you recommend a good restaurant in rome?"
"can you recommend some of the best restaurants in rome?"
"can you recommend some best restaurants in rome?"
"can you recommend some upscale restaurants in rome?"
----------------------------------------------------------------------
Input_phrase: What are the famous places we should not miss in Russia?
----------------------------------------------------------------------
"which are the must do places for tourists to visit in russia?"
"what are the best places to visit in russia?"
"what are some of the most visited sights in russia?"
"what are some of the most beautiful places in russia that tourists should not miss?"
"which are some of the most beautiful places to visit in russia?"
"what are some of the most important places to visit in russia?"
"what are some of the most famous places of russia?"
"what are some places we should not miss in russia?"

How to get syntactic and phrasal diversity/variety in your paraphrases using Parrot?

You can play with the do_diverse knob (check out the next section for more knobs). Consider this example, first with do_diverse = False (the default):

------------------------------------------------------------------------------
Input_phrase: How are the new Macbook Pros with M1 chips?
------------------------------------------------------------------------------
'how do you rate the new macbook pros? '
'how are the new macbook pros? '
'how is the new macbook pro doing with new chips? '
'how do you like the new macbook pro m1 chip? '
'what is the use of the new macbook pro m1 chips? '

do_diverse = True

------------------------------------------------------------------------------
Input_phrase: How are the new Macbook Pros with M1 chips?
------------------------------------------------------------------------------
'what do you think about the new macbook pro m1? '
'how is the new macbook pro m1? '
'how are the new macbook pros? '
'what do you think about the new macbook pro m1 chips? '
'how good is the new macbook pro m1 chips? '
'how is the new macbook pro m1 chip? '
'do you like the new macbook pro m1 chips? '
'how are the new macbook pros with m1 chips? '
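
Both runs above come from the same call; only the knob changes. A minimal invocation (all other parameters keep their defaults):

para_phrases = parrot.augment(input_phrase="How are the new Macbook Pros with M1 chips?",
                              do_diverse=True)  # False by default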

Other Knobs

 para_phrases = parrot.augment(input_phrase=phrase,
                               diversity_ranker="levenshtein", # ranker used to order candidate paraphrases
                               do_diverse=False,               # True = push for syntactic/phrasal diversity
                               max_return_phrases=10,          # upper bound on returned paraphrases
                               max_length=32,                  # max length; the model is trained on samples up to 32
                               adequacy_threshold=0.99,        # minimum adequacy score to keep a paraphrase
                               fluency_threshold=0.90)         # minimum fluency score to keep a paraphrase
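
Loosely speaking, adequacy_threshold and fluency_threshold set how strict the corresponding filters are: raising them yields fewer but safer paraphrases, lowering them yields more but noisier ones. If no candidate clears both filters, augment returns None (see the None-check pattern after the Quickstart above).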

3. Scope

In the space of conversational engines, knowledge bots are the ones we ask questions like "when was the Berlin wall torn down?", transactional bots are the ones we give commands like "Turn on the music please", and voice assistants are the ones that can both answer questions and act on our commands. Parrot mainly focuses on augmenting text typed into or spoken to conversational interfaces, for building robust NLU models. (People usually neither type out nor yell out long paragraphs to conversational interfaces, hence the pre-trained model is trained on text samples of a maximum length of 32.)

While Parrot predominantly aims to be a text augmentor for building good NLU models, it can also be used as a pure-play paraphraser.

4. What makes a paraphraser a good augmentor for NLU? (Details)

To enable automatic training data generation, a paraphraser needs to keep the slots intact. The end-to-end process can then take input utterances, augment them, and convert them into NLU training format, e.g. the Goo et al. or Rasa format (as shown below). The data generation process needs to look for the same slots in the output paraphrases to derive the start and end positions, as shown in the JSON below and sketched in code after the examples.

Ideally, the above process needs a UI to collect the input utterances along with annotations (intents, slots and slot types), which can then be augmented and converted into training data.

Sample NLU data (Rasa format)

{
    "rasa_nlu_data": {
        "common_examples": [
            {
                "text": "i would like to find a flight from charlotte to las vegas that makes a stop in st. louis",
                "intent": "flight",
                "entities": [
                    {
                        "start": 35,
                        "end": 44,
                        "value": "charlotte",
                        "entity": "fromloc.city_name"
                    },
                    {
                        "start": 48,
                        "end": 57,
                        "value": "las vegas",
                        "entity": "toloc.city_name"
                    },
                    {
                        "start": 79,
                        "end": 88,
                        "value": "st. louis",
                        "entity": "stoploc.city_name"
                    }
                ]
            },
            ...
        ]
    }
}
  • Original: I would like a list of round trip flights between indianapolis and orlando florida for the 27th
  • Paraphrase useful for augmenting: what are the round trip flights between indianapolis and orlando for the 27th
  • Paraphrase not-so-useful for augmenting: what are the round trip flights between chicago and orlando for the 27th.
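
A minimal sketch of that slot-position derivation, under the assumption that a slot value must reappear verbatim in the paraphrase (the function and the data below are illustrative, not part of Parrot's API):

def derive_entities(paraphrase, slots):
    """Locate each slot value in the paraphrase and emit Rasa-style
    entity dicts; return None if any slot went missing (the
    not-so-useful-for-augmenting case above)."""
    entities = []
    for entity_name, value in slots.items():
        start = paraphrase.find(value)
        if start == -1:  # slot not preserved, discard this paraphrase
            return None
        entities.append({"start": start,
                         "end": start + len(value),
                         "value": value,
                         "entity": entity_name})
    return entities

slots = {"fromloc.city_name": "indianapolis", "toloc.city_name": "orlando"}
print(derive_entities("what are the round trip flights between indianapolis and orlando for the 27th", slots))
print(derive_entities("what are the round trip flights between chicago and orlando for the 27th", slots))  # None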

5. Dataset for paraphrase model

The paraphrase generation model prithivida/parrot_paraphraser_on_T5 has been fine-tuned on the following datasets:

  • MSRP Paraphrase
  • Google PAWS
  • ParaNMT
  • Quora question pairs.
  • SNIPS Alexa commands
  • MSRP Frames
  • GYAFC Dataset

Metrics and Comparison

TBD

Current Features

TBD

Roadmap

TBD

Current Limitations/Known issues

  • The diversity scores are not normalised; each of the diversity rankers scores paraphrases differently
  • Some command style input phrases generate less adequate paraphrases
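
For intuition on the first limitation, here is an illustrative edit-distance ranker (not Parrot's implementation): raw Levenshtein scores grow with sentence length, while other rankers may score on a bounded scale, so numbers from different rankers are not directly comparable.

def levenshtein(a, b):
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

original = "how are the new macbook pros with m1 chips?"
candidates = ["how are the new macbook pros?",
              "what do you think about the new macbook pro m1 chips?"]

# Most-diverse first: a larger edit distance means more surface change
for cand in sorted(candidates, key=lambda c: levenshtein(original, c), reverse=True):
    print(levenshtein(original, cand), cand)
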
Comments
  • TypeError: 'NoneType' object is not iterable

    Hi,

    Why do I get the error when running the following code?

    phrases = ["Can you recommend some upscale restaurants in Newyork?",
               "What are the famous places we should not miss in Russia?"]

    for phrase in phrases:
        print("-"*100)
        print("Input_phrase: ", phrase)
        print("-"*100)
        para_phrases = parrot.augment(input_phrase=phrase, use_gpu=False, do_diverse=True, diversity_ranker="levenshtein")
        for para_phrase in para_phrases:
            print(para_phrase)

    Error:

    TypeError                                 Traceback (most recent call last)
    /home/user/Code/Parrot/main.ipynb Cell 4 in <cell line: 5>()
          8 print("-"*100)
          9 para_phrases = parrot.augment(input_phrase=phrase, use_gpu=False, do_diverse=True, diversity_ranker="levenshtein")
    ---> 10 for para_phrase in para_phrases:
         11     print(para_phrase)

    TypeError: 'NoneType' object is not iterable

    opened by 0x11c11e 6
  • use_gpu=True Error

    In Google Colab.

    INSTALLED: !pip install -qqq git+https://github.com/PrithivirajDamodaran/Parrot_Paraphraser.git

    MY CODE:

    from parrot import Parrot
    import torch

    def random_state(seed):
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)

    random_state(1234)

    parrot_gpu = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=True)

    phrases = ['i drive a ford pickup truck.', 'i am very conservative.', 'my family lives down the street from me.', 'i go to church every sunday.', 'i have three guns and love hunting.']

    para_phrases_gpu = parrot_gpu.augment(input_phrase=phrases[0], use_gpu=True, max_return_phrases = 10)

    ERROR:


    RuntimeError                              Traceback (most recent call last)
    in ()
    ----> 1 para_phrases_gpu = parrot_gpu.augment(input_phrase=phrases[0], use_gpu=True, max_return_phrases = 10)

    /usr/local/lib/python3.7/dist-packages/parrot/parrot.py in augment(self, input_phrase, use_gpu, diversity_ranker, do_diverse, max_return_phrases, max_length, adequacy_threshold, fluency_threshold)
        128
        129
    --> 130 adequacy_filtered_phrases = self.adequacy_score.filter(input_phrase, paraphrases, adequacy_threshold, device )
        131 if len(adequacy_filtered_phrases) > 0 :
        132     fluency_filtered_phrases = self.fluency_score.filter(adequacy_filtered_phrases, fluency_threshold, device )

    /usr/local/lib/python3.7/dist-packages/parrot/filters.py in filter(self, input_phrase, para_phrases, adequacy_threshold, device)
         13 x = self.tokenizer(input_phrase, para_phrase, return_tensors='pt', max_length=128, truncation=True)
         14 self.adequacy_model = self.adequacy_model.to(device)
    ---> 15 logits = self.adequacy_model(**x).logits
         16 probs = logits.softmax(dim=1)
         17 prob_label_is_true = probs[:,1]

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1109                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1110             return forward_call(*input, **kwargs)
       1111         # Do not call functions when jit is used
       1112         full_backward_hooks, non_full_backward_hooks = [], []

    /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
       1213             output_attentions=output_attentions,
       1214             output_hidden_states=output_hidden_states,
    -> 1215             return_dict=return_dict,
       1216         )
       1217         sequence_output = outputs[0]

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1109                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1110             return forward_call(*input, **kwargs)
       1111         # Do not call functions when jit is used
       1112         full_backward_hooks, non_full_backward_hooks = [], []

    /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
        844             token_type_ids=token_type_ids,
        845             inputs_embeds=inputs_embeds,
    --> 846             past_key_values_length=past_key_values_length,
        847         )
        848         encoder_outputs = self.encoder(

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1109                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1110             return forward_call(*input, **kwargs)
       1111         # Do not call functions when jit is used
       1112         full_backward_hooks, non_full_backward_hooks = [], []

    /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
        126
        127         if inputs_embeds is None:
    --> 128             inputs_embeds = self.word_embeddings(input_ids)
        129         token_type_embeddings = self.token_type_embeddings(token_type_ids)
        130

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
       1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1109                 or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1110             return forward_call(*input, **kwargs)
       1111         # Do not call functions when jit is used
       1112         full_backward_hooks, non_full_backward_hooks = [], []

    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
        158         return F.embedding(
        159             input, self.weight, self.padding_idx, self.max_norm,
    --> 160             self.norm_type, self.scale_grad_by_freq, self.sparse)
        161
        162     def extra_repr(self) -> str:

    /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
       2181         # remove once script supports set_grad_enabled
       2182         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
    -> 2183     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
       2184
       2185

    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)

    opened by Mario-RC 2
  • Understanding adequacy metric

    Hi, I have been using the filters file from this repo to experiment with evaluating some paraphrases I created using various different models, but I noticed that the adequacy score gives some unexpected results, so I was wondering if you could tell me some more about how it was trained.

    I noticed that if the paraphrase and the original are exactly the same, the adequacy is quite low (around 0.7-0.8), while if the paraphrase is shorter or longer than the original, it generally gets a much higher score. For example, with the original "I need to buy a house in the neighborhood", the paraphrase "I need to buy a house" scores 0.98, and "I need to buy a house in the neighborhood where I want to live" scores even higher at 0.99, while the paraphrase "I need to buy a house in the neighborhood" (exactly the same sentence as the original) scores 0.7, and the same sentence with a period at the end scores 0.8.

    This makes me think that the adequacy model takes into account how much the new sentence has changed from the original, in addition to how well its meaning was preserved. Since the ReadMe states that adequacy measures whether or not the paraphrase preserves the meaning of the original, it is confusing to me that using the same sentence for original and paraphrase does not get a high score. Could you clarify?

    opened by cegersdoerfer 2
  • error: legacy-install-failure

    When trying to install I get this:

    note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure

    × Encountered error while trying to install package. ╰─> python-Levenshtein

    How can I fix it?

    opened by hiddenchamp 2
  • Installation issue, torch >=1.6.0, Raspberry Pi OS 64 bit, Raspberry Pi 4b 8 gig

    Would love to get this working. Getting two errors - first error here is the last thing the install reports before exiting back to the terminal prompt:

    Collecting torch>=1.6.0 (from sentence-transformers->parrot==1.0)
      Could not find a version that satisfies the requirement torch>=1.6.0 (from sentence-transformers->parrot==1.0) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
    No matching distribution found for torch>=1.6.0 (from sentence-transformers->parrot==1.0)

    Tried running the quick start script anyway, get this error:

    Traceback (most recent call last):
      File "paraphrase.py", line 1, in <module>
        from parrot import Parrot
    ModuleNotFoundError: No module named 'parrot'

    Raspberry Pi 4B 8 Gig RAM version, running Raspberry Pi OS 64 Bit. Apologies if I have not provided sufficient info here - let me know how else I may help to figure out why this won't run on my build. Thanks

    opened by DaveXanatos 2
  • Parrot returns very similar description without paraphrasing for some sentences

    Hi Prithviraj,

    Good Day!

    Awesome work on building this library! I tried to use it for a personal project from the fashion domain and here's what I observed:

    (Two screenshots, each showing an input sentence and the paraphrase Parrot produced for it, were attached here.)

    Have a look at the two sentences in each screenshot: except for some punctuation and contractions, there's not much that the model is able to change.

    Such is the case even for most of the descriptions that I have scraped from fashion retailers. Could you advise how I can use Parrot to obtain better paraphrase suggestions, please?

    Thanks & Regards, Vinayak Nayak.

    wontfix 
    opened by ElisonSherton 2
  • fix: add missing dependencies into requirements.txt

    • pandas is imported in a few places in the code
    • transformers needs to be installed with the torchhub extras, otherwise it ends up with the following error:
    Traceback (most recent call last):
      File "/Users/yed/dev/foo/bar/phraser/phrase.py", line 114, in <module>
        parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5")
      File "/Users/yed/dev/foo/bar/phraser/Parrot_Paraphraser/parrot/parrot.py", line 10, in __init__
        self.tokenizer = AutoTokenizer.from_pretrained(model_tag, use_auth_token=True)
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 628, in from_pretrained
        return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1775, in from_pretrained
        return cls._from_pretrained(
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1930, in _from_pretrained
        tokenizer = cls(*init_inputs, **init_kwargs)
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 134, in __init__
        super().__init__(
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
        fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer
        return converter_class(transformer_tokenizer).converted()
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py", line 434, in __init__
        requires_backends(self, "protobuf")
      File "/Users/yed/.pyenv/versions/godot39/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 967, in requires_backends
        raise ImportError("".join(failed))
    ImportError:
    T5Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
    installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
    that match your environment.
    
    opened by yedpodtrzitko 1
  • Failed to build tokenizers ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects

    error: can't find Rust compiler

    If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.

    To update pip, run:

      pip install --upgrade pip
    

    and then retry package installation.

    If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.

    ERROR: Failed building wheel for tokenizers
    Successfully built parrot
    Failed to build tokenizers
    ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects

    opened by jameswan 1
  • ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

    Tried installing Parrot_Paraphraser in Kaggle. It installed successfully, but when I tried using it in code it shows: ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.

    My internet connection is also ON on Kaggle. Can you please check? (A screenshot was attached.)

    opened by ChiragM-Hexaware 1
  • Installation Issues on Windows

    It appears there is a broken dependency on python-Levenshtein. You should update your import to point to Levenshtein package:

    Installing collected packages: python-Levenshtein
        Running setup.py install for python-Levenshtein ... error
        ERROR: Command errored out with exit status 1:
         command: 'C:\Users\gablanco\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\gablanco\AppData\Local\Temp\pip-install-3g1ne2jo\python-levenshtein_9f5029b6aae44944bceb3f676daf71a1\setup.py'"'"'; __file__='"'"'C:\Users\gablanco\AppData\Local\Temp\pip-install-3g1ne2jo\python-levenshtein_9f5029b6aae44944bceb3f676daf71a1\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gablanco\AppData\Local\Temp\pip-record-8_vcaac_\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\gablanco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Include\python-Levenshtein'
         cwd: C:\Users\gablanco\AppData\Local\Temp\pip-install-3g1ne2jo\python-levenshtein_9f5029b6aae44944bceb3f676daf71a1
        Complete output (28 lines):
        running install
        running build
        running build_py
        creating build
        creating build\lib.win-amd64-3.9
        creating build\lib.win-amd64-3.9\Levenshtein
        copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.9\Levenshtein
        copying Levenshtein\__init__.py -> build\lib.win-amd64-3.9\Levenshtein
        running egg_info
        writing python_Levenshtein.egg-info\PKG-INFO
        writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
        writing entry points to python_Levenshtein.egg-info\entry_points.txt
        writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
        writing requirements to python_Levenshtein.egg-info\requires.txt
        writing top-level names to python_Levenshtein.egg-info\top_level.txt
        reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
        reading manifest template 'MANIFEST.in'
        warning: no previously-included files matching '*pyc' found anywhere in distribution
        warning: no previously-included files matching '*so' found anywhere in distribution
        warning: no previously-included files matching '.project' found anywhere in distribution
        warning: no previously-included files matching '.pydevproject' found anywhere in distribution
        adding license file 'COPYING'
        writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
        copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.9\Levenshtein
        copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.9\Levenshtein
        running build_ext
        building 'Levenshtein._levenshtein' extension
        error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
        ----------------------------------------
    ERROR: Command errored out with exit status 1: (the same setup.py command as above) Check the logs for full command output.

    opened by gablans 1
  • Killed while running sample script

    Hi, I am trying to run this project on a VPS with 2 GB RAM. When I enter parrot = Parrot(...), the Python terminal is killed. Can you help me please?

    Python 3.8.10 (default, Nov 26 2021, 20:14:08)
    [GCC 9.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from parrot import Parrot
    >>> import torch
    >>> import warnings
    >>> warnings.filterwarnings("ignore")
    >>> def random_state(seed):
    ...   torch.manual_seed(seed)
    ...   if torch.cuda.is_available():
    ...     torch.cuda.manual_seed_all(seed)
    ...
    >>> random_state(1234)
    >>>
    >>> parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5")
    Killed
    
    
    opened by kang-mus 1
  • TypeError: Descriptors cannot not be created directly.

    When running the Quickstart example from the readme, I get:

    Traceback (most recent call last):
      File "$HOME/src/try-parrot/demo.py", line 19, in <module>
        parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=False)
      File "$HOME/.local/lib/python3.8/site-packages/parrot/parrot.py", line 10, in __init__
        self.tokenizer = AutoTokenizer.from_pretrained(model_tag, use_auth_token=False)
      File "$HOME/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 659, in from_pretrained
        return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
      File "$HOME/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1801, in from_pretrained
        return cls._from_pretrained(
      File "$HOME/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1956, in _from_pretrained
        tokenizer = cls(*init_inputs, **init_kwargs)
      File "$HOME/.local/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 133, in __init__
        super().__init__(
      File "$HOME/.local/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
        fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
      File "$HOME/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer
        return converter_class(transformer_tokenizer).converted()
      File "$HOME/.local/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 438, in __init__
        from .utils import sentencepiece_model_pb2 as model_pb2
      File "$HOME/.local/lib/python3.8/site-packages/transformers/utils/sentencepiece_model_pb2.py", line 92, in <module>
        _descriptor.EnumValueDescriptor(
      File "$HOME/.local/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 755, in __new__
        _message.Message._CheckCalledFromGeneratedFile()
    TypeError: Descriptors cannot not be created directly.
    If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
    If you cannot immediately regenerate your protos, some other possible workarounds are:
     1. Downgrade the protobuf package to 3.20.x or lower.
     2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
    
    More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
    
    opened by ct2034 2
  • Unable to use the model due to Huggingface updated API

    Hi. You might be unable to use the Parrot model due to an error something like this: ...is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'...

    To solve this I already created a pull request. Till then, you can open the Parrot library source code in your code editor and update these lines (lines 13 and 14, most probably):

    self.tokenizer = AutoTokenizer.from_pretrained(model_tag, use_auth_token = <your auth token>)
    self.model     = AutoModelForSeq2SeqLM.from_pretrained(model_tag, use_auth_token = <your auth token>)
    
    opened by LEAGUEDORA 1
  • added use auth token according to new hugging face

    Since Hugging Face has updated their API, you cannot access Parrot models without using an auth token. You may land in this issue:

    ...is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'....

    To solve this I added a new parameter called use_auth_token.

    How to get a token from Huggingface

    1. Open Hugging Face and register/login with your credentials
    2. Navigate to the token settings page and create a write-permitted access token.
    3. Copy the token and pass it as a parameter to the Parrot class when initializing it.

    So the updated code will be

    from parrot import Parrot
    import torch
    import warnings
    warnings.filterwarnings("ignore")
    
    ''' 
    uncomment to get reproducible paraphrase generations
    def random_state(seed):
      torch.manual_seed(seed)
      if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    
    random_state(1234)
    '''
    
    #Init models (make sure you init ONLY once if you integrate this to your code)
    parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_auth_token = "<Your Hugging Face token>")
    
    phrases = ["Can you recommend some upscale restaurants in Newyork?",
               "What are the famous places we should not miss in Russia?"
    ]
    
    for phrase in phrases:
      print("-"*100)
      print("Input_phrase: ", phrase)
      print("-"*100)
      para_phrases = parrot.augment(input_phrase=phrase, use_gpu=False)
      for para_phrase in para_phrases:
       print(para_phrase)
    
    opened by LEAGUEDORA 8
  • Update README.md

    Congratulations. Your project is featured on the kandi kit. kandi kits help developers shortlist reusable libraries and code snippets for specific topics or use cases. Add your kandi badge to help more developers discover and adopt your project easily. Thanks!

    opened by javadev984 0