
Overview


Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing

[TextFlint Documentation on ReadTheDocs]

About • Setup • Usage • Design


About

TextFlint is a multilingual robustness evaluation platform for natural language processing tasks, which unifies general text transformation, task-specific transformation, adversarial attack, sub-population, and their combinations to provide a comprehensive robustness analysis.

Features:

There are lots of reasons to use TextFlint:

  • Full coverage of transformation types, including 20 general transformations, 8 subpopulations and 60 task-specific transformations, as well as thousands of their combinations, which covers nearly all aspects of text transformation and enables a comprehensive evaluation of your model's robustness. TextFlint also supports adversarial attacks to generate model-specific transformed data.
  • Generation of targeted augmented data; you can use the additional data to train or fine-tune your model and improve its robustness.
  • Automatic generation of a complete analytical report that accurately explains where your model's shortcomings are, such as problems with lexical or syntactic rules.

Setup

Installation

You can either use pip or clone this repo to install TextFlint.

  1. Using pip (recommended)
pip install textflint
  2. Cloning this repo
git clone https://github.com/textflint/textflint.git
cd textflint
python setup.py install

Usage

Workflow

The general workflow of TextFlint is displayed above. Evaluation of target models can be divided into three steps:

  1. For input preparation, the original dataset for testing, which is loaded by Dataset, should first be formatted as a series of JSON objects. The TextFlint configuration is specified by Config, and the target model is loaded as a FlintModel.
  2. In adversarial sample generation, multi-perspective transformations (i.e., Transformation, Subpopulation and AttackRecipe) are performed on the Dataset to generate transformed samples. To ensure the semantic and grammatical correctness of transformed samples, Validator calculates a confidence score for each sample and filters out unacceptable ones.
  3. Lastly, Analyzer collects the evaluation results and ReportGenerator automatically generates a comprehensive report of model robustness.

Quick Start

The following code snippet shows how to generate transformed data on the Sentiment Analysis task.

from textflint.engine import Engine

# load the data samples
sample1 = {'x': 'Titanic is my favorite movie.', 'y': 'pos'}
sample2 = {'x': "I don't like the actor Tim Hill", 'y': 'neg'}
data_samples = [sample1, sample2]

# define the output directory
out_dir_path = './test_result/'

# run transformations/subpopulations/attacks and save the transformed data
# to out_dir_path in JSON format; pass config= to override the default SA.json
engine = Engine('SA')
engine.run(data_samples, out_dir_path)

You can also feed data to the Engine in other ways (e.g., a JSON or CSV file in which each line represents one sample). We have defined some default transformations and subpopulations in SA.json, and you can also pass your own configuration file as needed.
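The one-sample-per-line JSON layout can be produced with the standard library alone; a minimal sketch (the field names `x` and `y` follow the SA example above, and `sa_input.json` is a hypothetical file name):

```python
import json

# Each line of the input file is one JSON object -- the layout described
# above for feeding the Engine from a file instead of an in-memory list.
samples = [
    {'x': 'Titanic is my favorite movie.', 'y': 'pos'},
    {'x': "I don't like the actor Tim Hill", 'y': 'neg'},
]

with open('sa_input.json', 'w', encoding='utf-8') as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + '\n')
```

The resulting file path can then be passed in place of the in-memory list.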

Transformed Datasets

After transformation, here are the contents in ./test_result/:

ori_AddEntitySummary-movie_1.json
ori_AddEntitySummary-person_1.json
trans_AddEntitySummary-movie_1.json
trans_AddEntitySummary-person_1.json
...

where trans_AddEntitySummary-movie_1.json contains one sample successfully transformed by the AddEntitySummary transformation, and ori_AddEntitySummary-movie_1.json contains the corresponding original sample. The content of ori_AddEntitySummary-movie_1.json:

{"x": "Titanic is my favorite movie.", "y": "pos", "sample_id": 0}

The content in trans_AddEntitySummary-movie_1.json:

{"x": "Titanic (A seventeen-year-old aristocrat falls in love with a kind but poor artist aboard the luxurious, ill-fated R.M.S. Titanic.) is my favorite movie.", "y": "pos", "sample_id": 0}
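Because each trans_* file has a matching ori_* file and corresponding samples share a sample_id, the output can be folded back into an augmented training set. A minimal standard-library sketch (the merge logic is illustrative, not a TextFlint API):

```python
import json

def load_lines(path):
    """Read a one-JSON-object-per-line file into a list of dicts."""
    with open(path, encoding='utf-8') as f:
        return [json.loads(line) for line in f if line.strip()]

def augment(ori_path, trans_path):
    """Concatenate original and transformed samples, dropping sample_id."""
    combined = load_lines(ori_path) + load_lines(trans_path)
    return [{'x': s['x'], 'y': s['y']} for s in combined]
```

The returned list can be fed directly into your usual training loop.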

Design

Architecture

Input layer: receives textual datasets and models as input, represented as Dataset and FlintModel separately.

  • Dataset: a container for Samples that provides efficient and handy operation interfaces for them. Dataset supports loading, verifying, and saving data in JSON or CSV format for various NLP tasks.
  • FlintModel: the target model, used in adversarial attacks.

Generation layer: there are four main parts in the generation layer:

  • Subpopulation: generates a subset of a Dataset.
  • Transformation: transforms each sample of the Dataset if it can be transformed.
  • AttackRecipe: attacks the FlintModel and generates a Dataset of adversarial examples.
  • Validator: verifies the quality of samples generated by Transformation and AttackRecipe.

Report layer: analyzes model testing results and provides a robustness report for users.
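The interaction between the layers can be pictured with a toy sketch in plain Python (these are not the actual TextFlint classes): a transformation maps each sample, a validator filters the results, and the surviving pairs are what the report layer would analyze.

```python
def run_pipeline(dataset, transformation, validator):
    """Toy generation layer: transform each sample and keep only the
    (original, transformed) pairs the validator accepts."""
    results = []
    for sample in dataset:
        transformed = transformation(sample)
        if transformed is not None and validator(sample, transformed):
            results.append((sample, transformed))
    return results

# Example: an upper-casing "transformation" and a length-ratio "validator"
dataset = ['a fine movie', 'terrible plot']
pairs = run_pipeline(dataset, str.upper,
                     lambda ori, trans: len(trans) <= 2 * len(ori))
```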

Transformation

To verify robustness comprehensively, TextFlint offers 20 universal transformations and 60 task-specific transformations, covering 12 NLP tasks. The following table summarizes the currently supported Transformations; examples for each transformation can be found on our website.

| Task | Transformation | Description | Reference |
|------|----------------|-------------|-----------|
| UT (Universal Transformation) | AppendIrr | Extends sentences by appending irrelevant sentences. | - |
| | BackTrans | BackTrans (Trans short for translation) replaces test data with paraphrases obtained by back translation, which can reveal whether the target model merely captures literal features instead of semantic meaning. | - |
| | Contraction | Replaces phrases like `will not` and `he has` with their contracted forms `won't` and `he's`. | - |
| | InsertAdv | Transforms an input by adding an adverb before the verb. | - |
| | Keyboard | Mimics the way people type and changes tokens into mistaken ones with keyboard-induced errors, like `word → worf` and `ambiguous → amviguius`. | - |
| | MLMSuggestion | MLMSuggestion (MLM short for masked language model) generates new sentences in which one syntactic-category element of the original sentence is replaced by what a masked language model predicts. | - |
| | Ocr | Simulates OCR errors by random character substitutions. | - |
| | Prejudice | Reverses gender words or place names in sentences. | - |
| | Punctuation | Adds punctuation at the end of the sentence. | - |
| | ReverseNeg | Transforms an affirmative sentence into a negative one, or vice versa. | - |
| | SpellingError | Leverages a pre-defined spelling-mistake dictionary to simulate spelling mistakes. | Text Data Augmentation Made Simple by Leveraging NLP Cloud APIs (https://arxiv.org/ftp/arxiv/papers/1812/1812.04718.pdf) |
| | SwapAntWordNet | Replaces words with antonyms provided by WordNet. | - |
| | SwapNamedEnt | Swaps entities with other entities of the same category. | - |
| | SwapNum | Replaces the numbers in the input. | - |
| | SwapSynWordEmbedding | Replaces words with their neighbors in the GloVe embedding space. | - |
| | SwapSynWordNet | Replaces words with synonyms provided by WordNet. | - |
| | Tense | Transforms all verb tenses in the sentence. | - |
| | TwitterType | Replaces words with common Twitter abbreviations. | - |
| | Typos | Randomly inserts, deletes, swaps or replaces a single letter within one word (Ireland → Irland). | Synthetic and Natural Noise Both Break Neural Machine Translation (https://arxiv.org/pdf/1711.02173.pdf) |
| | WordCase | Transforms an input into upper case, lower case, or capitalized case. | - |
| RE (Relation Extraction) | InsertClause | Inserts entity descriptions for the head and tail entities. | - |
| | SwapEnt-LowFreq | A sub-transformation of EntitySwap that replaces entities in the text with random low-frequency entities of the same type. | - |
| | SwapTriplePos-Birth | Specially designed for the birth relation: paraphrases the sentence while keeping the original birth relation between the entity pair. | - |
| | SwapTriplePos-Employee | Specially designed for the employee relation: deletes the TITLE description of each employee while keeping the original employee relation between the entity pair. | - |
| | SwapEnt-SamEtype | A sub-transformation of EntitySwap that replaces entities in the text with random entities of the same type. | - |
| | SwapTriplePos-Age | Specially designed for the age relation: paraphrases the sentence while keeping the original age relation between the entity pair. | - |
| | SwapEnt-MultiType | A sub-transformation of EntitySwap that replaces entities in the text with random same-typed entities that have multiple possible types. | - |
| NER (Named Entity Recognition) | EntTypos | Swaps/deletes/adds a random character within entities. | - |
| | ConcatSent | Concatenates sentences into a longer one. | - |
| | SwapLonger | Substitutes short entities with longer ones. | - |
| | CrossCategory | Swaps entities with ones that can be labeled with different labels. | - |
| | OOV | Swaps entities with out-of-vocabulary (OOV) entities. | - |
| POS (Part-of-Speech Tagging) | SwapMultiPOSRB | The phenomenon of conversion implies that some words hold multiple parts of speech, which might confuse language models in POS tagging. Accordingly, replaces adverbs with such multi-part-of-speech words. | - |
| | SwapPrefix | Swaps the prefix of a word while keeping its part-of-speech tag. | - |
| | SwapMultiPOSVB | Like SwapMultiPOSRB, but replaces verbs with multi-part-of-speech words. | - |
| | SwapMultiPOSNN | Like SwapMultiPOSRB, but replaces nouns with multi-part-of-speech words. | - |
| | SwapMultiPOSJJ | Like SwapMultiPOSRB, but replaces adjectives with multi-part-of-speech words. | - |
| COREF (Coreference Resolution) | RndConcat | Randomly retrieves an irrelevant paragraph from the corpus and concatenates it after the original document. | - |
| | RndDelete | Deletes each sentence of the original document with some probability (20% by default), with at least one sentence deleted; related coreference labels are deleted as well. | - |
| | RndReplace | Randomly retrieves irrelevant sentences from the corpus and replaces sentences of the original document with them (20% of the original sentences by default). | - |
| | RndShuffle | Performs a number of swaps of adjacent sentence pairs in the original document (20% of the number of original sentences by default). | - |
| | RndInsert | Randomly retrieves irrelevant sentences from the corpus and inserts them into the original document (20% of the original sentences by default). | - |
| | RndRepeat | Randomly picks sentences from the original document and inserts them elsewhere in the document (20% of the original sentences by default). | - |
| ABSA (Aspect-based Sentiment Analysis) | RevTgt | Reverses the sentiment of the target aspect. | Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis (https://www.aclweb.org/anthology/2020.emnlp-main.292.pdf) |
| | RevNon | Reverses the sentiment of the non-target aspects that originally have the same sentiment as the target. | |
| | AddDiff | Adds aspects with the opposite sentiment from the target aspect. | |
| CWS (Chinese Word Segmentation) | SwapContraction | Replaces common abbreviations in the sentence with complete words of the same meaning. | - |
| | SwapNum | Replaces numerals in the sentence with other numerals of similar size. | - |
| | SwapSyn | Replaces some words in the sentence with very similar words. | - |
| | SwapName | Replaces the last or first name of a person in the sentence to produce local ambiguity unrelated to the sentence. | - |
| | SwapVerb | Transforms some verbs in the sentence into other Chinese forms. | - |
| SM (Semantic Matching) | SwapWord | Adds meaningless sentences to the premise without changing the semantics. | - |
| | SwapNum | Finds number words in the sentences and replaces them with different number words. | - |
| | Overlap | Generates data from templates in which the hypothesis and sentence1 have high word overlap but different meanings. | - |
| SA (Sentiment Analysis) | SwapSpecialEnt-Person | Identifies person names in the sentence and randomly replaces them with other entity names of the same kind. | - |
| | SwapSpecialEnt-Movie | Identifies movie names in the sentence and randomly replaces them with other movie names. | - |
| | AddSum-Movie | Identifies movie names in the sentence and inserts their summaries (from Wikipedia) after them. | - |
| | AddSum-Person | Identifies person names in the sentence and inserts their summaries (from Wikipedia) after them. | - |
| | DoubleDenial | Finds certain sentiment words in the sentence and replaces them with double negations. | - |
| NLI (Natural Language Inference) | NumWord | Finds number words in the sentences and replaces them with different number words. | Stress Test Evaluation for Natural Language Inference (https://www.aclweb.org/anthology/C18-1198/) |
| | SwapAnt | Finds keywords in the sentences and replaces them with their antonyms. | |
| | AddSent | Adds meaningless sentences to the premise without changing the semantics. | |
| | Overlap | Generates data from templates in which the hypothesis and premise have high word overlap but different meanings. | Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference (https://www.aclweb.org/anthology/P19-1334/) |
| MRC (Machine Reading Comprehension) | PerturbQuestion-MLM | Paraphrases the question (via a masked language model). | - |
| | PerturbQuestion-BackTrans | Paraphrases the question (via back translation). | - |
| | AddSentDiverse | Generates a distractor with an altered question and a fake answer. | Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension (https://arxiv.org/pdf/2004.06076) |
| | PerturbAnswer | Transforms the sentence containing the gold answer based on specific rules. | |
| | ModifyPos | Rotates the sentences of the context. | - |
| DP (Dependency Parsing) | AddSubtree | Adds a subordinate clause from WikiData to the input sentence. | - |
| | RemoveSubtree | Removes a subordinate clause from the input sentence. | - |

Subpopulation

Subpopulation identifies the specific part of the dataset on which the target model performs poorly. To retrieve a subset that meets the configuration, Subpopulation divides the dataset by sorting samples according to certain attributes. TextFlint supports the following Subpopulations:

| Subpopulation | Description | Reference |
|---------------|-------------|-----------|
| LMSubPopulation_0%-20% | Filters samples by text perplexity from a language model (i.e., GPT-2); 0-20% is the lower part of the scores. | Robustness Gym: Unifying the NLP Evaluation Landscape (https://arxiv.org/pdf/2101.04840) |
| LMSubPopulation_80%-100% | Filters samples by text perplexity from a language model (i.e., GPT-2); 80-100% is the higher part of the scores. | |
| LengthSubPopulation_0%-20% | Filters samples by text length; 0-20% is the shorter part. | |
| LengthSubPopulation_80%-100% | Filters samples by text length; 80-100% is the longer part. | |
| PhraseSubPopulation-negation | Filters samples by a group of phrases; the remaining samples contain negation words (e.g., not, don't, aren't, no). | |
| PhraseSubPopulation-question | Filters samples by a group of phrases; the remaining samples contain question words (e.g., what, which, how, when). | |
| PrejudiceSubpopulation-man | Filters samples by gender; the chosen samples only contain words related to males (e.g., he, his, father, boy). | |
| PrejudiceSubpopulation-woman | Filters samples by gender; the chosen samples only contain words related to females (e.g., she, her, mother, girl). | |
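The length-based variants above amount to sorting by an attribute and slicing a percentile band; a short illustrative sketch (not TextFlint's implementation):

```python
def length_subpopulation(texts, lo=0.0, hi=0.2):
    """Return the samples whose rank by text length falls in the
    [lo, hi) fraction of the sorted dataset."""
    ranked = sorted(texts, key=len)
    n = len(ranked)
    return ranked[int(n * lo):int(n * hi)]

texts = ['bb', 'a', 'dddd', 'ccc', 'eeeee']
shortest = length_subpopulation(texts, 0.0, 0.2)  # shortest 20%
longest = length_subpopulation(texts, 0.8, 1.0)   # longest 20%
```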

AttackRecipe

AttackRecipe aims to find a perturbation of an input text that satisfies the attack's goal of fooling the given FlintModel. In contrast to Transformation, AttackRecipe requires the prediction scores of the target model. TextFlint provides an interface to integrate easy-to-use adversarial attack recipes implemented on top of textattack. Users can refer to textattack for more information about the supported AttackRecipes.

Validator

It is crucial to verify the quality of samples generated by Transformation and AttackRecipe. TextFlint provides several metrics to calculate confidence:

| Validator | Description | Reference |
|-----------|-------------|-----------|
| MaxWordsPerturbed | Word replacement ratio of the generated text compared with the original text, based on the LCS. | - |
| LevenshteinDistance | Edit distance between the original and the generated text. | - |
| DeCLUTREncoder | Semantic similarity calculated with the Universal Sentence Encoder. | Universal Sentence Encoder (https://arxiv.org/pdf/1803.11175.pdf) |
| GPT2Perplexity | Language-model perplexity calculated with the GPT-2 model. | Language Models are Unsupervised Multitask Learners (http://www.persagen.com/files/misc/radford2019language.pdf) |
| TranslateScore | BLEU/METEOR/chrF score. | Bleu: a Method for Automatic Evaluation of Machine Translation (https://www.aclweb.org/anthology/P02-1040.pdf); METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments (https://www.aclweb.org/anthology/W05-0909.pdf); chrF: character n-gram F-score for automatic MT evaluation (https://www.aclweb.org/anthology/W15-3049.pdf) |
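As an illustration of what the LevenshteinDistance validator measures, here is the standard dynamic-programming edit distance (a generic implementation, not TextFlint's code):

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

For example, `levenshtein('word', 'worf')` is 1, matching the Keyboard example in the transformation table.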

Report

In the generation layer, TextFlint can generate three types of adversarial samples and verify the robustness of the target model. Based on the results from the generation layer, the report layer aims to provide users with a standard analysis report at the lexical, syntactic, and semantic levels. For example, on the Sentiment Analysis (SA) task, a statistical chart of the performance of XLNet with different types of Transformation/Subpopulation/AttackRecipe on the IMDB dataset shows that the model's performance is lower than the original results on all the transformed datasets.

Citation

If you are using TextFlint for your work, please cite:

@article{gui2021textflint,
  title={TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing},
  author={Gui, Tao and Wang, Xiao and Zhang, Qi and Liu, Qin and Zou, Yicheng and Zhou, Xin and Zheng, Rui and Zhang, Chong and Wu, Qinzhuo and Ye, Jiacheng and others},
  journal={arXiv preprint arXiv:2103.11441},
  year={2021}
}
Comments
  • Quick start Error

    Hi, I just installed textflint using 'pip install textflint' as recommended, under a Python 3.7 environment on CentOS Linux release 7.8.2003. I ran the quick start command 'from textflint import Engine'. The process downloaded a 764M model.zip and a 10.8M wordnet.zip, then an error appeared:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/__init__.py", line 5, in <module>
        from .input_layer import *
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/input_layer/__init__.py", line 3, in <module>
        from .dataset import Dataset
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/input_layer/dataset/__init__.py", line 1, in <module>
        from .dataset import Dataset, sample_map
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/input_layer/dataset/dataset.py", line 23, in <module>
        sample_map = get_sample_map()
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/input_layer/dataset/dataset.py", line 20, in get_sample_map
        filter_str='_sample')
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/common/utils/load.py", line 34, in task_class_load
        for module in modules:
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/site-packages/textflint/common/utils/load.py", line 27, in module_loader
        yield importlib.import_module(module_name)
      File "/data/users/wuchen/anaconda3/envs/textflint/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ModuleNotFoundError: No module named 'textflint.lib'

    opened by wuchen95 10
  • NER Task Specific Transformed CoNLL 2003 and OntoNotes v5 Training Sets

    Hi,

    I would like to use the TextFlint task-specific transformed CoNLL 2003 and OntoNotes v5 Training Sets for experiments in my paper. Are these transformed training sets readily downloadable? On textflint.io I was only able to find the transformed test sets of these datasets.

    Thank you for the help,

    Aaron

    opened by agr505 5
  • Does textflint support the Chinese Semantic Matching task?

    First of all, many thanks to the textflint open-source project and its authors! From the demo on the homepage, textflint appears to already support the Chinese Semantic Matching task. However, I ran into some difficulties when trying to use it, possibly because I am not yet familiar enough with the textflint framework. I wrote test code modeled on the Chinese Word Segmentation task, but did not obtain the corresponding robustness-augmented data. Here is my code:

    from textflint.engine import Engine
    from textflint.adapter import auto_config
    from textflint.input.config import Config
    import os
    engine = Engine()
    # Does SM only support English, or is SM shared between Chinese and English
    # (unlike CWS, which specifically means Chinese word segmentation)?
    config = auto_config(task='SM')
    config.trans_methods = [
        "SwapWord",
        ]
    config.sub_methods = []
    engine.run(os.path.normcase('test.json'), config=config)
    

    The test data test.json follows the example on the homepage: { "sentence1": "我喜欢这本书。", "sentence2": "这本书是我喜欢的。", "y": "1" }. I hope to get a reply soon, thanks!

    opened by guanghuixu 4
  • Chinese NER question

    @qzhangFDU @gpengzhi @Tribleave @jiacheng-ye @Tannidy What are the differences between Chinese named-entity augmentation and English named-entity augmentation?

    Chinese is processed word by word. When punctuation is inserted in the middle of an entity, should the predicted label split the entity into two parts, or should the span including the punctuation still count as one entity? Likewise, when a homophone, synonym, or antonym is substituted in the middle of an entity, should the result still be predicted as an entity or not?

    For cross-category entity replacement augmentation, how should the replacement category be determined?

    Thanks!

    good first issue 
    opened by 447428054 4
  • CrossCategory, OOV, SwapLonger not working for NER Task

    Hi,

    I was able to use TextFlint to augment the CoNLL 2003 training data with the EntTypos and ConcatSent transformations for the NER task. Thank you very much for providing this. However, it presents an error when trying to use the other task-specific transformations (CrossCategory, OOV, SwapLonger) on the NER task, showing this for each of those transformations:

    ValueError: Method CrossCategory is not allowed in task NER

    Are these transformations supported?

    Best Regards,

    Aaron

    opened by agr505 3
  • Unable to download detachable_word file

    https://textflint.oss-cn-beijing.aliyuncs.com/download/CWS_DATA/detachable_word seems broken, which causes CWS SwapVerb to not work properly. Could you help me check it? Thank you!

    opened by JackGao09 3
  • Problems with "pip install textflint"

    I have successfully installed textflint. But when I import textflint, I get the error: ModuleNotFoundError: No module named 'textflint.lib'. Could you help me solve this problem? Thank you.

    opened by random2719 3
  • Run Quick start Error

    Traceback (most recent call last):
      File "d:/pyscript/test_text.py", line 1, in <module>
        from TextFlint import Engine
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\__init__.py", line 7, in <module>
        from .generation_layer.generator import Generator
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\generation_layer\generator\__init__.py", line 1, in <module>
        from .generator import Generator
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\generation_layer\generator\generator.py", line 12, in <module>
        from ...input_layer.dataset import Dataset
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\input_layer\dataset\__init__.py", line 1, in <module>
        from .dataset import Dataset, sample_map
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\input_layer\dataset\dataset.py", line 23, in <module>
        sample_map = get_sample_map()
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\input_layer\dataset\dataset.py", line 20, in get_sample_map
        filter_str='_sample')
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\common\utils\load.py", line 34, in task_class_load
        for module in modules:
      File "D:\pyscript\work\lib\site-packages\textflint-0.0.1-py3.6.egg\TextFlint\common\utils\load.py", line 27, in module_loader
        yield importlib.import_module(module_name)
      File "c:\users\huisuan016\appdata\local\programs\python\python36\lib\importlib\__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ModuleNotFoundError: No module named 'TextFlint-0'

    opened by ValarMorghulis2018 3
  • Click dependency

    Hello,

    I am wondering why textflint has a pinned dependency on click (click==7.1.1 in the requirements.txt), as from my understanding click is not even used in the project, the cli script seems to use argparse instead. Would it be possible to get rid of that dependency?

    opened by helpmefindaname 2
  • Is anyone still maintaining this project?

    There are so many typos in the code misleading users. For example, in 4_Sample_Dataset.ipynb the first line is "from TextFlint.input_layer.component.sample.sa_sample import SASample", which should be "from textflint.input.component.sample.sa_sample import SASample". Try to be a user-friendly project, at least for the tutorial material.

    documentation 
    opened by EdwardMao 2
  • Error about installing textflint with setup.py

    I installed textflint in my project via setup.py. However, an error occurred when I tried to import the package; below is the screenshot:

    (screenshot omitted)

    I have checked the variable module_name in importlib.import_module(module_name), which is textflint-0.0.3-py3.8.egg.textflint.input_layer.component.sample.absa_sample

    Any suggestions to solve this issue?

    opened by JackGao09 2
  • POS Swap failing with VB a lot

    I am running the pos swap with the parameter 'VB' and I am just concerned with how much it is failing. Of 1000 samples, only 82 were successful. Here are some examples of failures:

    Failed to get result for ['two', 'child', 'playing', 'in', 'the', 'house'] with transform SwapMultiPOS-VB
    Failed to get result for ['a', 'goat', 'attacks', 'a', 'man', 'and', 'the', 'man', 'fights', 'back'] with transform SwapMultiPOS-VB
    Failed to get result for ['a', 'man', 'shows', 'how', 'a', 'video', 'game', 'works'] with transform SwapMultiPOS-VB
    Failed to get result for ['someone', 'is', 'frying', 'food'] with transform SwapMultiPOS-VB
    Failed to get result for ['a', 'woman', 'is', 'ripping', 'off', 'a', 'man', 'clothes'] with transform SwapMultiPOS-VB
    Failed to get result for ['a', 'diver', 'goes', 'underwater'] with transform SwapMultiPOS-VB
    Failed to get result for ['a', 'young', 'boy', 'rocks', 'out', 'on', 'a', 'guitar'] with transform SwapMultiPOS-VB
    Failed to get result for ['someone', 'is', 'drawing', 'pictures'] with transform SwapMultiPOS-VB

    There are clearly verbs present, so maybe it is not matching strict verbs (VB) only and should perhaps also include POS tags like VBG?

    opened by Maddy12 0
  • Negate where add at end

    There was an internal error reported that you wanted me to create an issue for.

    This is for transforming ReverseNeg in UT.

    The input tokens were ['guy', 'explaining', 'what', 'stiff', 'person', 'syndrome', 'is']. The transformation tries to insert next to the root id, which in this case is `is`. However, an index-out-of-range error occurs because `is` is at the end of the phrase.

    This occurs here: https://github.com/textflint/textflint/blob/master/textflint/generation/transformation/UT/reverse_neg.py#L150

    opened by Maddy12 0
  • About OCR generation

    Hi, guys! I am trying to reuse the OCR transformation module in TextFlint, but I somehow find it rather trivial... I quote the code of the OCR rules from the source below:

    mapping = {
                '0': ['8', '9', 'o', 'O', 'D'],
                '1': ['4', '7', 'l', 'I'],
                '2': ['z', 'Z'],
                '5': ['8'],
                '6': ['b'],
                '8': ['s', 'S', '@', '&'],
                '9': ['g'],
                'o': ['u'],
                'r': ['k'],
                'C': ['G'],
                'O': ['D', 'U'],
                'E': ['B']
            }
    

    Here, the rules do not even cover the alphabet... And there are surely more rules, e.g., "w" => "vv" and "m" => "rn". I have found a dataset here (https://github.com/jie-mei/MiBio-OCR-dataset) which contains OCR errors retrieved from real-world documents. Although I find it quite annoying to parse the files in the aforementioned dataset... I believe it may be beneficial to this work!
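For reference, the quoted mapping can be exercised with a deterministic sketch (TextFlint itself samples replacements at random; picking the first candidate here just makes the behavior reproducible):

```python
# OCR confusion rules quoted from the issue above
mapping = {
    '0': ['8', '9', 'o', 'O', 'D'],
    '1': ['4', '7', 'l', 'I'],
    '2': ['z', 'Z'],
    '5': ['8'],
    '6': ['b'],
    '8': ['s', 'S', '@', '&'],
    '9': ['g'],
    'o': ['u'],
    'r': ['k'],
    'C': ['G'],
    'O': ['D', 'U'],
    'E': ['B'],
}

def ocr_noise(text):
    """Replace every mappable character with its first OCR confusion
    candidate; unmapped characters pass through unchanged."""
    return ''.join(mapping.get(ch, [ch])[0] for ch in text)
```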

    good first issue TODO 
    opened by LC-John 2
Releases(v0.1.0)
  • v0.1.0(Mar 15, 2022)

    New Features

    Add 6 Chinese NLP tasks support

    This update adds preprocessing and transformations for 6 Chinese NLP tasks, including Machine Reading Comprehension, Semantic Matching, Named Entity Recognition, and Sentiment Analysis.

    It provides 15 universal transformations and 12 specific transformations.

    Add support for 3 English NLP tasks

    Now supports transformations for Neural Machine Translation between English and German.

    Now supports transformations for Word Sense Disambiguation.

    Now supports transformations for the Winograd Schema Challenge.

    Fix

    Update requirements.

    Update tutorial docs to synchronize with toolset version.

    Source code(tar.gz)
    Source code(zip)
  • v0.0.5(Aug 30, 2021)

    Performance

    1> Update README: provide more tutorial docs and related links

    Fix

    1> Fix a bug where the POS tagging components were not initialized.

    2> Fix a CSV-loading bug in the NER sample.

    3> Fix several bugs in FlintModel for NER, and update the corresponding tutorial.

    Source code(tar.gz)
    Source code(zip)
  • v0.0.4(Jul 9, 2021)

    Performance

    1> Optimize installation by removing textattack from the requirements, since textattack depends on many packages that can cause installation to fail. It is recommended to install textattack manually when adversarial attack is needed.

    2> Speed up the loading process of textflint from about 1 minute to 3 seconds.

    Source code(tar.gz)
    Source code(zip)
  • v0.0.3(Apr 21, 2021)

    Features

    1. Add command-line support
    2. Reconstruct Engine interfaces

    Fix

    1. Incomplete bar chart display
    2. UT sample is_legal bug
    3. Specify importlib-metadata lib version
    4. FlintModel load bug
    Source code(tar.gz)
    Source code(zip)
  • v0.0.2(Apr 9, 2021)

    Input layer: receives textual datasets and models as input, represented as Dataset and FlintModel respectively.

    • Dataset: a container for Sample that provides efficient and convenient operation interfaces. Dataset supports loading, verifying, and saving data in JSON or CSV format for various NLP tasks.
    • FlintModel: a target model used in an adversarial attack.

    Generation layer: consists of four main parts:

    • Subpopulation: generates a subset of a Dataset.
    • Transformation: transforms each sample of a Dataset if it can be transformed.
    • AttackRecipe: attacks the FlintModel and generates a Dataset of adversarial examples.
    • Validator: verifies the quality of samples generated by Transformation and AttackRecipe.

    Report layer: analyzes model testing results and provides a robustness report for users.
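    The three layers above compose into a simple pipeline. A toy mock of the data flow (plain Python with invented class names, not the real textflint API):

```python
# Toy stand-ins for the three layers described above (not the real API).

class Dataset:
    """Input layer: a container of samples."""
    def __init__(self, samples):
        self.samples = list(samples)

class UppercaseTransformation:
    """Generation layer: plays the role of a Transformation."""
    def transform(self, sample):
        return sample.upper()

class LengthValidator:
    """Generation layer: plays the role of a Validator."""
    def is_valid(self, sample):
        return len(sample) > 0

def generate(dataset, transformation, validator):
    """Transform each sample and keep only results the validator accepts."""
    out = [transformation.transform(s) for s in dataset.samples]
    return Dataset([s for s in out if validator.is_valid(s)])

def report(original, transformed):
    """Report layer: a minimal summary of the run."""
    return {"in": len(original.samples), "out": len(transformed.samples)}

data = Dataset(["a man walks", "a woman sings"])
result = generate(data, UppercaseTransformation(), LengthValidator())
print(report(data, result))  # {'in': 2, 'out': 2}
```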

    Source code(tar.gz)
    Source code(zip)
Owner
TextFlint
Text Robustness Evaluation Toolkit