A deep learning-based translation library built on Huggingface transformers

Overview

DL Translate

A deep learning-based translation library built on Huggingface transformers and Facebook's mBART-Large

💻 GitHub Repository
📚 Documentation / Readthedocs
🐍 PyPi project
🧪 Colab Demo / Kaggle Demo

Quickstart

Install the library with pip:

pip install dl-translate

To translate some text:

import dl_translate as dlt

mt = dlt.TranslationModel()  # Slow when you load it for the first time

text_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
mt.translate(text_hi, source=dlt.lang.HINDI, target=dlt.lang.ENGLISH)

Above, you can see that dlt.lang contains variables representing each of the 50 available languages with auto-complete support. Alternatively, you can specify the language (e.g. "Arabic") or the language code (e.g. "fr_XX" for French):

text_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
mt.translate(text_ar, source="Arabic", target="fr_XX")

If you want to verify whether a language is available, you can check it:

print(mt.available_languages())  # All languages that you can use
print(mt.available_codes())  # Code corresponding to each language accepted
print(mt.get_lang_code_map())  # Dictionary of lang -> code

Usage

Selecting a device

When you load the model, you can specify the device:

mt = dlt.TranslationModel(device="auto")

By default, the value is device="auto", which means a GPU will be used if one is available. You can also explicitly set device="cpu" or device="gpu", or any other string accepted by torch.device(). In general, a GPU is recommended if you want reasonable processing times.
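
If you prefer to decide at runtime yourself, here is a minimal sketch (using torch directly, which this library already depends on; the explicit check is illustrative rather than required):

import torch
import dl_translate as dlt

# Choose the device explicitly instead of relying on "auto" (illustrative).
device = "cuda" if torch.cuda.is_available() else "cpu"
mt = dlt.TranslationModel(device=device)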

Loading from a path

By default, dlt.TranslationModel will download the model from the huggingface repo and cache it. However, you are free to load from a path:

mt = dlt.TranslationModel("/path/to/your/model/directory/")

Make sure that your tokenizer is also stored in the same directory if you use this approach.
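
If you don't have a local copy yet, one way to create such a directory is to save both the model and the tokenizer with huggingface's save_pretrained. This is a minimal sketch, using the mbart50 checkpoint purely as an example:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Download once, then save the model and tokenizer to the same directory.
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model.save_pretrained("/path/to/your/model/directory/")
tokenizer.save_pretrained("/path/to/your/model/directory/")

Since the model family cannot be inferred from a local path, you may also need to pass it explicitly (see the v0.2.0 release notes below), e.g. mt = dlt.TranslationModel("/path/to/your/model/directory/", model_family="mbart50").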

Using a different model

You can also choose another model that has a similar format, e.g.

mt = dlt.TranslationModel("facebook/mbart-large-50-one-to-many-mmt")

Note that the available languages will change if you do this, so you will not be able to leverage dlt.lang or dlt.utils.

Breaking down into sentences

It is not recommended to feed in extremely long texts, as they take longer to process. Instead, you can break them down into sentences with the help of nltk. First install the library with pip install nltk, then run:

import nltk

nltk.download("punkt")

text = "Mr. Smith went to his favorite cafe. There, he met his friend Dr. Doe."
sents = nltk.tokenize.sent_tokenize(text, "english")  # don't use dlt.lang.ENGLISH
" ".join(mt.translate(sents, source=dlt.lang.ENGLISH, target=dlt.lang.FRENCH))

Batch size and verbosity when using translate

You can set a batch size (i.e. the number of elements processed at once) for mt.translate, and choose whether to display a progress bar:

...
mt = dlt.TranslationModel()
mt.translate(text, source, target, batch_size=32, verbose=True)

If you set batch_size=None, the entire text will be computed at once rather than split into "chunks". We recommend lowering batch_size if you do not have much RAM or VRAM and run into CUDA out-of-memory errors, and raising it if you are using a high-end GPU whose VRAM is not fully utilized.
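
If you are unsure what value fits in memory, a hypothetical helper (safe_translate is not part of dl-translate) could halve the batch size whenever a CUDA out-of-memory error occurs:

def safe_translate(mt, texts, source, target, batch_size=32):
    """Illustrative sketch: retry with smaller batches until translation fits in memory."""
    while batch_size >= 1:
        try:
            return mt.translate(texts, source=source, target=target, batch_size=batch_size)
        except RuntimeError as e:
            # CUDA OOM surfaces as a RuntimeError mentioning "out of memory".
            if "out of memory" not in str(e).lower():
                raise
            batch_size //= 2
    raise RuntimeError("Ran out of memory even with batch_size=1")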

dlt.utils module

An alternative to mt.available_languages() is the dlt.utils module. You can use it to find out which languages and codes are available:

print(dlt.utils.available_languages('mbart50'))  # All languages that you can use
print(dlt.utils.available_codes('mbart50'))  # Code corresponding to each language accepted
print(dlt.utils.get_lang_code_map('mbart50'))  # Dictionary of lang -> code

Advanced

The following section assumes you have knowledge of PyTorch and Huggingface Transformers.

Saving and loading

If you wish to speed up the loading time of the translation model, you can use save_obj:

mt = dlt.TranslationModel()
mt.save_obj('saved_model')
# ...

Then later you can reload it with load_obj:

mt = dlt.TranslationModel.load_obj('saved_model')
# ...

Warning: Only use this if you are certain the torch module saved in saved_model/weights.pt can be correctly loaded. Indeed, the versions of huggingface transformers, torch, or other dependencies may change between the time you call save_obj and the time you call load_obj, which could break your code. Thus, it is recommended to only run load_obj in the same environment/session as save_obj. Note that this method might be deprecated in the future once there is no speed benefit to loading this way.
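
One way to guard against such mismatches, sketched below under the assumption that you manage the file yourself (versions.json is purely illustrative and not created by this library), is to record the dependency versions next to the saved weights and check them before calling load_obj:

import json

import torch
import transformers

# Record the versions used at save time; compare them before calling load_obj.
versions = {"torch": torch.__version__, "transformers": transformers.__version__}
with open("saved_model/versions.json", "w") as f:
    json.dump(versions, f)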

Interacting with underlying model and tokenizer

When initializing the model, you can pass in arguments for the underlying BART model and tokenizer (which will be passed to MBartForConditionalGeneration.from_pretrained and MBart50TokenizerFast.from_pretrained, respectively):

mt = dlt.TranslationModel(
    model_options=dict(
        state_dict=...,
        cache_dir=...,
        ...
    ),
    tokenizer_options=dict(
        tokenizer_file=...,
        eos_token=...,
        ...
    )
)

You can also access the underlying transformers model and tokenizer:

bart = mt.get_transformers_model()
tokenizer = mt.get_tokenizer()

See the huggingface docs for more information.
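
For example, with direct access to the tokenizer you can count how many tokens a text produces, which can help you decide whether to split it first. This is a sketch; the 512 threshold below is an assumption for illustration, not a documented limit of this library:

tokenizer = mt.get_tokenizer()

text = "Mr. Smith went to his favorite cafe. There, he met his friend Dr. Doe."
n_tokens = len(tokenizer(text)["input_ids"])  # huggingface tokenizers return input_ids
if n_tokens > 512:
    print("Consider breaking this text into sentences first.")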

bart_model.generate() keyword arguments

When running mt.translate, you can also give a generation_options dictionary that is passed as keyword arguments to the underlying bart_model.generate() method:

mt.translate(
    text,
    source=dlt.lang.GERMAN,
    target=dlt.lang.SPANISH,
    generation_options=dict(num_beams=5, max_length=...)
)

Learn more in the huggingface docs.

Acknowledgement

dl-translate is built on top of Huggingface's implementation of multilingual BART finetuned on many-to-many translation of over 50 languages, which is documented here. The original paper was written by Tang et al. from Facebook AI Research; you can find it here and cite it as follows:

@article{tang2020multilingual,
  title={Multilingual translation with extensible multilingual pretraining and finetuning},
  author={Tang, Yuqing and Tran, Chau and Li, Xian and Chen, Peng-Jen and Goyal, Naman and Chaudhary, Vishrav and Gu, Jiatao and Fan, Angela},
  journal={arXiv preprint arXiv:2008.00401},
  year={2020}
}

dlt is a wrapper with useful utils to save you time. Huggingface's transformers documentation shows the following snippet as an example:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."

# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."

With dlt, you can run:

import dl_translate as dlt

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."

mt = dlt.TranslationModel()
translated_fr = mt.translate(article_hi, source=dlt.lang.HINDI, target=dlt.lang.FRENCH)
translated_en = mt.translate(article_ar, source=dlt.lang.ARABIC, target=dlt.lang.ENGLISH)

Notice that you don't have to think about tokenizers, conditional generation, pretrained models, or regional codes; you can just tell the model what to translate!

If you are experienced with huggingface's ecosystem, then you should be familiar enough with the example above that you wouldn't need this library. However, if you've never heard of huggingface or mBART, then I hope using this library will give you enough motivation to learn more about them :)

Comments
  • module 'torch' has no attribute 'device'

    Hello @xhlulu, please find attached the part of the tutorial that I tried to execute and where I get the error. NB: I followed the PyTorch installation guide and used the command appropriate for my system: pip3 install torch torchvision torchaudio. My torch version is 1.10.1 and my Python version is 3.8.5.

    [screenshots attached]

    Thank you for your help.

    opened by gitassia 9
  • Offline mode tutorial

    Hi, sorry for my bad English, I am quite a newbie. I am confused by the offline tutorial: "Now, move everything in the dlt directory to your offline environment. Create a virtual environment:" Where is the "offline environment", and how do I create a "virtual environment"? I am using Windows 11 and Python 3.9.

    opened by kucingkembar 6
  • error on pyw extension

    Hi, it's me again, sorry again for my bad English. I tried this code in a .py file opened in Python IDLE (Run -> Run Module, F5) with no problem. Then I renamed the extension to .pyw, opened it like an exe (double click), and this is the result:

    Traceback (most recent call last):
      File "D:\Script\translate.pyw", line 67, in FB_Loading
        import dl_translate as dlt
      File "C:\Python\Python39\lib\site-packages\dl_translate\__init__.py", line 3, in <module>
        from ._translation_model import TranslationModel
      File "C:\Python\Python39\lib\site-packages\dl_translate\_translation_model.py", line 5, in <module>
        import transformers
      File "C:\Python\Python39\lib\site-packages\transformers\__init__.py", line 43, in <module>
        from . import dependency_versions_check
      File "C:\Python\Python39\lib\site-packages\transformers\dependency_versions_check.py", line 36, in <module>
        from .file_utils import is_tokenizers_available
      File "C:\Python\Python39\lib\site-packages\transformers\file_utils.py", line 58, in <module>
        logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
      File "C:\Python\Python39\lib\site-packages\transformers\utils\logging.py", line 119, in get_logger
        _configure_library_root_logger()
      File "C:\Python\Python39\lib\site-packages\transformers\utils\logging.py", line 82, in _configure_library_root_logger
        _default_handler.flush = sys.stderr.flush
    AttributeError: 'NoneType' object has no attribute 'flush'
    

    any guide to fix this?

    opened by kucingkembar 4
  • Add MarianNMT

    See Marian: https://huggingface.co/transformers/model_doc/marian.html See helsinki-nlp's models: https://huggingface.co/Helsinki-NLP

    We'd need

    • [ ] Add option to load the marian architecture at initialization (e.g. dlt.TranslationModel("marian"))
    • [ ] Add an option to find all of the languages (and code) available for a certain variant trained using marian, e.g. dlt.utils.available_languages("opus-en-romance")
    • [ ] An option to leverage autocomplete, such as dlt.lang.opus.en_romance.ENGLISH, but with options limited to what's available for the variant (i.e. romance)
    • [ ] TBD
    enhancement 
    opened by xhluca 3
  • no load to ram mode

    Hi, it's me again, and sorry about my bad English. I have a project using this software on Windows tablets with 4 GB of RAM. The problem is that its RAM consumption is quite high, about 2.3 GB. Is there any way to make this software read from storage (SSD or HDD) instead of RAM?

    thank you for reading, and have a nice day

    opened by kucingkembar 2
  • error: when using torch(1.8.0+cu111)

    Traceback (most recent call last):
      File "translate_test.py", line 66, in <module>
        translate_test()
      File "translate_test.py", line 30, in translate_test
        rest = mt.predict(texts, _from = 'en', batch_size = size)
      File "/mnt/eclipse-glority/receipt/deploy/branches/dev/ms_deploy/util/translate_util.py", line 29, in predict
        rest = self.mt.translate(texts, source=_from, target=_to, batch_size = batch_size)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/dl_translate/_translation_model.py", line 197, in translate
        **encoded, **generation_options
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/generation_utils.py", line 927, in generate
        model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/generation_utils.py", line 412, in _prepare_encoder_decoder_kwargs_for_generation
        model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 780, in forward
        output_attentions=output_attentions,
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 388, in forward
        hidden_states = self.activation_fn(self.fc1(hidden_states))
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 94, in forward
        return F.linear(input, self.weight, self.bias)
      File "/home/hyj/anaconda3/envs/tf25/lib/python3.7/site-packages/torch/nn/functional.py", line 1753, in linear
        return torch._C._nn.linear(input, weight, bias)
    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

    torch          1.8.0+cu111
    torchvision    0.9.0+cu111

    It works fine with torch 1.7.1+cu101. How can I fix this?

    opened by hongyinjie 2
  • Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)

    When I use dl_translate, the following warning appears. How do I set TOKENIZERS_PARALLELISM?

    huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using tokenizers before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
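
    For reference, a minimal way to set this variable from Python (a sketch; the general advice is to set it before tokenizers are first used, ideally before the imports that load them):

    import os

    os.environ["TOKENIZERS_PARALLELISM"] = "false"  # or "true"; set before tokenizers are used
    import dl_translate as dlt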

    opened by Kouuh 2
  • Incorporating ISO-639

    opened by xhluca 2
  • Cannot run with device = 'gpu' on Macbook M1 Pro

    I tried using the GPU on a 16-inch MacBook M1 Pro and got this error: "AssertionError: Torch not compiled with CUDA enabled"

    Please help!

    opened by htnha 1
  • how to make (slow) translation faster

    Hi, I am testing this code on a list of 5 short sentences. The average translation time is 2 seconds per sentence, which is too slow for my requirements. Any hints on how to speed up the translation? Thanks

    import dl_translate as dlt
    import time
    from langdetect import detect  # needed for detect() below; missing in the original snippet
    
    french_sentence = 'oh mon dieu c mechant c pas possible jamais je reviendrai, a deconseiller. je vous recommende de visiter un autre produit apres vous pouvez voire la difference'
    arabic_sentence = '  لقد جربت عدة نسخ من هذا المنتج لكن لم استطع ان اجد فبه ما ينتج ما هذا الهراء'
    ar2 = 'المنتج الاصلى سريع الذوبان فى الماء ويذوب بشكل مثالى على عكس المكمل المغشوش ...منتج كويس انا حبيتو و بنصح فيه'
    ar3= 'امشي سيدا لفه الثانيه يسار تعدد المطالبات المتعلقة بالأراضي وما ينتج عن ذلك من تناحر يولد باستمرار نزاعات متجددة. ... ويمكن دمج ما ينتج عن ذلك من معارف في إطار برنامج عمل نيروبي' 
    nepali ='यो मृत्युदर विकासशील देशहरुमा धेरै छ'
    sent_list =[french_sentence, arabic_sentence, ar2, ar3, nepali]
    print(sent_list)
    mt = dlt.TranslationModel()  # Slow when you load it for the first time
    map_langdetect_to_translate = {'ar':'Arabic', 'en':'English', 'es':'Spanish', 'fr':'French', 'ne':'Nepali'}
    start = time.time() 
    for sent in sent_list:
    	print('-------------------------------------')
    	print('original sentence is : ',sent)
    	print('detected lang ',detect(sent))
    	mapped = map_langdetect_to_translate[detect(sent)]
    	translated = mt.translate(sent, source=mapped, target="en")
    	print('Translation is : ',translated)
    
    end = time.time()	
    tt = time.strftime("%H:%M:%S", time.gmtime(end-start))
    time_message = 'Query execution time : {}'.format( tt )
    print(time_message)
    
    opened by banyous 1
  • Generate docs with sphinx or something else

    Right now I have some docstrings, but it would require some refactoring. Using readthedocs.io would be nice; we could start by looking at what numpy or pydata uses.

    documentation 
    opened by xhluca 1
  • Detect source language with langdetect package

    The langdetect package has worked well for me in the past for language detection problems. How would you feel about allowing users to pass 'auto' as an option for source? I can see some pros and cons:

    Pros

    • Users don't need to be able to recognize a language to translate
    • Eliminates pre-classification of languages if your dataset contains multiple languages

    Cons

    I'm a little new to open source but I would love to contribute 🙂 Of course, if you feel this doesn't fit this package's mission that's totally understandable.
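
    For illustration, here is a rough sketch of what 'auto' could look like (the translate_auto helper and the code-to-name mapping below are hypothetical, not part of dl-translate):

    from langdetect import detect

    # Partial, illustrative mapping from langdetect ISO-639-1 codes to dlt language names.
    ISO_TO_DLT = {"en": "English", "fr": "French", "ar": "Arabic", "ne": "Nepali"}

    def translate_auto(mt, text, target):
        code = detect(text)        # e.g. "fr" for French input
        source = ISO_TO_DLT[code]  # raises KeyError for unmapped codes
        return mt.translate(text, source=source, target=target)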

    enhancement help wanted good first issue 
    opened by awalker88 5
  • Support for sentence splitting

    Right now TranslationModel.translate will translate each input string as is, which can be extremely slow for longer sequences due to the quadratic runtime of the architecture. The current recommended way is to use nltk:

    import nltk
    
    nltk.load("punkt")
    
    text = "Mr. Smith went to his favorite cafe. There, he met his friend Dr. Doe."
    sents = nltk.tokenize.sent_tokenize(text, "english")  # don't use dlt.lang.ENGLISH
    " ".join(model.translate(sents, source=dlt.lang.ENGLISH, target=dlt.lang.FRENCH))
    

    This works well but doesn't cover all available languages. It would be interesting to train the punkt model on each of the languages made available (though we'd need a very large dataset for that). Once that's done, the snippet above could become a simple argument, e.g. model.translate(..., max_length="sentence"). With some more effort, the max_length parameter could also be an integer n between 0 and 512, representing the maximum token length. Moreover, rather than truncating at that length, we could break the input text into sequences of length n or less, each aggregating consecutive sentences; see the sketch below.
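
    A rough sketch of that last idea (the pack_sentences helper is hypothetical): greedily group consecutive sentences into chunks of at most max_tokens tokens, using the model's tokenizer to measure lengths:

    def pack_sentences(sents, tokenizer, max_tokens=512):
        # Greedily pack consecutive sentences while staying under max_tokens.
        chunks, current, current_len = [], [], 0
        for sent in sents:
            n = len(tokenizer(sent)["input_ids"])
            if current and current_len + n > max_tokens:
                chunks.append(" ".join(current))
                current, current_len = [], 0
            current.append(sent)
            current_len += n
        if current:
            chunks.append(" ".join(current))
        return chunks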

    enhancement help wanted 
    opened by xhluca 3
Releases(v0.2.6)
  • v0.2.6(Jul 13, 2022)

  • v0.2.2.post1(Aug 21, 2021)

  • v0.2.2(Apr 9, 2021)

    Change languages available in dlt.lang

    Changed

    • Docs: Available languages now include "Khmer" (which maps to central khmer)

    Fixed

    • dlt.lang will now have all the languages corresponding to m2m100 instead of mbart50
  • v0.2.1(Apr 8, 2021)

    Fix dlt.TranslationModel.load_obj

    Added

    • New tests for saving and loading.

    Fixed

    • dlt.TranslationModel.load_obj: Will now work without having to explicitly give the model family.
  • v0.2.0(Apr 8, 2021)

    Add m2m100 as the new default model to support 100 languages

    Added

    • dlt.lang.m2m100 module: Now has variables for over 100 languages, also auto-complete ready. Example: dlt.lang.m2m100.ENGLISH.
    • dlt.utils.available_languages, dlt.utils.available_codes: Now supports argument "m2m100"
    • Available languages for each model family
    • Script and template to generate available languages

    Changed

    • [BREAKING] dlt.lang.TranslationModel: A new model parameter called model_family in the initialization function. Either "mbart50" or "m2m100". By default, it will be inferred based on model_or_path. Needs to be explicitly set if model_or_path is a path.
    • [BREAKING] Default model changed to m2m100
    • Docs and readme about mbart50 were reframed to take into account the new model
    • dlt.TranslationModel.translate: Improved docstring to be more general.
    • Tests pertaining to m2m100
    • scripts/generate_langs.py: Renamed, mechanism now changed to loading from json files
    • docs/index.md: Expand the "Usage" and "Advanced" sections
    • README.md: Add acknowledgement about m2m100, significantly trim "Advanced" section, make "Usage" more concise

    Fixed

    • dlt.TranslationModel.available_codes() was returning the languages instead of the codes. It will now correctly return the code.

    Removed

    • Output type hints for TranslationModel.get_transformers_model and TranslationModel.get_tokenizer
    • [BREAKING] dlt.TranslationModel.bart_model and dlt.TranslationModel.tokenizer are no longer available to be used directly. Please use dlt.TranslationModel.get_transformers_model and dlt.TranslationModel.get_tokenizer instead.
  • v0.2.0rc1(Mar 21, 2021)

    Add m2m100 as an alternative to mbart50

    m2m100 supports more languages (~110), and its authors have also reported absolute BLEU scores.

    Added

    • dlt.lang.m2m100 module: Now has variables for over 100 languages, also auto-complete ready. Example: dlt.lang.m2m100.ENGLISH.
    • dlt.utils.available_languages, dlt.utils.available_codes: Now supports argument "m2m100"

    Changed

    • [BREAKING] dlt.lang.TranslationModel: A new model parameter called model_family in the initialization function. Either "mbart50" or "m2m100". By default, it will be inferred based on model_or_path. Needs to be explicitly set if model_or_path is a path.
    • dlt.TranslationModel.translate: Improved docstring to be more general.
    • Tests pertaining to m2m100
    • scripts/generate_langs.py: Renamed, mechanism now changed to loading from json files

    Fixed

    • dlt.TranslationModel.available_codes() was returning the languages instead of the codes. It will now correctly return the code.

    Removed

    • Output type hints for TranslationModel.get_transformers_model and TranslationModel.get_tokenizer
    • [BREAKING] dlt.TranslationModel.bart_model and dlt.TranslationModel.tokenizer are no longer available to be used directly. Please use dlt.TranslationModel.get_transformers_model and dlt.TranslationModel.get_tokenizer instead.
  • v0.1.0(Mar 17, 2021)
