💥 Fast State-of-the-Art Tokenizers optimized for Research and Production

Overview

🤗 Tokenizers provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking. It's always possible to get the part of the original sentence that corresponds to a given token.
  • Does all the pre-processing: truncate, pad, and add the special tokens your model needs (see the sketch below).
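
For instance, truncation, padding, and BERT-style special tokens can all be configured directly on the tokenizer. Below is a minimal sketch using the enable_truncation, enable_padding, and post_processor APIs; the special-token ids are placeholders that depend on your trained vocabulary.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.processors import TemplateProcessing

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

# Cap sequences at 512 tokens and pad shorter ones with [PAD]
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(pad_token="[PAD]")

# Wrap every sequence with the special tokens a BERT-like model expects
# (the ids 1 and 2 are placeholders; use the ids from your own vocabulary)
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)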

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (the original implementation)
  • Python
  • Node.js

Quick example using Python:

Choose your model from Byte-Pair Encoding, WordPiece, or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
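
To check what the pre-tokenizer produces, you can run it on a string directly; pre_tokenize_str returns the pieces together with their character offsets (a small sketch, assuming the Whitespace pre-tokenizer configured above):

print(tokenizer.pre_tokenizer.pre_tokenize_str("Hello, y'all!"))
# [('Hello', (0, 5)), (',', (5, 6)), ('y', (7, 8)), ("'", (8, 9)), ('all', (9, 12)), ('!', (12, 13))]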

Then training your tokenizer on a set of files takes just two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
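
The trained pipeline can then be serialized to a single JSON file and reloaded later (as in the quicktour; the file name is only an example):

tokenizer.save("tokenizer-wiki.json")
tokenizer = Tokenizer.from_file("tokenizer-wiki.json")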

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the Python documentation or the Python quicktour to learn more!


Comments
  • Can't import any modules


    What it says on the tin. Every module I try importing into a script spits out a "module not found" error.

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from tokenizers.tools import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'tokenizers.tools' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/tokenizers/tools/__init__.py)

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from transformers import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'transformers' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/transformers/__init__.py)

    I've tried:

    import BertWordPieceTokenizer
    from tokenizers.toold import AutoTokenizer
    from tokenizers import BartTokenizer

    To give just a few examples.

    I've installed Tokenizers in an anaconda3 venv via pip, via conda forge, and compiled from source.

    I've tried installing Transformers as well and get the same errors. I've tried installing Tokenizers and then installing Transformers and got the same errors.

    I've tried installing Transformers and then Tokenizers and gotten the same error.

    I've looked through the Tokenizers code and, unless I'm missing something (entirely possible), AutoTokenizer isn't even part of the package? I'll admit I'm not a very experienced programmer, but I'll be damned if I can find it.
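
    For reference, a sketch of where these classes live in recent versions of the two libraries (an assumption, not verified against this exact environment): BertWordPieceTokenizer is exported at the top level of tokenizers, while AutoTokenizer belongs to transformers, not tokenizers:

    # BertWordPieceTokenizer ships with tokenizers itself
    from tokenizers import BertWordPieceTokenizer
    # AutoTokenizer comes from transformers, not from tokenizers
    from transformers import AutoTokenizer

    tok = BertWordPieceTokenizer(lowercase=True)
    auto = AutoTokenizer.from_pretrained("bert-base-uncased")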

    Help would be appreciated.

    System specs are:

    Linux Mint 21.1, RTX 2080 Ti, i7-8700K

    cuDNN 8.1.1, CUDA 11.2.0, TensorRT 7.2.3, Python 3.7 (by the way, figuring out what was needed here, finding the files, and actually installing them was beyond arduous. There has to be a better way. It's the only way I could get anything at all to work, though).

    opened by kronkinatorix 1
  • How to decode with the existing tokenizer


    I trained the tokenizer following the Hugging Face tutorial:

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.trainers import BpeTrainer
    from tokenizers.pre_tokenizers import Whitespace
    
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
    tokenizer.pre_tokenizer = Whitespace()
    files = [f"wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
    tokenizer.train(files, trainer)
    tokenizer.save("tokenizer-wiki.json")
    

    But I don't know how to use the existing tokenizer for decoding:

    tokenizer = Tokenizer.from_file("tokenizer-wiki.json")
    o=tokenizer.encode("sd jk sds  sds")
    tokenizer.decode(o.ids)
    # s d j k s ds s ds
    

    I know we can recover the output with o.offsets, but what if we do not know the offsets, e.g. when decoding from a language model or an NMT system?
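
    A related knob is the tokenizer's decoder, which controls how decode joins sub-word tokens back together. A minimal sketch (this only round-trips cleanly when the model marks sub-words, e.g. with a WordPiece-style "##" prefix; it is not guaranteed to fix the example above):

    from tokenizers import Tokenizer, decoders

    tokenizer = Tokenizer.from_file("tokenizer-wiki.json")
    # Merge "##"-prefixed continuation tokens instead of joining every
    # token with a space (only meaningful if the vocabulary uses that convention)
    tokenizer.decoder = decoders.WordPiece(prefix="##")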

    opened by ZhiYuanZeng 4
  • Is there any support for 'google/tapas-mini-finetuned-wtq' tokenizer?


    I'm trying to run a tokenizer in Java and eventually compile it to run on Android for an open-domain question answering project. I'm wondering why 'google/tapas-mini-finetuned-wtq' doesn't work with DeepJavaLibrary; for more popular models the tokenizer works. I'm assuming there is no fast tokenizer for TAPAS, so I was wondering if anyone had any advice on how to go about running the TAPAS tokenizer and model on Android/Java?

    opened by memetrusidovski 4
  • OpenSSL internal error when importing tokenizers module


    When importing tokenizers 0.13.2 or 0.13.1 in a FIPS-mode-enabled environment on Red Hat Enterprise Linux 8.6 (Ootpa), we see this error:

    sh-4.4# python3 -c "import tokenizers"
    fips.c(145): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
    Aborted (core dumped)
    

    Additional info:

    No errors when using tokenizers==0.13.0 or tokenizers==0.11.4
    Python 3.8.13
    OpenSSL 1.1.1g FIPS  21 Apr 2020 or OpenSSL 1.1.1k  FIPS 25 Mar 2021
    
    opened by wai25 3
Releases(v0.13.2)
  • v0.13.2(Nov 7, 2022)

  • python-v0.13.2(Nov 7, 2022)

  • node-v0.13.2(Nov 7, 2022)

  • v0.13.1(Oct 6, 2022)

  • python-v0.13.1(Oct 6, 2022)

  • node-v0.13.1(Oct 6, 2022)

  • python-v0.13.0(Sep 21, 2022)

    [0.13.0]

    • [#956] PyO3 version upgrade
    • [#1055] M1 automated builds
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

    Source code(tar.gz)
    Source code(zip)
  • v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1009] unstable_wasm feature to support building on Wasm (it's unstable!)
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

    Source code(tar.gz)
    Source code(zip)
  • node-v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility
    Source code(tar.gz)
    Source code(zip)
  • python-v0.12.1(Apr 13, 2022)

  • v0.12.0(Mar 31, 2022)

    [0.12.0]

    Bump minor version because of a breaking change.

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own. tokenizers should be error free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in vocab (ConvBert). Yell warnings instead, but stop panicking.

    • [#961] Added link for Ruby port of tokenizers

    • [#960] Feature gate for cli and its clap dependency

    Source code(tar.gz)
    Source code(zip)
  • python-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own. tokenizers should be error free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in vocab (ConvBert). Yell warnings instead, but stop panicking.

    • [#962] Fix tests for python 3.10

    • [#961] Added link for Ruby port of tokenizers

    Source code(tar.gz)
    Source code(zip)
  • node-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change. Using 0.12 to match other bindings.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own. tokenizers should be error free.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in vocab (ConvBert). Yell warnings instead, but stop panicking.

    • [#961] Added link for Ruby port of tokenizers

    Source code(tar.gz)
    Source code(zip)
  • v0.11.2(Feb 28, 2022)

  • python-v0.11.6(Feb 28, 2022)

  • node-v0.8.3(Feb 28, 2022)

  • python-v0.11.5(Feb 16, 2022)

  • v0.11.1(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
    Source code(tar.gz)
    Source code(zip)
  • python-v0.11.3(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
    Source code(tar.gz)
    Source code(zip)
  • node-v0.8.2(Jan 17, 2022)

  • node-v0.8.1(Jan 17, 2022)

  • python-v0.11.4(Jan 17, 2022)

  • python-v0.11.2(Jan 4, 2022)

  • python-v0.11.1(Dec 28, 2021)

  • python-v0.11.0(Dec 24, 2021)

    Fixed

    • [#585] Conda version should now work on old CentOS
    • [#844] Fixing interaction between is_pretokenized and trim_offsets.
    • [#851] Doc links

    Added

    • [#657]: Add SplitDelimiterBehavior customization to Punctuation constructor
    • [#845]: Documentation for Decoders.

    Changed

    • [#850]: Added a feature gate to enable disabling http features
    • [#718]: Fix WordLevel tokenizer determinism during training
    • [#762]: Add a way to specify the unknown token in SentencePieceUnigramTokenizer
    • [#770]: Improved documentation for UnigramTrainer
    • [#780]: Add Tokenizer.from_pretrained to load tokenizers from the Hugging Face Hub
    • [#793]: Saving a pretty JSON file by default when saving a tokenizer
    Source code(tar.gz)
    Source code(zip)
  • node-v0.8.0(Sep 2, 2021)

    BREAKING CHANGES

    • Many improvements on the Trainer (#519). The files must now be provided first when calling tokenizer.train(files, trainer).

    Features

    • Adding the TemplateProcessing
    • Add WordLevel and Unigram models (#490)
    • Add nmtNormalizer and precompiledNormalizer normalizers (#490)
    • Add templateProcessing post-processor (#490)
    • Add digitsPreTokenizer pre-tokenizer (#490)
    • Add support for mapping to sequences (#506)
    • Add splitPreTokenizer pre-tokenizer (#542)
    • Add behavior option to the punctuationPreTokenizer (#657)
    • Add the ability to load tokenizers from the Hugging Face Hub using fromPretrained (#780)

    Fixes

    • Fix a bug where long tokenizer.json files would be incorrectly deserialized (#459)
    • Fix RobertaProcessing deserialization in PostProcessorWrapper (#464)
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.3(May 24, 2021)

    Fixed

    • [#686]: Fix SPM conversion process for whitespace deduplication
    • [#707]: Fix stripping strings containing Unicode characters

    Added

    • [#693]: Add a CTC Decoder for Wav2Vec2 models

    Removed

    • [#714]: Removed support for Python 3.5
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.2(Apr 5, 2021)

    Fixed

    • [#652]: Fix offsets for Precompiled corner case
    • [#656]: Fix BPE continuing_subword_prefix
    • [#674]: Fix Metaspace serialization problems
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.1(Feb 4, 2021)

    Fixed

    • [#616]: Fix SentencePiece tokenizers conversion
    • [#617]: Fix offsets produced by Precompiled Normalizer (used by tokenizers converted from SPM)
    • [#618]: Fix Normalizer.normalize with PyNormalizedStringRefMut
    • [#620]: Fix serialization/deserialization for overlapping models
    • [#621]: Fix ByteLevel instantiation from a previously saved state (using __getstate__())
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.0(Jan 12, 2021)

    Added

    • [#508]: Add a Visualizer for notebooks to help understand how the tokenizers work
    • [#519]: Add a WordLevelTrainer used to train a WordLevel model
    • [#533]: Add support for conda builds
    • [#542]: Add Split pre-tokenizer to easily split using a pattern
    • [#544]: Ability to train from memory. This also improves the integration with datasets
    • [#590]: Add getters/setters for components on BaseTokenizer
    • [#574]: Add fuse_unk option to SentencePieceBPETokenizer

    Changed

    • [#509]: Automatically stubbing the .pyi files
    • [#519]: Each Model can return its associated Trainer with get_trainer()
    • [#530]: The various attributes on each component can now be read and set (e.g. tokenizer.model.dropout = 0.1)
    • [#538]: The API Reference has been improved and is now up-to-date.

    Fixed

    • [#519]: During training, the Model is now trained in-place. This fixes several bugs that forced reloading the Model after training.
    • [#539]: Fix BaseTokenizer enable_truncation docstring
    Source code(tar.gz)
    Source code(zip)
Owner
Hugging Face
Solving NLP, one commit at a time!