🧪 Cutting-edge experimental spaCy components and features

Overview

spacy-experimental: Cutting-edge experimental spaCy components and features

This package includes experimental components and features for spaCy v3.x, for example model architectures, pipeline components and utilities.


Installation

Install with pip:

python -m pip install -U pip setuptools wheel
python -m pip install spacy-experimental

Using spacy-experimental

Components and features may be modified or removed in any release, so always specify the exact version as a package requirement if you're experimenting with a particular component, e.g.:

spacy-experimental==0.147.0

Then you can add the experimental components to your config or import from spacy_experimental:

[components.experimental_edit_tree_lemmatizer]
factory = "experimental_edit_tree_lemmatizer"

Components

Edit tree lemmatizer

[components.experimental_edit_tree_lemmatizer]
factory = "experimental_edit_tree_lemmatizer"
# token attr to use as backoff when the predicted trees are not applicable; null to leave unset
backoff = "orth"
# prune trees that are applied less than this frequency in the training data
min_tree_freq = 2
# whether to overwrite existing lemma annotation
overwrite = false
scorer = {"@scorers":"spacy.lemmatizer_scorer.v1"}
# try to apply at most the k most probable edit trees
top_k = 1
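
The same settings can also be passed as a config dict when adding the component directly in Python; a minimal sketch (the component still has to be trained, or sourced from a trained pipeline, before it produces lemmas):

import spacy

nlp = spacy.blank("en")
# The factory is registered by the installed spacy-experimental package
nlp.add_pipe(
    "experimental_edit_tree_lemmatizer",
    config={"backoff": "orth", "min_tree_freq": 2, "overwrite": False, "top_k": 1},
)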

Trainable character-based tokenizers

Two trainable tokenizers represent tokenization as a sequence tagging problem over individual characters and use the existing spaCy tagger and NER architectures to perform the tagging.

In the spaCy pipeline, a simple "pretokenizer" is applied as the pipeline tokenizer to split each doc into individual characters and the trainable tokenizer is a pipeline component that retokenizes the doc. The pretokenizer needs to be configured manually in the config or with spacy.blank():

nlp = spacy.blank(
    "en",
    config={
        "nlp": {
            "tokenizer": {"@tokenizers": "spacy-experimental.char_pretokenizer.v1"}
        }
    },
)

Note that in the process of retokenizing, the tagger-based tokenizer currently resets any existing tag annotation and the NER-based tokenizer resets any existing entity annotation.

Character-based tagger tokenizer

In the tagger version experimental_char_tagger_tokenizer, the tagging problem is represented internally with character-level tags for token start (T), token internal (I), and outside a token (O). This representation comes from Elephant: Sequence Labeling for Word and Sentence Segmentation (Evang et al., 2013).

This is a sentence.
TIIIOTIOTOTIIIIIIIT

With the option annotate_sents, S replaces T for the first token in each sentence and the component predicts both token and sentence boundaries.

This is a sentence.
SIIIOTIOTOTIIIIIIIT
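
For illustration (this is not the component's internal code), the character tags for an already tokenized Doc can be reconstructed from the token boundaries:

import spacy

# Illustration only: derive character-level S/T/I/O tags from token boundaries
def char_tags(doc, annotate_sents=True):
    tags = ["O"] * len(doc.text)
    for token in doc:
        sent_start = annotate_sents and token.is_sent_start
        tags[token.idx] = "S" if sent_start else "T"
        for i in range(token.idx + 1, token.idx + len(token.text)):
            tags[i] = "I"
    return "".join(tags)

nlp = spacy.blank("en")
doc = nlp("This is a sentence.")
print(char_tags(doc, annotate_sents=False))  # TIIIOTIOTOTIIIIIIIT
print(char_tags(doc, annotate_sents=True))   # SIIIOTIOTOTIIIIIIIT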

A config excerpt for experimental_char_tagger_tokenizer:

[nlp]
pipeline = ["experimental_char_tagger_tokenizer"]
tokenizer = {"@tokenizers":"spacy-experimental.char_pretokenizer.v1"}

[components]

[components.experimental_char_tagger_tokenizer]
factory = "experimental_char_tagger_tokenizer"
annotate_sents = true
scorer = {"@scorers":"spacy-experimental.tokenizer_senter_scorer.v1"}

[components.experimental_char_tagger_tokenizer.model]
@architectures = "spacy.Tagger.v1"
nO = null

[components.experimental_char_tagger_tokenizer.model.tok2vec]
@architectures = "spacy.Tok2Vec.v2"

[components.experimental_char_tagger_tokenizer.model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = 128
attrs = ["ORTH","LOWER","IS_DIGIT","IS_ALPHA","IS_SPACE","IS_PUNCT"]
rows = [1000,500,50,50,50,50]
include_static_vectors = false

[components.experimental_char_tagger_tokenizer.model.tok2vec.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 128
depth = 4
window_size = 4
maxout_pieces = 2

Character-based NER tokenizer

In the NER version experimental_char_ner_tokenizer, each token is represented as a TOKEN entity spanning its characters:

T	B-TOKEN
h	I-TOKEN
i	I-TOKEN
s	I-TOKEN
 	O
i	B-TOKEN
s	I-TOKEN
	O
a	B-TOKEN
 	O
s	B-TOKEN
e	I-TOKEN
n	I-TOKEN
t	I-TOKEN
e	I-TOKEN
n	I-TOKEN
c	I-TOKEN
e	I-TOKEN
.	B-TOKEN
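
Analogously to the tagger version, this encoding can be illustrated (not the component's internal code) by deriving character-level BIO tags from an already tokenized Doc:

import spacy

# Illustration only: each token becomes a TOKEN "entity" over its characters
def char_bio_tags(doc):
    tags = ["O"] * len(doc.text)
    for token in doc:
        tags[token.idx] = "B-TOKEN"
        for i in range(token.idx + 1, token.idx + len(token.text)):
            tags[i] = "I-TOKEN"
    return list(zip(doc.text, tags))

nlp = spacy.blank("en")
for char, tag in char_bio_tags(nlp("This is a sentence.")):
    print(repr(char), tag)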

A config excerpt for experimental_char_ner_tokenizer:

[nlp]
pipeline = ["experimental_char_ner_tokenizer"]
tokenizer = {"@tokenizers":"spacy-experimental.char_pretokenizer.v1"}

[components]

[components.experimental_char_ner_tokenizer]
factory = "experimental_char_ner_tokenizer"
scorer = {"@scorers":"spacy-experimental.tokenizer_scorer.v1"}

[components.experimental_char_ner_tokenizer.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = true
nO = null

[components.experimental_char_ner_tokenizer.model.tok2vec]
@architectures = "spacy.Tok2Vec.v2"

[components.experimental_char_ner_tokenizer.model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = 128
attrs = ["ORTH","LOWER","IS_DIGIT","IS_ALPHA","IS_SPACE","IS_PUNCT"]
rows = [1000,500,50,50,50,50]
include_static_vectors = false

[components.experimental_char_ner_tokenizer.model.tok2vec.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 128
depth = 4
window_size = 4
maxout_pieces = 2

The NER version does not currently support sentence boundaries, but it would be easy to extend using a B-SENT entity type.

Biaffine parser

A biaffine dependency parser, similar to the one proposed in Deep Biaffine Attention for Neural Dependency Parsing (Dozat & Manning, 2016; https://arxiv.org/abs/1611.01734). The parser consists of two parts: an edge predicter and an edge labeler. For example:

[components.experimental_arc_predicter]
factory = "experimental_arc_predicter"

[components.experimental_arc_labeler]
factory = "experimental_arc_labeler"

The arc predicter requires that a previous component (such as senter) sets sentence boundaries during training. Therefore, such a component must be added to annotating_components:

[training]
annotating_components = ["senter"]

The biaffine parser sample project provides an example biaffine parser pipeline.
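
Once both components are trained, the pipeline fills in the usual dependency attributes (Token.head and Token.dep_); a minimal usage sketch, assuming a pipeline trained from the sample project and saved to a hypothetical path ./training/model-best:

import spacy

nlp = spacy.load("./training/model-best")  # hypothetical path to a trained pipeline
doc = nlp("The biaffine parser predicts heads and labels.")
for token in doc:
    print(token.text, token.dep_, token.head.text)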

Architectures

None currently.

Other

Tokenizers

  • spacy-experimental.char_pretokenizer.v1: Tokenize a text into individual characters.

Scorers

  • spacy-experimental.tokenizer_scorer.v1: Score tokenization.
  • spacy-experimental.tokenizer_senter_scorer.v1: Score tokenization and sentence segmentation.

Bug reports and issues

Please report bugs in the spaCy issue tracker or open a new thread on the discussion board for other issues.

Older documentation

See the READMEs in earlier tagged versions for details about components in earlier releases.

Comments
  • Coref Components

    This is a continuation of https://github.com/explosion/spaCy/pull/7264, since we decided to add the coref components here first. It's still a work in progress.

    enhancement 
    opened by polm 15
  • Add experimental Span Suggesters

    This PR adds three new experimental suggester functions for the spancat component and a spaCy project showcasing how to use them in a config.cfg file.

    Subtree Suggester:

    • Uses annotations from the Tagger and Parser to suggest subtrees of individual tokens

    Chunk Suggester:

    • Uses annotations from the Tagger and Parser to suggest noun_chunks

    Sentence Suggester:

    • Uses sentence boundaries to suggest sentences

    These suggesters also come with the ngram functionality, which allows users to set a list of sizes for suggesting individual ngrams.

    The spaCy project covers:

    • How to source components from existing models
    • How to use frozen_components & annotating_components
    • How to use custom suggester functions registered in the registry
    enhancement 
    opened by thomashacker 12
  • Span Finder Suggester

    This PR adds a new experimental component for learning span boundaries and a custom suggester function for spancat. It further adds a spaCy project showcasing how to use the SpanFinder component on 3 different datasets (Healthsea, ToxicSpans, Genia) with 2 configurations (tok2vec & transformer). The project also makes it possible to train spancat with the ngram suggester and compare it to SpanFinder, using a custom evaluation script that calculates the performance and overall coverage of the suggester functions.

    Features

    • spaCy project for comparing SpanFinder vs Ngram
    • SpanFinder model
    • SpanFinder component
    • SpanFinder suggester
    • Unit tests for component, model and suggester
    enhancement 
    opened by thomashacker 10
  • Fix handling of small docs in coref

    Docs with one or zero tokens fail in the coref component. This doesn't have a fix yet, just a failing test. (There is also a test for the span resolver, which does not fail.)

    bug 
    opened by polm 2
  • Fix issue with resolving final token in SpanResolver

    The SpanResolver seems unable to include the final token in a Doc in output spans. It will even produce empty spans instead of doing so.

    This makes changes so that within the model span end indices are treated as inclusive, and converts them back to exclusive when annotating docs. This has been tested to work, though an automated test should be added.

    bug 
    opened by polm 2
  • Make coref entry points work without PyTorch

    Before this PR, in environments without PyTorch, using spacy-experimental could fail due to attempts to load entry points. This change makes it so that the types required for class definitions (torch.nn.Module and torch.Tensor) are stubbed to object when torch is not available.

    opened by polm 2
  • Fix device issue with indices in coref

    It looks like Torch 1.13.0 has some changes in the way devices are handled and can result in subtle errors in code that worked previously. This explicitly specifies a device in one place, and may resolve https://github.com/explosion/spaCy/issues/11734. For another example of this issue, see https://github.com/pytorch/pytorch/issues/85450.

    The core problem is that a CPU tensor is being indexed using a non-CPU tensor.

    Leaving this as a draft in case it doesn't resolve the issue, which could happen if we're doing the same thing somewhere else.

    bug 
    opened by polm 1
  • Add test step before PyTorch is installed

    spacy-experimental is supposed to be safe to load without PyTorch, even if large parts of it aren't functional, but that wasn't checked in tests. This adds a check for that by simply running the tests before installing PyTorch and again afterwards.

    Given the current state of master, which doesn't have #23, this should fail.

    opened by polm 1
  • `Coref`: Optimize `SpanResolver.set_annotations`

    Coerce scalar tensors to native Python integers to avoid comparison overhead.

    With the above change (and another optimization to SpanGroups.copy in a separate PR), we see a 90% reduction in the execution time of the set_annotations pipeline phase.


    opened by shadeMe 0
  • `Coref`: Optimize `create_gold_scores`

    Copy the entire mentions array to the CPU, create tuple keys on demand, and pre-allocate the output matrix as Floats2d. Also remove unnecessary casts in CoreferenceResolver.update.


    opened by shadeMe 0
  • MultiEmbed

    Embedding component that is the deterministic version of MultiHashEmbed: each in-vocabulary token is mapped to an index, while out-of-vocabulary tokens are mapped to a learned unknown vector.

    The mechanism to initialize MultiEmbed is a bit strange. The Model gets created first with dummy Embed layers. Then when init gets called MultiEmbed expects the model.attrs["tables"] to be already set, which provides the mapping from token attributes to indices. During initialization the dummy Embed layers get replaced by ones that adjust their sizes to the number of symbols in the tables.

    A helper callback is provided in set_attr.py that should be placed in the initialize.before_init section in the config. It can be used to set the tables for MultiEmbed.

    Currently, token_map.py is a script that has the structure of the usual spaCy init scripts.

    opened by kadarakos 0
  • Support lazy, recursive sentence splitting

    We use sentence splitting in the biaffine parser to keep the O(n^2) biaffine attention model tractable. However, since the sentence splitter makes errors, the parser may not have the correct head available.

    This change adds another splitting strategy as the preferred one. The goal of this strategy is to split up a Doc into pieces that are as large as possible given a maximum length n_max. This reduces the number of attachment errors caused by incorrect sentence splits, while providing an upper bound on complexity (O(n_max^2)).

    The algorithm works as follows (see the sketch below):

    • If the length |d| > n_max:
      • Find the highest-probability split in d according to senter.
      • Split d into d_1 and d_2 at that split point.
      • Recursively apply this algorithm to d_1 and d_2.
    • Otherwise: do nothing.

    Note: draft, requires functionality from PR https://github.com/explosion/spaCy/pull/11002, which targets spaCy v4.

    opened by danieldk 0
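
A minimal sketch of the recursive splitting strategy described in the comment above (illustrative only; plain token lists and per-position sentence-start probabilities stand in for the actual Doc and senter scores):

# Illustrative sketch: recursively split a piece at its most probable internal
# sentence boundary until every piece has at most n_max tokens.
def recursive_split(tokens, boundary_probs, n_max):
    if len(tokens) <= n_max:
        return [tokens]
    # Most probable sentence start, excluding position 0 (always a start)
    split = max(range(1, len(tokens)), key=lambda i: boundary_probs[i])
    left = recursive_split(tokens[:split], boundary_probs[:split], n_max)
    right = recursive_split(tokens[split:], boundary_probs[split:], n_max)
    return left + right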
Releases (v0.6.1)
  • v0.6.1 (Nov 4, 2022)

    • Coref: Docs of one or fewer tokens resulted in an error (#28).
    • Coref: resolved spans could not include the final token in a Doc (#27).
    • Biaffine parser: place tensors on the right device (#25).

    This release includes an updated trained pipeline for demonstration purposes. You can install it like this:

    pip install https://github.com/explosion/spacy-experimental/releases/download/v0.6.1/en_coreference_web_trf-3.4.0a2-py3-none-any.whl
    

    Downloads (wheel)

    Source code(tar.gz)
    Source code(zip)
    en_coreference_web_trf-3.4.0a2-py3-none-any.whl(467.55 MB)
  • v0.6.0 (Sep 28, 2022)

    • new coreference components (#17)

    ~~This release includes an experimental English coref pipeline. You can install the pipeline by downloading it from the assets in this release page, or install it directly with the following command:~~

    Update 2022-11-07: Some issues in the coref implementation have been fixed in the v0.6.1 release of this package. While the pipeline below can still be installed and will be left up for posterity, note that it should only be used with spacy-experimental 0.6.0; by default it will pull in the newer version of this package. If you want to use the package below (which is not recommended), be sure to pip install spacy-experimental==0.6.0.

    pip install https://github.com/explosion/spacy-experimental/releases/download/v0.6.0/en_coreference_web_trf-3.4.0a0-py3-none-any.whl
    

    For further information about the coref components, see the example project or the API documentation. We'll also be providing more detailed explanations in an upcoming blog post, video, and elsewhere.

    Downloads (wheel)

    Source code(tar.gz)
    Source code(zip)
    en_coreference_web_trf-3.4.0a0-py3-none-any.whl(467.57 MB)
  • v0.5.0 (Jun 10, 2022)

    • removed edit tree lemmatizer (#12, it's in core now)
    • biaffine parser updates (#9, #13)
    • add experimental Span Suggesters exploiting parser/tagger/sentence information (#11)
    • add SpanFinder: a new experimental component for learning span boundaries (#10)
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0 (Mar 18, 2022)

Owner
Explosion
A software company specializing in developer tools for Artificial Intelligence and Natural Language Processing