Coreferee

Coreference resolution for English, German and Polish, optimised for limited training data and easily extensible for further languages

Author: Richard Paul Hudson, msg systems ag

1. Introduction

1.1 The basic idea

Coreferences are situations where two or more words within a text refer to the same entity, e.g. John went home because he was tired. Resolving coreferences is an important general task within the natural language processing field.

Coreferee is a Python 3 library (tested with version 3.8.7) that is used together with spaCy (tested with version 3.0.5) to resolve coreferences within English, German and Polish texts. It is designed so that it is easy to add support for new languages. It uses a mixture of neural networks and programmed rules.

1.2 Getting started

1.2.1 English

Presuming you have already installed spaCy and one of the English spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install en

Note that:

  • the required command may be python rather than python3 on some operating systems;
  • in order to use the transformer-based spaCy model en_core_web_trf with Coreferee, you will need to install the spaCy model en_core_web_lg as well (see the explanation in section 1.4.2).

Then open a Python prompt (type python3 or python at the command line):

>>> import coreferee, spacy
>>> nlp = spacy.load('en_core_web_trf')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x000002DE8E9256D0>
>>>
>>> doc = nlp("Although he was very busy with his work, Peter had had enough of it. He and his wife decided they needed a holiday. They travelled to Spain because they loved the country very much.")
>>>
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc[16]._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
2: [He(16); wife(19)], they(21), They(26), they(31)
>>>
>>> doc._.coref_chains.resolve(doc[31])
[Peter, wife]
>>>

1.2.2 German

Presuming you have already installed spaCy and one of the German spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install de

Note that the required command may be python rather than python3 on some operating systems.

Then open a Python prompt (type python3 or python at the command line):

>>> import coreferee, spacy
>>> nlp = spacy.load('de_core_news_lg')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x0000026E84C63B50>
>>>
>>> doc = nlp("Weil er mit seiner Arbeit sehr beschäftigt war, hatte Peter genug davon. Er und seine Frau haben entschieden, dass ihnen ein Urlaub gut tun würde. Sie sind nach Spanien gefahren, weil ihnen das Land sehr gefiel.")
>>>
>>> doc._.coref_chains.print()
0: er(1), seiner(3), Peter(10), Er(14), seine(16)
1: Arbeit(4), davon(12)
2: [Er(14); Frau(17)], ihnen(22), Sie(29), ihnen(36)
3: Spanien(32), Land(38)
>>>
>>> doc[14]._.coref_chains.print()
0: er(1), seiner(3), Peter(10), Er(14), seine(16)
2: [Er(14); Frau(17)], ihnen(22), Sie(29), ihnen(36)
>>>
>>> doc._.coref_chains.resolve(doc[36])
[Peter, Frau]
>>>

1.2.3 Polish

Presuming you have already installed spaCy and one of the Polish spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install pl

Note that the required command may be python rather than python3 on some operating systems.

Then open a Python prompt (type python3 or python at the command line):

>>> import coreferee, spacy
>>> nlp = spacy.load('pl_core_news_lg')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x0000027304C63B50>
>>>
>>> doc = nlp("Ponieważ bardzo zajęty był swoją pracą, Janek miał jej dość. Postanowili z jego żoną, że potrzebują wakacji. Pojechali do Hiszpanii, bo bardzo im się ten kraj podobał.")
>>>
>>> doc._.coref_chains.print()
0: był(3), swoją(4), Janek(7), Postanowili(12), jego(14)
1: pracą(5), jej(9)
2: [Postanowili(12); żoną(15)], potrzebują(18), Pojechali(21), im(27)
3: Hiszpanii(23), kraj(30)
>>>
>>> doc[12]._.coref_chains.print()
0: był(3), swoją(4), Janek(7), Postanowili(12), jego(14)
2: [Postanowili(12); żoną(15)], potrzebują(18), Pojechali(21), im(27)
>>>
>>> doc._.coref_chains.resolve(doc[27])
[Janek, żoną]
>>>

1.3 Background information

Handling coreference resolution successfully requires training corpora that have been manually annotated with coreferences. The state of the art in coreference resolution is progressing rapidly, but it is largely focussed on techniques that require more training data than is available for most languages and software developers. CoNLL 2012, the most widely used training corpus, has the following restrictions:

  • CoNLL 2012 covers English, Chinese and Arabic; there is nothing of comparable size for most other languages. For example, the corpus we used to train Coreferee for German is around a tenth of the size of CoNLL 2012;

  • CoNLL 2012 is not publicly available, and its license precludes non-members of the Linguistic Data Consortium from making commercial use of models trained with it.

Earlier versions of spaCy had an extension, Neuralcoref, that was excellent but that was never made publicly available for any language other than English. The aim of Coreferee, on the other hand, is to get coreference resolution working for a variety of languages: our focus is less on necessarily achieving the best possible precision and recall for English than on enabling the functionality to be reproduced for new languages as easily and as quickly as possible. Because training data is in such short supply for most languages and is very effort-intensive to produce, it is important to use what is available as effectively as possible.

There are three essential strategies that human readers employ to recognise coreferences within a text:

  1. Hard grammatical rules that completely preclude entities within a text from coreferring, e.g. The house stood tall. They went on walking. (the plural they cannot refer back to the singular house). Such rules play an especially important role in languages that have grammatical gender, which includes most continental European languages.

  2. Pragmatic tendencies, e.g. a word that begins a sentence and is a grammatical subject is more likely to be picked up by a pronoun in the following sentence than a word that stands in the middle of a sentence as part of a prepositional phrase.

  3. Semantic restrictions, i.e. which entities can realistically do what to which entities in the world being described. For example, in the sentence The child saddled her up, a reader's experience of the world will make it clear that her must refer to a horse.

With unlimited training data, it would be possible to train a system to employ all three strategies effectively from first principles using word vectors. The features of Coreferee that allow effective learning with the limited training data that is available are:

  • Strategy 1) is covered by hardcoded rules for each language that the system is then not required to learn from the training data. Because detailed knowledge of the grammar of a specific natural language is a separate skill set from knowledge of machine learning, the two concerns have been fully separated in Coreferee: rules are covered in a separate module from tendencies. This means that a model for a new language can be generated by a competent Python programmer with no knowledge of machine learning or neural networks;

  • Because the pragmatic tendencies for strategy 2) are very complex and only partially understood by linguists, machine learning and neural networks represent the only realistic way of tackling them. In order to reduce the amount of training data required for neural networks to learn effectively, the syntactic and morphological information supplied by the spaCy models, which have typically been trained with considerably more training data than will be available for coreference resolution, is used as input to neural networks alongside the standard word vectors.

  • Especially with limited training data but probably even with the largest available training datasets, it is unlikely that a system will learn more than the very simplest tendencies for strategy 3). However, making word vectors available to neural networks ensures that Coreferee can make use of whatever tendencies are discernible.

Coreferee started life to assist the Holmes project, which is used for information extraction and intelligent search. Coreferee is in no way dependent on Holmes, but this original aim has led to several design decisions that may seem somewhat atypical. Several of them could easily be altered by someone with a requirement to do so:

  • A mention within Coreferee does not consist of a span, but rather of a single token or of a list of tokens that stand in a coordination relationship to one another.

  • Coreferee does not capture coreferences that are unambiguously evident from the structure of a sentence. For example, the identity of he and doctor in the sentence He was a doctor is not reported by Coreferee because it can easily be derived from a simple analysis of the copular structure of the phrase.

  • Repetitions of first- and second-person pronouns (I was tired. I went home) are not captured as they add no value either for information extraction or for intelligent search.

  • Coreferee focusses heavily on anaphors (for English: pronouns). There is only relatively limited capture of coreference between noun phrases, and it is entirely rule-based. (In turn, however, this serves the aim of working with limited training data: noun-phrase coreference is a more exacting task than anaphor resolution.)

  • Because search performance is much more important for Holmes than document parsing performance, Coreferee performs all analysis eagerly as each document passes through the pipe.

1.4 Facts and figures

1.4.1 Covered relevant linguistic features

English (ISO 639-1: en)
  • Pronominal anaphor expression: My friend came in. He was happy.
  • Verbal anaphor expression: -
  • Prepositional anaphor expression: -
  • Agreement classes: three singular (natural genders) and one plural class
  • Conjunctive coordination expression: Peter and Mary
  • Comitative coordination expression: -

German (ISO 639-1: de)
  • Pronominal anaphor expression: Mein Freund kam rein. Er war glücklich.
  • Verbal anaphor expression: -
  • Prepositional anaphor expression: Ich benutzte das Auto und hatte damit einige Probleme.
  • Agreement classes: three singular (grammatical genders) and one plural class
  • Conjunctive coordination expression: Peter und Maria
  • Comitative coordination expression: -

Polish (ISO 639-1: pl)
  • Pronominal anaphor expression: Wszedł mój kolega. On był szczęśliwy.
  • Verbal anaphor expression: Wszedł mój kolega. Szczęśliwy był. (see note 1)
  • Prepositional anaphor expression: - (see note 2)
  • Agreement classes: three singular (grammatical genders) and two plural (natural genders) classes
  • Conjunctive coordination expression: Piotr i Kasia
  • Comitative coordination expression: 1) Piotr z Kasią przyszli; 2) Widziałem Piotra i przyszli z Kasią
  1. Only subject zero anaphors are covered. Object zero anaphors (e.g. Wypiłeś wodę? Tak, wypiłem.) are not in scope because they are mainly used colloquially and do not normally occur in the types of text for which Coreferee is primarily designed. Handling them would require creating or locating a detailed dictionary of verb valencies.

  2. Polish has a restricted use of anaphoric prepositions in some formal registers, e.g. Skończyło się to dlań smutno. Because the Polish spaCy models were trained on news texts, they do not recognise such prepositions, meaning that Coreferee cannot capture them either.

1.4.2 Model performance

English (ISO 639-1: en); training corpora: ParCor/LitBank; total words in training corpora: 393564
  • *_trf models: 2967 anaphors in 20%; accuracy 83.52%
  • *_lg models: 2903 anaphors in 20%; accuracy 83.98%
  • *_md models: 2907 anaphors in 20%; accuracy 83.21%
  • *_sm models: 2878 anaphors in 20%; accuracy 82.49%

German (ISO 639-1: de); training corpora: ParCor; total words in training corpora: 164300
  • *_trf models: -
  • *_lg models: 625 anaphors in 20%; accuracy 77.28%
  • *_md models: 620 anaphors in 20%; accuracy 77.10%
  • *_sm models: 625 anaphors in 20%; accuracy 76.00%

Polish (ISO 639-1: pl); training corpora: PCC; total words in training corpora: 548268
  • *_trf models: -
  • *_lg models: 1553 anaphors in 20%; accuracy 72.12%
  • *_md models: 1521 anaphors in 20%; accuracy 71.07%
  • *_sm models: 1383 anaphors in 20%; accuracy 70.21%

Coreferee produces a range of neural-network models for each language corresponding to the various spaCy models for that language. The neural network inputs include word vectors. With _sm (small) models, both spaCy and Coreferee use context-sensitive tensors as an alternative to word vectors. _trf (transformer-based) models, on the other hand, do not use or offer word vectors at all. To remedy this problem, the model configuration files (config.cfg in the directory for each language) allow a vectors model to be specified for use when a main model does not have its own vectors. Coreferee then combines the linguistic information generated by the main model with vector information returned for the individual words in each document by the vectors model.

Because the Coreferee models are rather large (70MB-80MB for the group of models for a given language) and because many users will only be interested in one language, the group of models for a given language is installed using python3 -m coreferee install as demonstrated in the introduction. All Coreferee models are more or less the same size; a larger spaCy model does not equate to a larger Coreferee model. As the figures above demonstrate, the accuracy of Coreferee corresponds closely to the size of the underlying spaCy model, and users are urged to use the larger spaCy models. It is in any case unclear whether there is a situation in which it would make sense to use Coreferee with an _sm model, as the Coreferee model would then be considerably larger than the spaCy model!

Assessing and comparing the precision and recall of anaphor resolution algorithms is notoriously difficult. For one thing, two human annotators of the same data will not always agree (and, indeed, there are some cases where Coreferee and a training annotator disagree and Coreferee's interpretation seems the more plausible!). And the same algorithm may perform with wildly different accuracies on different test documents depending on how clearly the documents are written and how often there are competing interpretations of individual anaphors.

Because Coreferee decides where there are anaphors to resolve (as opposed to what to resolve them to) in a purely rule-based fashion and because there is not necessarily a perfect correspondence between the types of anaphor these rules are aiming to capture and the types of anaphor covered by any given training corpus, a recall measure would not be meaningful. Instead, we compare the performance between spaCy models — and, during tuning, between different hyperparameter values — by counting the total number of anaphors that the rules find within the test documents as parsed by the spaCy model being used and that are also annotated with a coreference within the training data. The accuracy then expresses the percentage of these anaphors for which the coreference annotated by the corpus author is part of the chain(s) suggested by Coreferee. In situations where the training data specifies a chain C->B->A and B is a type of coreference that Coreferee is not aiming to capture, C->A is used as a valid training reference.

Assessing the performance of a model requires test data that was not used for training. At the same time, however, Coreferee is explicitly designed for use in situations where training data is at a premium, and it seems a shame to waste the learning opportunity offered by specific training documents just to assess a model a single time. To enable valid testing and at the same time to maximise the use of training data, each model is trained twice. On the first run, around 80% of the data is used for training and the remaining 20% for testing. (In practice, these percentages can vary somewhat because individual documents cannot be split between the two groups.) This first model is then discarded and a second training run is carried out with the available data in its entirety. Because it is based on more training data, the performance of this second model can be presumed to be at least as good as the measured performance of the first model; the obvious drawback, however, is that there is no way of verifying this.

Since coreference between noun phrases is restricted to a small number of cases captured by simple rules, the model assessment figures presented here refer solely to anaphor resolution. When anaphor resolution accuracy is being assessed for a test document, noun pairs are detected and added to chains according to the standard rules, but they do not feature in the accuracy figures. On some rare occasions, however, they may have an indirect effect on accuracy by affecting the semantic considerations that determine which anaphors can be added to which chains.

Note that Total words in training corpora in the table above refers to 100% of the available data for each language, while the Anaphors in 20% columns specify the number of anaphors found in the roughly 20% of this data that is used for model assessment.

2. Interacting with the data model

Coreferee generates Chain objects where each chain is an ordered collection of Mention objects that have been analysed as referring to the same entity. Each mention holds references to one or more spaCy token indexes; a chain can have a maximum of one mention with more than one token (most often its leftmost mention). A given token index occurs in a maximum of two mentions; if it belongs to two mentions the mentions will belong to different chains and one of the mentions will contain multiple tokens. All chains that refer to a given Doc or Token object are managed on a ChainHolder object which is accessed via ._.coref_chains. Reproducing part of the example from the introduction:

>>> doc = nlp("Although he was very busy with his work, Peter had had enough of it. He and his wife decided they needed a holiday. They travelled to Spain because they loved the country very much.")
>>>
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc[16]._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
2: [He(16); wife(19)], they(21), They(26), they(31)
>>>

Chains and mentions can be navigated much as if they were lists:

>>> len(doc._.coref_chains)
4
>>> doc._.coref_chains[1].pretty_representation
'1: work(7), it(14)'
>>> len(doc._.coref_chains[1])
2
>>> doc._.coref_chains[1][1]
[14]
>>> len(doc._.coref_chains[1][1])
1
>>> doc._.coref_chains[1][1][0]
14
>>>
>>> for chain in doc._.coref_chains:
...     for mention in chain:
...             print(mention)
...
[1]
[6]
[9]
[16]
[18]
[7]
[14]
[16, 19]
[21]
[26]
[31]
[29]
[34]
>>>

A document with Coreferee annotations can be saved and loaded using the normal spaCy methods: the annotations survive the serialization and deserialization. To facilitate this, Coreferee does not store references to spaCy objects, but merely to token indexes. However, each class has a pretty representation designed for human consumption that contains information from the spaCy document and that is generated eagerly when the object is first instantiated. Additionally, the ChainHolder object has a print() method that prints its chains' pretty representations with one chain on each line:

>>> doc._.coref_chains
[0: [1], [6], [9], [16], [18], 1: [7], [14], 2: [16, 19], [21], [26], [31], 3: [29], [34]]
>>> doc._.coref_chains.pretty_representation
'0: he(1), his(6), Peter(9), He(16), his(18); 1: work(7), it(14); 2: [He(16); wife(19)], they(21), They(26), they(31); 3: Spain(29), country(34)'
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc._.coref_chains[0]
0: [1], [6], [9], [16], [18]
>>> doc._.coref_chains[0].pretty_representation
'0: he(1), his(6), Peter(9), He(16), his(18)'
>>>
>>> doc._.coref_chains[0][0]
[1]
>>> doc._.coref_chains[0][0].pretty_representation
'he(1)'
>>>
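To illustrate the serialization point made above, here is a minimal round trip using spaCy's standard byte serialization (the session continues from the examples above; the output shown assumes, as described, that the chains survive unchanged):

>>> from spacy.tokens import Doc
>>> doc_bytes = doc.to_bytes()
>>> doc2 = Doc(nlp.vocab).from_bytes(doc_bytes)
>>> doc2._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>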

Each chain has an index number that is unique within the document. It is displayed in the representations of Chain and ChainHolder and can also be accessed directly:

>>> doc._.coref_chains[2].index
2

Each chain can also return the index number of the mention within it that is most specific: noun phrases are more specific than anaphors and proper names more specific than common nouns:

>>> doc = nlp("He went to Spain. He loved the country. He often told his friends about it.")
>>> doc._.coref_chains.print()
0: He(0), He(5), He(10), his(13)
1: Spain(3), country(8), it(16)
>>>
>>> doc._.coref_chains[1].most_specific_mention_index
0
>>> doc._.coref_chains[1][doc._.coref_chains[1].most_specific_mention_index].pretty_representation
'Spain(3)'

This information is used as the basis for the resolve() method shown in the initial example: the method traverses multiple chains to find the most specific mention or mentions within the text that describe a given anaphor or noun phrase head.

3. How it works

3.1 General operation and rules

3.1.1 Anaphor pair analysis

For each language, methods are implemented that determine:

  • for each token, its dependent siblings, e.g. Jane is a dependent sibling of Peter in the phrase Peter and Jane;
  • for each token, whether the token is an anaphor (broadly speaking for English: a third-person pronoun);
  • for each token, whether the token heads an independent noun phrase that an anaphor could refer to;
  • for any independent-noun/anaphor or anaphor/anaphor pair within a text, whether or not semantic and syntactic constraints would permit coreference between the members of the pair. For example, there are no circumstances in which they and her could ever corefer within a text. When an entity has dependent siblings, the method is called twice, once with and once without the siblings. Possible coreferents are considered up to five sentences away from each anaphor, looking backwards through the text. The method returns 2 (coreference permitted), 1 (coreference unlikely but possible) or 0 (coreference impossible). Alongside the language-specific rules, a number of language-independent rules can lead to a 1 rather than a 2 analysis.

Each anaphor in a document emerges from an analysis using these methods with a list of elements to which it could conceivably refer. The list for each anaphor is scored using the neural ensemble and the possible referents are ordered by decreasing likelihood. Regardless of their neural ensemble score, any pairs with the rules analysis 1 (coreference unlikely but possible) are ordered behind pairs with the rules analysis 2 (coreference permitted).
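As a toy illustration of this ordering (all token indexes and scores below are invented), the candidates for a single anaphor could be sorted as follows:

# Each candidate is (token index, rules analysis, ensemble score);
# the values are invented for illustration.
candidates = [(9, 2, 0.61), (7, 1, 0.92), (4, 2, 0.43)]
# Rules analysis 2 always precedes rules analysis 1; within each group,
# candidates are ordered by decreasing ensemble score.
ordered = sorted(candidates, key=lambda c: (-c[1], -c[2]))
# ordered == [(9, 2, 0.61), (4, 2, 0.43), (7, 1, 0.92)]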

Note that anaphora is understood in a broad sense that includes cataphora, i.e. pronouns that refer forwards rather than backwards like the initial pronoun in the English example in the introduction. Language-independent rules are used to determine situations in which the syntactic relationship between two elements within the same sentence permits cataphora.

Replacing the neural ensemble scoring with a naive algorithm that always selects the closest potential referent for each anaphor with rules analysis 2 (or 1 if there is no 2) yields an accuracy of around 60% as opposed to the 84% reported above. This demonstrates the respective contribution of each processing strategy to the overall result and provides a useful benchmark for any further machine learning experiments.
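For reference, the naive baseline just described can be sketched as follows (a simplification that ignores cataphora; the function name is invented):

def naive_closest_referent(candidates):
    # candidates: (token index, rules analysis) pairs for the potential
    # referents preceding an anaphor; analysis is 2 or 1
    permitted = [c for c in candidates if c[1] == 2]
    if not permitted:
        permitted = [c for c in candidates if c[1] == 1]
    # the closest potential referent is the nearest preceding one,
    # i.e. the one with the highest token index
    return max(permitted, key=lambda c: c[0]) if permitted else None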

3.1.2 Noun pair detection

For each language the following are implemented:

  • a method that determines whether a noun phrase is indefinite, or, in languages that do not mark indefiniteness, whether it could be interpreted as being indefinite;
  • a method that determines whether a noun phrase is definite, or, in languages that do not mark definiteness, whether it could be interpreted as being definite;
  • a dictionary from named entity labels to common nouns that refer to members of each named entity class. For example, the English named entity class ORG maps to the nouns ['company', 'firm', 'organisation'].

This information is used in a purely rule-based fashion to determine probable coreference between pairs of noun phrases: broadly, definite noun phrases that do not contain additional new information refer back to indefinite or definite noun phrases with the same head word, and named entities are referred back to by the common nouns that describe their classes. Noun pairs can be a maximum of two sentences apart as opposed to the five sentences that apply to anaphoric references.
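The named-entity dictionary might have roughly the following shape (only the ORG entry appears in the text above; the other labels and nouns are illustrative assumptions):

# Illustrative sketch: maps spaCy named entity labels to common nouns
# that can refer back to members of each class. Only the ORG entry is
# documented above; the other entries are assumptions.
entity_noun_dictionary = {
    'ORG': ['company', 'firm', 'organisation'],
    'GPE': ['country', 'state', 'city'],
    'PERSON': ['person', 'man', 'woman'],
}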

3.1.3 Building the chains

Coreferee goes through each document in natural reading order from left to right building up chains of anaphors and independent noun phrases. For each anaphor, the highest scoring interpretation as suggested by the neural ensemble is preferred. However, because the semantic (but not the syntactic) restrictions on anaphoric reference apply between all pairs formed by members of a chain rather than merely between adjacent members, it may turn out that the highest scoring interpretation is not permissible because it would lead to a semantically inconsistent chain. The interpretation with the next highest score is then tried, and so on until no interpretations remain.

In the unusual situation that all suggested interpretations of a given anaphor have been found to be semantically impossible, it is likely that one of the interpretations of the preceding anaphors in the text was incorrect: authors do not normally use anaphors that do not refer to anything. Reading the text:

The woman looked down and saw Lesley. She stood up and greeted him.

most readers will initially understand she as referring to Lesley. Only when one reaches the end of the sentence does it become clear that Lesley must be a man and that she actually refers to the woman. A quick test shows that Coreferee is capable of handling such ambiguity:

>>> doc = nlp('The woman looked down and saw Lesley. She stood up and greeted her.')
>>> doc._.coref_chains.print()
0: woman(1), her(13)
1: Lesley(6), She(8)
>>>
>>> doc = nlp('The woman looked down and saw Lesley. She stood up and greeted him.')
>>> doc._.coref_chains.print()
0: woman(1), She(8)
1: Lesley(6), him(13)

This is achieved using a rewind: at a point in a text where no suitable interpretation can be found for an anaphor, alternative interpretations of preceding anaphors are investigated in an attempt to find an overall interpretation that fits.
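In algorithmic terms, the rewind amounts to backtracking over the scored interpretations. A minimal, self-contained sketch (all names are invented; the real implementation is considerably more involved):

def build_interpretation(scored_interpretations, consistent, chosen=()):
    # scored_interpretations: one list of candidate referents per anaphor,
    # ordered by decreasing score; consistent: callable that checks a new
    # choice against the choices already made
    i = len(chosen)
    if i == len(scored_interpretations):
        return list(chosen)  # every anaphor has a consistent interpretation
    for referent in scored_interpretations[i]:
        if consistent(chosen, referent):
            result = build_interpretation(scored_interpretations, consistent,
                                          chosen + (referent,))
            if result is not None:
                return result
    return None  # nothing fits: the caller rewinds to an earlier anaphor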

3.2 The neural ensemble

The likelihood scores for anaphoric pairs are calculated using an ensemble of five identical multilayer perceptrons, each using a rectified linear activation in the input and hidden layers and a sigmoid activation in the output layer. Each of the five networks outputs a probability between 0 and 1 for a given potential anaphoric pair, and the mean of the five probabilities is used as the score for that pair.

The inputs to each of the five networks consist of:

  1. A feature map for each member of the pair. As the first step in training, Coreferee goes through the entire training corpus and notes all the relevant morphological and syntactic information that relevant tokens, their syntactic head tokens and their syntactic children can have. This information is stored with the neural ensemble for each model as a feature table. The feature map for a given token (or list of tokens) is a one-hot representation with respect to the feature table.

  2. A position map for each member of the pair capturing such information as its position within its sentence and its depth within the dependency tree generated for its sentence.

  3. Vector squeezers for each member of the pair and, where existent, for the syntactic head of each member of the pair. The input to a vector squeezer is the vector or context-sensitive tensor for the spaCy token in question. A vector squeezer consists of three neural layers and outputs a representation that is only three neurons wide and that is fed into the rest of the network within the same layer as the other, non-vector inputs.

  4. A compatibility map capturing the relationship between the members of the pair. Alongside the distance separating them in words and in sentences, this includes the number of common features in their feature maps and the cosine similarity between their syntactic heads.

Using a vector squeezer has been consistently found to offer slightly better results either than feeding the full-width vectors into the network directly or than omitting them entirely. Possible intuitions that might explain this behaviour are: the reduced width forces the network to learn and attend to a constrained number of specific semantic features relevant to coreference resolution; and the reduced width limits the attention of the network on the raw vectors in a situation where the training data is insufficient to make effective use of them.

Perhaps somewhat unusually, when a vector is required to represent a coordinated phrase, the mean of the vectors of the individual coordinated tokens is used rather than the mean of the vectors of all the tokens in the coordinated span.
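Continuing the example from section 2, the vector for the mention [He(16); wife(19)] would thus be derived as follows (a sketch using spaCy token vectors):

import numpy as np

# Mean over the coordinated tokens "He" and "wife" only...
coordination_vector = np.mean([doc[16].vector, doc[19].vector], axis=0)
# ...rather than over every token in the span "He and his wife":
# np.mean([t.vector for t in doc[16:20]], axis=0)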

The structure shared by each of the five networks in the ensemble is shown in the attached diagram:

Structure of an ensemble member
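In code terms, an ensemble member might be sketched roughly as follows in Keras. This is a simplification, not the actual architecture: all layer widths are invented (the real widths derive from the feature table and the spaCy model in use), and a single squeezer over the concatenated pair vectors stands in for the per-token squeezers described above.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_ensemble_member(feature_width=50, position_width=10,
                          vector_width=300, compatibility_width=8):
    # One input per group described above; all widths are assumptions
    features = keras.Input(shape=(2 * feature_width,))
    positions = keras.Input(shape=(2 * position_width,))
    vectors = keras.Input(shape=(2 * vector_width,))
    compatibility = keras.Input(shape=(compatibility_width,))
    # Vector squeezer: three layers narrowing to a three-neuron output
    squeezed = layers.Dense(20, activation='relu')(vectors)
    squeezed = layers.Dense(10, activation='relu')(squeezed)
    squeezed = layers.Dense(3, activation='relu')(squeezed)
    # The squeezed output joins the non-vector inputs within the same layer
    joined = layers.Concatenate()([features, positions, squeezed,
                                   compatibility])
    hidden = layers.Dense(64, activation='relu')(joined)
    hidden = layers.Dense(32, activation='relu')(hidden)
    probability = layers.Dense(1, activation='sigmoid')(hidden)
    return keras.Model([features, positions, vectors, compatibility],
                       probability)

# Five identical members; the score for a pair is the mean of their outputs
members = [build_ensemble_member() for _ in range(5)]

def pair_score(inputs):
    return float(np.mean([m.predict(inputs, verbose=0) for m in members]))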

Cross-linguistically, four training epochs were found to offer the best results; adding more training epochs caused the accuracy to start to tail off again owing to overfitting. Training for all relevant spaCy models for a given language takes between one and two hours on a high-end laptop.

4. Adding support for a new language

One of the main design goals of Coreferee was to make it easy to add support for further languages. The prerequisites are:

  • you will need to know the grammar of the language you are adding well enough to make detailed decisions about which coreferences are normal, which are marginally possible and which are impossible;
  • you will need to be able to program in Python.

You should not need to get involved in the details of the neural ensemble; Coreferee should do that for you.

The steps involved are:

  1. Create a directory under coreferee/lang/ with the same structure as the existing language-specific directories; it is probably easiest to copy one of them.

  2. The file config.cfg lists the spaCy models for which you wish to generate Coreferee models. You will need to specify a separate vectors model for any of the spaCy models that lack vectors or context-dependent tensors of their own — see the English config.cfg for an example. Each config entry specifies a minimum (from_version) and maximum (to_version) spaCy model version number that the generated Coreferee model will support. During development, both numbers will normally refer to a single version number. Later, when an updated spaCy model version is brought out, testing will be required to see whether the existing Coreferee model still supports the new spaCy model version. If so, the maximum version number can be increased; if not, a new config entry will be necessary to accommodate the new Coreferee model that will then be required.
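A config entry might look roughly like this (a hypothetical sketch: from_version, to_version and the vectors-model setting are described above, but the exact section and key names should be checked against the existing English config.cfg):

[trf]
model = core_web_trf
from_version = 3.0.0
to_version = 3.0.0
vectors_model = core_web_lg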

  3. The file rules.py in the main code directory contains an abstract class RulesAnalyzer that must be implemented by a class LanguageSpecificRulesAnalyzer within a file called language_specific_rules.py in each language-specific directory. The abstract class RulesAnalyzer contains docstrings that specify for each abstract property and method the contract to which implementing classes should adhere. Looking at the existing language-specific rules is also likely to be helpful. The method is_potential_anaphor() is normally the most work to create: here it is probably worth looking at the existing English method for languages with natural gender or at the existing German method for languages with grammatical gender. (Polish has an unusually complex gender system, so the Polish example is unlikely to be helpful even as a basis for working with other Slavonic languages.)
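A skeleton for this step might begin as follows (the class name and import path follow the file layout described above; the method body is merely a hypothetical starting point, and the real contract is given by the docstrings in rules.py):

from coreferee.rules import RulesAnalyzer

class LanguageSpecificRulesAnalyzer(RulesAnalyzer):

    def is_potential_anaphor(self, token) -> bool:
        # Hypothetical first approximation: third-person pronouns only
        return token.pos_ == 'PRON' and '3' in token.morph.get('Person')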

  4. There are some situations where word lists can be helpful. If a list is placed in a file <name>.dat within the data directory under a language-specific directory, the contents will automatically be made available within the LanguageSpecificRulesAnalyzer for the language in question as a variable self.<name> that contains a list with one entry per line from the file; comments with # are supported. If you use a word list, please ensure it can be published under the Apache 2 license and give appropriate attribution within the language-specific directory in the LICENSE file and, where appropriate, in a COPYING file. An example of the file format is sketched below.
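For example, a word-list file might look as follows (the file name and entries are hypothetical):

# coreferee/lang/<ISO 639-1>/data/animal_nouns.dat
# lines starting with # are comments
dog
horse
cat

Its contents would then be available as self.animal_nouns within the rules analyzer.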

  5. Male and female names are managed on a cross-linguistic basis because there is no reason why one would not want e.g. a German female name to be recognised within an English text. Names are automatically made available to all RulesAnalyzer implementations as properties self.male_names, self.female_names, self.exclusively_male_names and self.exclusively_female_names. If you can locate a suitable names list for the language you are working on that is available under a suitable license, add the attribution to the LICENSE file under common/ and merge your names into the two files. Please tidy up the result so that the files are free of duplicates and in alphabetical order.

  6. Create a language-specific directory under tests/ with a file test_rules_<ISO 639-1>.py to test the rules you have written in steps 3-5. Although one of the corresponding files for one of the existing languages is likely to be the best starting point, you should also be sure to test any extra features specific to the language you are working on. The test tooling is designed to run each test against all spaCy models specified in config.cfg. At this stage in development, you will need to temporarily add a parameter add_coreferee=False to the call to get_nlps() in the setUp() method; otherwise, all tests will fail because the test tooling will attempt to add the as yet non-existent Coreferee model to the pipe (see the sketch below).
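A sketch of the temporary setUp() (the test scaffolding is abbreviated and the import path of get_nlps is an assumption):

import unittest

from tests.test_utils import get_nlps  # assumption: actual path may differ

class LanguageSpecificRulesTest(unittest.TestCase):

    def setUp(self):
        # Temporary while the Coreferee model does not yet exist, so that
        # the test tooling does not try to add it to the pipe
        self.nlps = get_nlps('en', add_coreferee=False)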

  7. Some tests may fail with one of the smaller spaCy models because it produces incorrect syntactic representations rather than because of any issue with your rule code. For such cases, a parameter excluded_nlps can be specified within a test method to prevent it from being executed with specific spaCy models.

  8. Locate a training corpus or corpora. Again, you should make sure that the resulting models can be published under the Apache 2 license. Add new loader class(es) for the corpus or corpora to the existing loader classes in the train/loaders.py file. Loader classes must implement the GenericLoader abstract class that is located at the top of this file. The job of a loader is to read a specific training corpus format and to create and annotate spaCy documents with coreferences marked within corpora of that format. All the data for a single training run should be placed in a single directory; if there are multiple types of training data loaded by different loaders, each loader will need to be able to recognise the data it is required to read by examining the names of the files within the directory. It is worth spending some time checking with print() statements that the loaders annotate as expected, otherwise the training step that follows has little chance of success!

  9. You are now ready to begin training. The training command must be issued from the coreferee/ root directory. Coreferee will place a zip file into <log-dir>. Alongside the accuracy for each model, the files in the zip file show the coreference chains produced for each test document as well as a list of incorrect annotations where the Coreferee interpretation differed from the one specified by the training corpus author — information that is invaluable for debugging and rules improvement. As an example, the training command for English is:

python3 -m coreferee train --lang en --loader ParCorLoader,LitBankANNLoader --data <training-data-dir> --log <log-dir>
  10. Once you are happy with your models, install them. The command must be issued from the coreferee/ root directory, otherwise Coreferee will attempt to download the models from GitHub, where they are not yet present:

python3 -m coreferee install <ISO 639-1>

  11. Before you attempt any regression tests that involve running Coreferee as part of the spaCy pipe, you must remove the add_coreferee=False parameter you added above. A setup where the parameter is present in one test file but absent in another will not work because the spaCy models are loaded once per test run.

  12. Again using one of the existing languages as a starting point, create a test_smoke_tests_<ISO 639-1>.py file in your test directory. The smoke tests are designed to make sure that the basic features of Coreferee are working properly for the language in question and should also cover any features that have posed a particular challenge while developing the rules.

  13. Run pylint on your language_specific_rules.py. There is no need to achieve a perfect score, but issues that can be easily remedied, such as overlong lines, should be addressed.

  14. Go through the documentation (README.md and SHORTREADME.md) adding information about the new language wherever the supported languages are listed in some way.

  15. Issue a pull request. We ask that you supply us with the zip file generated during training. Because this will contain a considerable amount of raw information from the training corpora, it will normally be preferable from a licensing viewpoint to send it out of band rather than attaching it to the pull request.

5. Open issues / requests for assistance

  1. At present Coreferee uses Keras with TensorFlow, which leads to the limitation that nlp.pipe() cannot be called with n_process > 1 with forked processes. It would be greatly preferable if Coreferee could be converted to use Thinc instead: this would get rid of this limitation and generally fit much better into the spaCy ecosystem.

  2. Because optimising parsing speed was not a priority in the project within which Coreferee came into being, Coreferee is written purely in Python; it would be helpful if somebody could convert it to Cython.

  3. There are almost certainly changes to the inputs and structure of the neural ensemble that would lead to improvements in accuracy, both cross-linguistically and for specific languages. The only caveat to bear in mind when trying out changes is that it should be possible for someone who does not understand neural networks to write rules for a new language. This means that Coreferee should detect necessary differences in the neural network behaviour between languages automatically rather than requiring the trainer to configure them.

  4. It is unclear at present why the accuracy for English is better than for German and why the accuracy for German is better than for Polish. One wholly speculative possibility is that the contents of the compatibility map are better suited to pronominal than to verbal anaphora. This looks to be a promising avenue of research; understanding why the difference is occurring may well reveal a means of improving accuracy across the board.

  5. It would be useful if somebody could find a way of benchmarking Coreferee against other coreference resolution solutions, especially for English. One problem this would probably present is that using a benchmark necessitates a normative scope where a system aims to find exactly those types of coreference marked within the benchmark corpus, whereas the scope of Coreferee was determined by project requirements.

Comments
  • Cannot install on Apple Silicon

    Cannot install on Apple Silicon

    I am trying to install this library on macOS 11 on an Apple Silicon Mac. The requirements seem to conflict no matter what I try.

    The following are all the things I've tried. It would be great if you provided a way to install from source because I got M1 tensorflow and spacy to work just fine. I just don't know how to install coreferee and the respective models with the libraries that I do have available.

    INFO: pip is looking at multiple versions of coreferee to determine which version is compatible with other requirements. This could take a while.
    ERROR: Could not find a version that satisfies the requirement h5py~=3.1.0 (from tensorflow-macos) (from versions: 2.2.1, 2.3.0b1, 2.3.0, 2.3.1, 2.4.0b1, 2.4.0, 2.5.0, 2.6.0, 2.7.0rc2, 2.7.0, 2.7.1, 2.8.0rc1, 2.8.0, 2.9.0rc1, 2.9.0, 2.10.0, 3.0.0rc1, 3.0.0, 3.1.0, 3.2.0, 3.2.1, 3.3.0, 3.4.0, 3.5.0, 3.6.0)
    ERROR: No matching distribution found for h5py~=3.1.0
    

    I tried building h5py from source using the home-brew version using HDF5_DIR=/opt/homebrew/Cellar/hdf5/1.12.1 pip3.9 install --no-binary=h5py h5py==3.1.0 because there was no prebuilt binary available. That worked.

    But installing carefree doesn't seem to see this version, and I wind up with the error above. I very much depend on this library. It would be great if we could get this working again. How can the issue be resolved?

    I think it's a problem with several libraries including tensorflow.

    EDIT: I tried installing the intel python3.9 hoping Rosetta would force everything to work under x86_64 emulation mode. Everything installed except for: python3 -m coreferee install en, which yielded "zsh: illegal hardware instruction".

    EDIT2: I noticed that 1.1.1 was explicitly for resolving Apple Silicon issues. Why am I experiencing these installation problems then? Is there something subtle that I am missing?

    I'm out of ideas.

    EDIT3: This is the official tensorflow instructions for Apple Silicon link , but the procedure requires Conda. I'd still like to know what is going wrong if you were successful in installing on Apple Silicon for v. 1.1.1. Spacy seems to be having trouble with transformers too.

    EDIT4: It took a while, but I got things working by just ignoring using pip and just downloading the repo and dependencies separately using whatever is recommended for mac:

    
    curl -O https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
    ./Miniforge3-MacOSX-arm64.sh
    conda config --set auto_activate_base false
    conda create --name my_env python=3.9
    conda activate my_env
    conda install rust 
    export CARGO_BUILD_TARGET="aarch64-apple-darwin"
    conda install -c conda-forge spacy=3.1.4
    
    python -m spacy download en_core_web_trf
    python -m spacy download en_core_web_lg
    
    conda install -c apple tensorflow-deps
    pip install tensorflow-macos
    pip install tensorflow-metal
    git clone https://github.com/msg-systems/coreferee.git
    mv ./coreferee ./coreferee_container
    cd ./coreferee_container
    python -m coreferee install en
    cd ..
    

    I hope this process becomes simpler. Also, hopefully you'll update to spacy 3.20.


    OLD: I get this error on macOS 11.3 on an M1 (ARM) machine when I try installing:

    pip3 install coreferee
    python3 -m coreferee install en
    
    ERROR: Cannot install coreferee==1.1.0, coreferee==1.1.1 and coreferee==1.1.2 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        coreferee 1.1.2 depends on tensorflow-macos~=2.6.0; platform_system == "Darwin"
        coreferee 1.1.1 depends on tensorflow-macos~=2.6.0; platform_system == "Darwin"
        coreferee 1.1.0 depends on tensorflow~=2.5.0
    
    To fix this you could try to:
    1. loosen the range of package versions you've specified
    2. remove package versions to allow pip attempt to solve the dependency conflict
    
    ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
    

    How can I resolve this?

    Also, I am using python 3.10, but the same thing happens with 3.9. I am also hoping Spacy 3.2 support is around the corner.

    EDIT: I specified an older version of coreferee (1.1.0), but now I get:

    ERROR: Could not find a version that satisfies the requirement tensorflow~=2.5.0 (from coreferee) (from versions: none)
    ERROR: No matching distribution found for tensorflow~=2.5.0
    

    It seems a lot of these package requirements don't exist. I really depend on this project. How can we get it working?

    EDIT: It looks like the issue is h5py. There's no pre-built wheel for it on ARM Macs. It's unclear how to install it from source for this purpose.

    opened by KTRosenberg 17
  • Use coreferee with spacy >3.1.2

    Use coreferee with spacy >3.1.2

    Hey there, I am having problems to use coreferee together with newest spacy version (3.1.2). To add a Language factory it is necassary to use Decorators as suggested by spacy.

    So I am not that experienced with programming and python as well. But for me I have to write a function where I initialize an object. Then it is possible to apply spacy Decorator @Language.factory on the function.

    I figured out to do this for different packages, except this. For example it works for spacy-langdetect like this: from spacy_langdetect import LanguageDetector @Language.factory("language_detector") def init_LanguageDetector(nlp, name): return LanguageDetector(language_detection_function=None)

    and afterwards I can add this new factory easily to the existing pipeline with: nlp.add_pipe()

    Is there a similar way to do this for coreferee, because snippets provided by the READ ME do not work..?

    Thanks in advance!

    opened by YannHau 8
  • Persistent Class TypeError

    Persistent Class TypeError

    With most documents of longer than a few sentences (news articles), I am getting a recurrent error:

    text = """
    France retains its centuries-long status as a global centre of art, science, and philosophy, says Ljubomir Geric. He also notes it
    hosts the world's fifth-largest number of UNESCO World Heritage Sites and is the leading tourist destination, 
    receiving over 89 million foreign visitors in 2018. France is a developed country with the world's 
    seventh-largest economy by nominal GDP, and the ninth-largest by PPP. In terms of aggregate household wealth, 
    it ranks fourth in the world. France performs well in international rankings of education, health care, 
    life expectancy, and human development. It remains a great power in global affairs, being one of the five 
    permanent members of the United Nations Security Council (UNSC) and an official nuclear-weapon state. France is a 
    founding and leading member of the European Union (EU) and the Eurozone, and a member of the Group of 7, North 
    Atlantic Treaty Organization (NATO), Organisation for Economic Co-operation and Development (OECD), and the 
    World Trade Organization (TWO).
    """
    doc = nlp(text)
    
    Unexpected error annotating document, skipping ....
    <class 'TypeError'>
    Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
      File "/opt/conda/envs/nlp/lib/python3.8/site-packages/coreferee/manager.py", line 110, in __call__
        self.annotator.annotate(doc)
      File "/opt/conda/envs/nlp/lib/python3.8/site-packages/coreferee/annotation.py", line 270, in annotate
        self.tendencies_analyzer.score(doc, self.keras_ensemble)
      File "/opt/conda/envs/nlp/lib/python3.8/site-packages/coreferee/tendencies.py", line 390, in score
        keras_inputs, scoring_necessary = self.prepare_keras_data([doc])
      File "/opt/conda/envs/nlp/lib/python3.8/site-packages/coreferee/tendencies.py", line 326, in prepare_keras_data
        self.get_vectors(potential_referred, doc)
      File "/opt/conda/envs/nlp/lib/python3.8/site-packages/coreferee/tendencies.py", line 263, in get_vectors
        this_object_vector = np.mean( np.array([t.vector for t in tokens]), axis=0)
      File "cupy/core/core.pyx", line 1188, in cupy.core.core.ndarray.__array__
    

    My numpy is 1.19.5. tensorflow 2.4.2. I wonder if this is what your issue about versions being too permissive was about? I'll try to replicate those more restricted installs.

    enhancement help wanted 
    opened by arnicas 8
  • projet ready for pull : support for french

    projet ready for pull : support for french

    Finally support for french is completed !

    List of the added features :

    • added /fr/ directory in /coreferee/lang/
      • language_specific_rules.py
      • config.cfg
      • data files
      • Licence file citing the ressources used
    • added /fr/ directory in /tests/
      • test_rules_fr.py
      • test_smoke_tests.py
    • added DEMOCRATConllLoader class in coreferee/training/loaders.py
    • added /fr/ directory in models : contains the trained models

    I'll the zip file of the training by mail.

    opened by Pantalaymon 6
  • How to clean up GPU memory after this runs?

    How to clean up GPU memory after this runs?

    The method is using all available GPUs and I notice that the GPU memory isn't cleaned up after it runs.

    Is there any way to:

    1. Specify what GPUs should be used (and also the max memory to consume)?
    2. Clear out the GPU memory after this runs?

    Thanks much. The results are pretty impressive.

    help wanted 
    opened by ohmeow 6
  • pip package update

    pip package update

    Please update package via pip instalation (ver. 1.1.0)... I got an error "could not find a version ....=1.1.0" and in generall, pip installed ver. 1.0.1

    • Thanks a lot for a new release.
    opened by AleksandrTulenkov 5
  • Failed copying input tensor

    Failed copying input tensor

    Hi, I get data for language spanish through of files *.conll, I tranform this data *.conll in format *.ann, and I try train coreferee with this data, after of change the rules for my langage. This data are 3.000 files approximately, but when I try train this model I get a error:

    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/creangel/info/image/coreferee/__main__.py", line 51, in <module>
        TrainingManager(
      File "/home/creangel/info/image/coreferee/training/train.py", line 409, in train_models
        self.train_model(config_entry_name, config_entry, temp_log_file)
      File "/home/creangel/info/image/coreferee/training/train.py", line 378, in train_model
        keras_ensemble = self.generate_keras_ensemble(
      File "/home/creangel/info/image/coreferee/training/train.py", line 219, in generate_keras_ensemble
        keras_history = model_generator.train_keras_model(training_docs, tendencies_analyzer,
      File "/home/creangel/info/image/coreferee/training/model.py", line 288, in train_keras_model
        #print('keras_inputs: ',keras_inputs)
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1134, in fit
        data_handler = data_adapter.get_data_handler(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py", line 1383, in get_data_handler
        return DataHandler(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py", line 1138, in __init__
        self._adapter = adapter_cls(
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py", line 230, in __init__
        x, y, sample_weights = _process_tensorlike((x, y, sample_weights))
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py", line 1031, in _process_tensorlike
        inputs = tf.nest.map_structure(_convert_numpy_and_scipy, inputs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/nest.py", line 869, in map_structure
        structure[0], [func(*x) for x in entries],
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/nest.py", line 869, in <listcomp>
        structure[0], [func(*x) for x in entries],
      File "/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py", line 1026, in _convert_numpy_and_scipy
        return tf.convert_to_tensor(x, dtype=dtype)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
        return target(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1430, in convert_to_tensor_v2_with_dispatch
        return convert_to_tensor_v2(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1436, in convert_to_tensor_v2
        return convert_to_tensor(
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1566, in convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_conversion_registry.py", line 52, in _default_conversion_function
        return constant_op.constant(value, dtype, name=name)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 271, in constant
        return _constant_impl(value, dtype, shape, name, verify_shape=False,
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
        return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 308, in _constant_eager_impl
        t = convert_to_eager_tensor(value, ctx, dtype)
      File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
        return ops.EagerTensor(value, ctx.device_name, dtype)
    
    tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized.
    

    I think that is the memory. I have a GPU of 8 GB.

    Note: I try with 100 files from my data and the train of coreferee work very well, but with a with 3000 or 200 files I get the error.

    Maybe, someone know about this error and which is the solution?. Thanks.

    opened by csgomezg0 4
  • Documentation website?

    Documentation website?

    Hi! I stumbled upon this repo after looking up tools for corefs, which ended up not working or were completely confusing. This one seems approachable. Is there a documentation website for coreferee?

    opened by XsongyangX 3
  • "zsh: illegal hardware instruction python -m coreferee install en" on MacOS 12.1 M1

    Hi there

    I tried installing coreferee after installing spaCy and it works well until I try the second of the install commands:

    python3 -m pip install coreferee python3 -m coreferee install en

    The error message sounds to me like there's an incompatibility between the software and my hardware.

    I used python 3.9.7 first, saw that someone said 3.8.0 should work but it gave me the same result there. Same with 3.6.9.

    Grateful for any pointers.

    opened by CarlosHartmann 3
  • spaCy 3.1, 3.2 models not supported

    spaCy 3.1, 3.2 models not supported

    I have tried to use this library with spaCy 3.1 and 3.2 models. The setup config says spacy>=3.1.0,<3.2.0 is supported, yet I get the ModelNotSupported error for both en_core_web_lg version 3.1.0 and en_core_web_trf version 3.1.0 models (both are installed). Coreferee en is also installed.

    import coreferee
    import spacy
    
    nlp = spacy.load("en_core_web_lg")
    nlp.add_pipe("coreferee")
    

    Info about spaCy

    • spaCy version: 3.1.4
    • Platform: Windows-10-10.0.19041-SP0
    • Python version: 3.8.8
    • Pipelines: en_core_web_lg (3.1.0), en_core_web_trf (3.1.0)

    Error trace

    ---------------------------------------------------------------------------
    ModelNotSupportedError                    Traceback (most recent call last)
    Input In [2], in <module>
          3 import spacy
          5 nlp = spacy.load("en_core_web_lg")
    ----> 6 nlp.add_pipe("coreferee")
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\spacy\language.py:787, in Language.add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate)
        779     if not self.has_factory(factory_name):
        780         err = Errors.E002.format(
        781             name=factory_name,
        782             opts=", ".join(self.factory_names),
       (...)
        785             lang_code=self.lang,
        786         )
    --> 787     pipe_component = self.create_pipe(
        788         factory_name,
        789         name=name,
        790         config=config,
        791         raw_config=raw_config,
        792         validate=validate,
        793     )
        794 pipe_index = self._get_pipe_index(before, after, first, last)
        795 self._pipe_meta[name] = self.get_factory_meta(factory_name)
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\spacy\language.py:670, in Language.create_pipe(self, factory_name, name, config, raw_config, validate)
        667 cfg = {factory_name: config}
        668 # We're calling the internal _fill here to avoid constructing the
        669 # registered functions twice
    --> 670 resolved = registry.resolve(cfg, validate=validate)
        671 filled = registry.fill({"cfg": cfg[factory_name]}, validate=validate)["cfg"]
        672 filled = Config(filled)
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\thinc\config.py:729, in registry.resolve(cls, config, schema, overrides, validate)
        720 @classmethod
        721 def resolve(
        722     cls,
       (...)
        727     validate: bool = True,
        728 ) -> Dict[str, Any]:
    --> 729     resolved, _ = cls._make(
        730         config, schema=schema, overrides=overrides, validate=validate, resolve=True
        731     )
        732     return resolved
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\thinc\config.py:778, in registry._make(cls, config, schema, overrides, resolve, validate)
        776 if not is_interpolated:
        777     config = Config(orig_config).interpolate()
    --> 778 filled, _, resolved = cls._fill(
        779     config, schema, validate=validate, overrides=overrides, resolve=resolve
        780 )
        781 filled = Config(filled, section_order=section_order)
        782 # Check that overrides didn't include invalid properties not in config
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\thinc\config.py:850, in registry._fill(cls, config, schema, validate, resolve, parent, overrides)
        847     getter = cls.get(reg_name, func_name)
        848     # We don't want to try/except this and raise our own error
        849     # here, because we want the traceback if the function fails.
    --> 850     getter_result = getter(*args, **kwargs)
        851 else:
        852     # We're not resolving and calling the function, so replace
        853     # the getter_result with a Promise class
        854     getter_result = Promise(
        855         registry=reg_name, name=func_name, args=args, kwargs=kwargs
        856     )
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\coreferee\manager.py:103, in CorefereeBroker.__init__(self, nlp, name)
        101 self.nlp = nlp
        102 self.pid = os.getpid()
    --> 103 self.annotator = CorefereeManager().get_annotator(nlp)
    
    File ~\.virtualenvs\lecontra-qSg6Vfcc\lib\site-packages\coreferee\manager.py:95, in CorefereeManager.get_annotator(nlp)
         93         keras_ensemble = keras.models.load_model(absolute_keras_model_filename)
         94         return Annotator(nlp, vectors_nlp, feature_table, keras_ensemble)
    ---> 95 raise ModelNotSupportedError(''.join((nlp.meta['lang'], '_', nlp.meta['name'],
         96     ' version ', nlp.meta['version'])))
    
    ModelNotSupportedError: en_core_web_lg version 3.1.0
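
    The check that raises this error builds its message from the loaded pipeline's metadata (see the manager.py frame above), so a quick way to confirm exactly which model identifier is being rejected (a diagnostic sketch) is:

    import spacy

    nlp = spacy.load("en_core_web_lg")
    # Coreferee joins these fields into the identifier shown in the error;
    # the combination must match a model version Coreferee was trained against.
    print(nlp.meta["lang"], nlp.meta["name"], nlp.meta["version"])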
    
    opened by BramVanroy 3
  • Rules Improvement for French

    Hello,

    As I will be using Coreferee in a new project, I am still working on improving the rules.

    I added a few more rules in lang/fr/language_rules.py as well as a few tests in tests/fr to make sure they work as expected. There are also some edits to the lang/fr/data files, which are used by the rules.

    Regarding the new rules, I don't know whether you plan to use the same rules for the spaCy-native solution you are developing, but I wanted to share that, on top of the language-specific rules for pairs of an anaphor with a noun or another anaphor, the system would greatly benefit from language-specific rules for noun-noun coreferring pairs: for instance, to prevent singular named entities (say, John Doe) from coreferring with plural nouns (say, the people) or gender-incompatible nouns. A minimal sketch of such a check appears below.
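
    A hypothetical illustration of such a noun-noun compatibility rule (the function name and logic here are invented for illustration and are not Coreferee's actual code), using spaCy's morphological features:

    from spacy.tokens import Token

    def numbers_compatible(noun1: Token, noun2: Token) -> bool:
        # Hypothetical rule: reject a noun-noun pair when both tokens carry
        # a Number morph feature and the values do not overlap (e.g. a
        # singular named entity vs. a plural common noun).
        numbers1 = set(noun1.morph.get("Number"))
        numbers2 = set(noun2.morph.get("Number"))
        if numbers1 and numbers2:
            return bool(numbers1 & numbers2)
        return True  # no evidence either way, so do not rule the pair out

    An analogous check on the Gender morph feature would cover the gender-incompatibility case.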

    opened by Pantalaymon 3
  • Updated french model

    Hi!

    I have made some edits to the rules to improve coreference resolution, but most of the work was updating the French rules to make them compatible with the spaCy 3.2 French model. I also retrained the model (on Thinc this time, thanks to your amazing work!) with those new rules and the spaCy 3.2 language model.

    How should I send you the models and training data?

    opened by Pantalaymon 0
Releases (v1.2.0)
  • v1.2.0 (May 6, 2022)

    • Removed the dependencies on TensorFlow and Keras, switching to Thinc as the neural network platform; this led to serialized models around 30% of the size of the old models, and removed the old limitation whereby nlp.pipe() could not be called with n_process > 1 with forked processes.
    • Implemented a softmax layer to select the best potential referent for each anaphor, as opposed to calculating independent scores for each pair.
    • Added matrix tests to support a variety of Python and spaCy versions, including spaCy 3.2 and spaCy 3.3.
    • Implemented a stable-random split into train and test corpora, as opposed to using the last 20% of loaded documents as the test corpus.
    • Improved the training script so that it remembers the model state at each epoch and chooses the best-performing state from the training history as the model to save.
    • Added the coreferee check command to enable performance measurement for an existing Coreferee model with a new spaCy model.

    Source code(tar.gz)
    Source code(zip)
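
    As an illustration of the multiprocessing limitation this release removes (a minimal sketch; the example texts are placeholders):

    import coreferee, spacy

    nlp = spacy.load('en_core_web_lg')
    nlp.add_pipe('coreferee')

    texts = ["Although he was tired, Peter kept working.",
             "Jane said she would call tomorrow."]

    # From v1.2.0 onwards, n_process > 1 also works with forked processes.
    for doc in nlp.pipe(texts, n_process=2):
        doc._.coref_chains.print()
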
  • v1.1.3 (Feb 11, 2022)

  • v1.1.2 (Nov 24, 2021)

  • v1.1.1 (Nov 11, 2021)

    • Changed the dependencies to allow Coreferee to run on the Apple M1 chipset
    • Sorted out a problem with the supported spaCy versions
    • Improved some of the tests
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0 (Aug 24, 2021)

    • Upgrade to Python 3.9 and spaCy 3.1
    • Fixing of minor issues in all three rule-sets
    • Regeneration of all models
    • Improvement of the Polish examples in section 1.4.1 to make them more pragmatically correct - many thanks to Małgorzata Styś for her valuable advice on this.
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1 (Apr 20, 2021)

  • v1.0.0 (Apr 19, 2021)

Owner
msg systems ag