🐍💯 pySBD (Python Sentence Boundary Disambiguation) is a rule-based sentence boundary detection library that works out-of-the-box.

Overview

pySBD: Python Sentence Boundary Disambiguation (SBD)

pySBD - python Sentence Boundary Disambiguation (SBD) - is a rule-based sentence boundary detection module that works out-of-the-box.

This project is a direct port of the Ruby gem Pragmatic Segmenter, which provides rule-based sentence boundary detection.

Highlights

'PySBD: Pragmatic Sentence Boundary Disambiguation', a short research paper, was accepted at the 2nd Workshop for Natural Language Processing Open Source Software (NLP-OSS) at EMNLP 2020.

Research Paper:

https://arxiv.org/abs/2010.09657

Recorded Talk:

Poster:

Install

pip install pysbd

Usage

  • Currently pySBD supports 22 languages.
import pysbd
text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False)
print(seg.segment(text))
# ['My name is Jonas E. Smith.', 'Please turn to p. 55.']
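
You can also ask the segmenter for character offsets. A minimal sketch, assuming the same example text: with char_span=True each result is a TextSpan carrying the sentence and its (start, end) offsets into the original string.

import pysbd
text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False, char_span=True)
for span in seg.segment(text):
    # each TextSpan exposes the sentence text plus its character offsets
    print(span.sent, span.start, span.end)

pySBD can also be used as a spaCy pipeline component: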
import spacy
from pysbd.utils import PySBDFactory

nlp = spacy.blank('en')

# explicitly adding component to pipeline
# (recommended - makes it more readable to tell what's going on)
nlp.add_pipe(PySBDFactory(nlp))

# or you can use it implicitly with keyword
# pysbd = nlp.create_pipe('pysbd')
# nlp.add_pipe(pysbd)

doc = nlp('My name is Jonas E. Smith. Please turn to p. 55.')
print(list(doc.sents))
# [My name is Jonas E. Smith., Please turn to p. 55.]
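
The snippet above targets spaCy v2.x. With spaCy v3 and later, add_pipe expects the string name of a registered component (see the spaCy-related comments further below). The following is a minimal sketch of the v3-style wiring, assuming a hypothetical component name "pysbd_sentences" and using char_span offsets to mark sentence starts:

import pysbd
import spacy
from spacy.language import Language

@Language.component("pysbd_sentences")  # hypothetical name, registered so add_pipe can find it
def pysbd_sentences(doc):
    seg = pysbd.Segmenter(language="en", clean=False, char_span=True)
    # character offsets at which pySBD starts a new sentence
    sent_starts = {span.start for span in seg.segment(doc.text)}
    for token in doc:
        token.is_sent_start = token.idx in sent_starts
    return doc

nlp = spacy.blank("en")
nlp.add_pipe("pysbd_sentences")
doc = nlp("My name is Jonas E. Smith. Please turn to p. 55.")
print(list(doc.sents))
# expected: [My name is Jonas E. Smith., Please turn to p. 55.]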

Contributing

If you want to contribute a new feature or language support, or have found text that pySBD segments incorrectly, please head to CONTRIBUTING.md to learn more, then follow these steps:

  1. Fork it ( https://github.com/nipunsadvilkar/pySBD/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Citation

If you use the pysbd package in your projects or research, please cite PySBD: Pragmatic Sentence Boundary Disambiguation.

@inproceedings{sadvilkar-neumann-2020-pysbd,
    title = "{P}y{SBD}: Pragmatic Sentence Boundary Disambiguation",
    author = "Sadvilkar, Nipun  and
      Neumann, Mark",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.15",
    pages = "110--114",
    abstract = "We present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical sentences even when the format and domain of the input text is unknown. In our work, we adapt the Golden Rules Set (a language specific set of sentence boundary exemplars) originally implemented as a ruby gem pragmatic segmenter which we ported to Python with additional improvements and functionality. PySBD passes 97.92{\%} of the Golden Rule Set examplars for English, an improvement of 25{\%} over the next best open source Python tool.",
}

Credit

This project wouldn't be possible without the great work done by the Pragmatic Segmenter team.

Comments
  • Question marks at the end swallowed

    Looks like the example with just question marks is good now:

    >>> segmenter.segment("??")
    ['??']
    

    but the example with double question marks as a token at the end of a sentence still loses the question marks:

    >>> segmenter.segment("T stands for the vector transposition. As shown in Fig. ??")
    ['T stands for the vector transposition.', 'As shown in Fig.']
    

    looks like this is the minimal repro:

    >>> segmenter.segment("Fig. ??")
    ['Fig.']
    
    bug edge-cases 
    opened by dakinggg 11
  • Pysbd just hangs🐛

    Describe the bug
    The process hangs.

    To Reproduce
    Steps to reproduce the behavior with the input text "f.302205302116302416302500302513915bd":

    flat = "f.302205302116302416302500302513915bd"
    print(flat)
    seg_clean = pysbd.Segmenter(language="en", clean=True, char_span=False)
    for z in seg_clean.segment(flat):
        print(z)

    Expected behavior
    The segmenter should return the input unchanged: ['f.302205302116302416302500302513915bd']

    help wanted 
    opened by kariato 8
  • Incorrect text span start and end returned

    Looks like something weird is happening in this case; note that the indices of the second text span are incorrect:

    >>> seg = pysbd.Segmenter(language='en', clean=False, char_span=True)
    >>> seg.segment("1) The first item. 2) The second item.")                                                                                
    [TextSpan(sent='1) The first item.', start=0, end=18), TextSpan(sent='2) The second item.', start=0, end=19)] 
    
    bug 
    opened by dakinggg 7
  • Performance improvement?

    I am not certain of this, but I suspect there might be room for performance improvement by using re.compile to precompile all of the needed regexes; otherwise they have to be recompiled regularly once the re cache of 100 patterns has been exceeded. A rough sketch of the idea is included below.

    question 
    opened by dakinggg 7
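
    A rough sketch of the precompilation idea (illustrative only, not pySBD's actual code; the pattern and helper name are made up for the example):

    import re

    # compiled once at import time instead of passing a raw pattern string
    # to re.sub on every call, which leans on re's bounded internal cache
    MULTI_PERIOD = re.compile(r"\.{2,}")

    def collapse_periods(text):
        # hypothetical helper: reuses the precompiled pattern on every call
        return MULTI_PERIOD.sub(".", text)

    print(collapse_periods("Please turn to p.. 55.."))  # Please turn to p. 55.
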
  • Slovak lang support

    We've added support for SBD in Slovak language text.

    Language specific improvements:

    • list of common Slovak abbreviations
    • list of prepositive abbreviations
    • list of number abbreviations
    • handling of roman numerals
    • handling of „text“ quotes, which are common in the Slovak language
    • handling of ordinal numerals in dates, such as 17. Apríl 2020
    • modified the replacement of periods in abbreviations, so it can consistently handle common Slovak abbreviations such as Company Name s. r. o.
    • disabled processing of alphabetical lists, because of conflicts with some common abbreviations

    The code has been tested for stability on a very large corpus of web text. There has been no rigorous testing of segmentation quality, but the subjective feeling in the team is very positive.

    language 
    opened by misotrnka 6
  • Different segmentation with Spacy and when using pySBD directly

    Firstly, thank you for this project - I was lucky to find it and it is really useful.

    I seem to have found a case where segmentation behaves differently when run within the spaCy pipeline versus when using pySBD directly. I stumbled on it with my own text, where a sentence following a quoted sentence was being lumped together with it. I looked through the Golden Rules, found this wasn't expected, and then noticed that even the text from one of your tests behaves differently in spaCy.

    To reproduce run these two bits of code:

    from pysbd.utils import PySBDFactory
    nlp = spacy.blank('en')
    nlp.add_pipe(PySBDFactory(nlp))
    doc = nlp("She turned to him, \"This is great.\" She held the book out to show him.")
    for sent in doc.sents:
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great." She held the book out to show him.

    import pysbd
    text = "She turned to him, \"This is great.\" She held the book out to show him."
    seg = pysbd.Segmenter(language="en", clean=False)
    #print(seg.segment(text))
    for sent in seg.segment(text):
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great."

    She held the book out to show him.

    The second way is the desired output (based on the rules at least)

    bug help wanted 
    opened by nmstoker 6
  • destructive behaviour in edge-cases

    As of v0.3.3, pySBD shows destructive behavior in some edge-cases even when setting the option clean to False. When dealing with OCR text, pySBD removes whitespace after multiple periods.

    To reproduce

    import pysbd
    
    splitter = pysbd.Segmenter(language="fr", clean=False)
    
    text = "Maissen se chargea du reste .. Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste ... Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste .... Logiquement,"
    print(splitter.segment(text))
    

    Actual output (note the missing whitespace after the final period in the examples with .. and ....):

    ['Maissen se chargea du reste .', '.', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '...', 'Logiquement,']
    

    Expected output

    ['Maissen se chargea du reste .', '. ', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '... ', 'Logiquement,']
    

    In general, pySBD works well. Many thanks @nipunsadvilkar. I can also look into this as soon as I find some time and open a pull request.

    bug edge-cases 
    opened by aflueckiger 5
  • 🏎 ⚡️ 💯 [Rough] Benchmark across Segmentation Tools, Libraries and Algorithms

    Segmentation Tools, Libraries and Algorithms:

    • [x] Stanza
    • [x] syntok
    • [x] NLTK
    • [x] spaCy
    • [x] blingfire

    | Tool      | Accuracy | Speed (ms) |
    |-----------|----------|------------|
    | blingfire | 75.00%   | 49.91      |
    | pySBD     | 97.92%   | 2449.18    |
    | syntok    | 68.75%   | 783.73     |
    | spaCy     | 52.08%   | 473.96     |
    | stanza    | 72.92%   | 120803.37  |
    | NLTK      | 56.25%   | 342.98     |

    opened by nipunsadvilkar 5
  • ✨ 💫  Support Multiple languages

    Languages to be supported:

    • [x] English
    • [x] Bulgarian
    • [x] Spanish
    • [x] Russian
    • [x] Arabic
    • [x] Amharic
    • [x] Marathi
    • [x] Hindi
    • [x] Armenian
    • [x] Persian
    • [x] Urdu
    • [x] Polish
    • [x] Chinese
    • [x] Dutch
    • [x] Danish
    • [x] French
    • [x] Italian
    • [x] Greek
    • [x] Burmese
    • [x] Japanese
    • [x] German
    • [x] Kazakh
    enhancement 
    opened by nipunsadvilkar 4
  • Regexp issues

    I'm getting errors because the regexp engine interprets parentheses: "unterminated subpattern" and "unbalanced parenthesis".

    I'm analysing very large amounts of text, so I'm not sure how these were triggered.

    opened by mollerhoj 4
  • Reduce some calls to re.sub

    So calls to re.compile are not a problem. The main thing slowing it down is lots of calls to re.sub in abbreviation_replacer.py. I reduced some of these calls, which speeds it up by a factor of ~3-3.5x on my machine for the specific (longish) document I tested with. I also included the script I used to test timing. Given that you are much more familiar with the codebase, see if my changes look reasonable; all the tests still pass. There are probably more ways to speed up the calls in that file.

    enhancement 
    opened by dakinggg 4
  • How is accuracy on OPUS-100 computed?

    Hi! Thanks for this library.

    Since there is no notion of documents in the OPUS-100 dataset, it is not clear to me how accuracy is computed. I tried a naive approach using pairwise joining of sentences:

    from datasets import load_dataset
    import pysbd
    
    if __name__ == "__main__":
        sentences = [
            sample["de"].strip()
            for sample in load_dataset("opus100", "de-en", split="test")["translation"]
        ]
    
        correct = 0
        total = 0
    
        segmenter = pysbd.Segmenter(language="de")
    
        for sent1, sent2 in zip(sentences, sentences[1:]):
            out = tuple(
                s.strip() for s in segmenter.segment(sent1 + " " + sent2)
            )
    
            total += 1
    
            if out == (sent1, sent2):
                correct += 1
    
        print(f"{correct}/{total} = {correct / total}")
    

    But I get 1011/1999 = 50.6% accuracy, which is not close to the 80.95% accuracy reported in the paper.

    Thanks for any help!

    opened by bminixhofer 1
  • Added decorator as required by latest SpaCy

    Hello!

    In using pySBD, I've noticed that the current example script no longer works with the latest version of spaCy (3.3.0). This is the traceback I get:

    Traceback (most recent call last):
      File "/Users/lucas/Code/significant-statements-extraction/scripts/test_pysbd.py", line 27, in <module>
        nlp.add_pipe(pysbd_sentence_boundaries)
      File "/Users/lucas/miniforge3/envs/pytorch_p39/lib/python3.9/site-packages/spacy/language.py", line 773, in add_pipe
        raise ValueError(err)
    ValueError: [E966] `nlp.add_pipe` now takes the string name of the registered component factory, not a callable component. Expected string, but got <function pysbd_sentence_boundaries at 0x11ffa9160> (name: 'None').
    
    - If you created your component with `nlp.create_pipe('name')`: remove nlp.create_pipe and call `nlp.add_pipe('name')` instead.
    
    - If you passed in a component like `TextCategorizer()`: call `nlp.add_pipe` with the string name instead, e.g. `nlp.add_pipe('textcat')`.
    
    - If you're using a custom component: Add the decorator `@Language.component` (for function components) or `@Language.factory` (for class components / factories) to your custom component and assign it a name, e.g. `@Language.component('your_name')`. You can then run `nlp.add_pipe('your_name')` to add it to the pipeline.
    

    This pull request adds a @Language.component decorator to make pySBD available in spaCy again.

    opened by soldni 0
  • Arabic sentence split on the Arabic comma

    Describe the bug: sentences are split on the Arabic comma.

    To Reproduce Steps to reproduce the behavior:

    import pysbd
    text = "هذه تجربة، للغة العربية"
    seg = pysbd.Segmenter(language="ar", clean=True)
    print(seg.segment(text))
    

    Output: ['هذه تجربة،', 'للغة العربية']

    Expected behavior: the text should not be split on the Arabic comma. Expected output: ['هذه تجربة، للغة العربية']

    Additional context: I fixed it locally by modifying pysbd/lang/arabic.py and deleting ، from SENTENCE_BOUNDARY_REGEX.

    opened by ymoslem 0
  • Does pySBD delete sentences after detection?

    Hey there, so I've been using pySBD to detect sentence boundaries in Hindi and Marathi text and then save the same data rearranged from a paragraph per sample to one sentence per sample. Unfortunately, the storage size has gone down from 22 GB to 14.5 GB after just detecting boundaries and saving one sentence per sample, and yes, I did turn off the clean arg.

    opened by StephennFernandes 0
  • Update pysbd_as_spacy_component.py

    Thanks for a great sentence splitting package. A small contribution after troubleshooting why the code was not working out of the box: spaCy v3 requires a string in the add_pipe() call, and the component needs to be declared using the @Language.component decorator. See also https://spacy.io/usage/processing-pipelines#custom-components. Hope it helps other users.

    opened by guebeln0 0
Releases(v0.3.4)
  • v0.3.4(Feb 11, 2021)

  • v0.3.3(Oct 8, 2020)

  • v0.3.2(Sep 11, 2020)

  • v0.3.1(Aug 11, 2020)

  • v0.3.0(Aug 11, 2020)

    • ✨ 💫 Support Multiple languages - #2
    • 🏎⚡️💯 Benchmark across Segmentation Tools, Libraries and Algorithms
    • 🎨 ♻️ Update sentence char_span logic
    • ⚡️ Performance improvements - #41
    • ♻️🐛 Refactor AbbreviationReplacer
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0rc(Jun 9, 2020)

    • ✨ 💫 Sentence char_span support via spaCy & regex approach - #63
    • ♻️ Refactoring to support multiple languages
    • ✨ 💫 Initial language support for Hindi, Marathi, Chinese, Spanish
    • ✅ Updated tests - more coverage & regression tests for issues
    • 👷👷🏻‍♀️ GitHub Actions for CI/CD
    • 💚☂️ Add code coverage via coverage.py and Codecov
    • 🐛 Fix incorrect text span & vanilla pysbd vs spaCy output discrepancy - #49, #53, #55, #59
    • 🐛 Fix NUMBERED_REFERENCE_REGEX to match zero or one time - #58
    • 🔐 Fix security vulnerability in bleach - #62
    Source code(tar.gz)
    Source code(zip)
  • v0.2.3(Nov 13, 2019)

  • v0.2.2(Nov 1, 2019)

  • v0.2.1(Oct 30, 2019)

  • v0.2.0(Oct 25, 2019)

    • ✨ Add optional char_span parameter to get each sentence and its (start, end) char offsets from the original text
    • ✨pySBD as a spaCy component example
    • 🐛 Fix double question mark swallow bug - #39
    Source code(tar.gz)
    Source code(zip)
  • v0.1.5(Oct 24, 2019)

  • v0.1.4(Oct 20, 2019)

    • ✨ ✅ Handle intermittent punctuation: added special case r"[。..!!?].*" to handle intermittent dots, exclamation marks, etc.; the special-cases group can be updated as per developer needs - #34
    Source code(tar.gz)
    Source code(zip)
  • v0.1.3(Oct 19, 2019)

    • 🐛 Fix lists_item_replacer - #29
    • 🐛 Fix & ♻️ refactor replace_multi_period_abbreviations - #30
    • 🐛 Fix abbreviation_replacer - #31
    • ✅ Add regression tests for issues
    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Oct 18, 2019)

  • v0.1.1(Oct 9, 2019)

Owner
Nipun Sadvilkar
I like to explore the jungle of data with Python as my Swiss knife, with pandas, numpy, matplotlib and scikit-learn as its multi-tools 😅