SAHI: Slicing Aided Hyper Inference

A vision library for performing sliced inference on large images/small objects

Overview

Object detection and instance segmentation are among the most important application areas in computer vision. However, detecting small objects and running inference on large images remain major challenges in practice. SAHI helps developers overcome these real-world problems.

Getting started

Blogpost

Check the official SAHI blog post.

Installation

  • Install sahi using conda:
conda install -c obss sahi
  • Install sahi using pip:
pip install sahi
  • Install your desired version of pytorch and torchvision:
pip install torch torchvision
  • Install your desired detection framework (such as mmdet):
pip install mmdet
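
To verify the installation, you can check that the package imports and reports its version (a small sketch, assuming the installed package exposes __version__ as recent releases do):

import sahi

# prints the installed sahi version if the attribute is available
print(sahi.__version__)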

Usage

  • Sliced inference:
from sahi.predict import get_sliced_prediction

# image: a file path or numpy array; detection_model: a loaded sahi detection model wrapper
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

Refer to the inference notebook for detailed usage.
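
The detection_model argument above must be created beforehand. A minimal sketch, assuming the MMDetection wrapper is exposed as MmdetDetectionModel in sahi.model and accepts the parameters shown (class name, parameter names, and paths below are illustrative and may differ between versions):

from sahi.model import MmdetDetectionModel  # assumed module/class name

detection_model = MmdetDetectionModel(
    model_path="checkpoints/cascade_mask_rcnn.pth",  # placeholder MMDetection checkpoint path
    config_path="configs/cascade_mask_rcnn.py",      # placeholder MMDetection config path
    confidence_threshold=0.4,
    device="cuda:0",  # or "cpu"
)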

  • Slice an image:
from sahi.slicing import slice_image

slice_image_result, num_total_invalid_segmentation = slice_image(
    image=image_path,
    output_file_name=output_file_name,
    output_dir=output_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
  • Slice a COCO formatted dataset:
from sahi.slicing import slice_coco

coco_dict, coco_path = slice_coco(
    coco_annotation_file_path=coco_annotation_file_path,
    image_dir=image_dir,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
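
The returned coco_dict can then be written to disk with the save_json utility that appears elsewhere in this document (a small usage sketch; the output file name is illustrative):

from sahi.utils.file import save_json

# persist the sliced dataset as a new COCO annotation file
save_json(coco_dict, "sliced_coco.json")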

Adding new detection framework support

The sahi library currently supports only MMDetection models; however, it is easy to add new frameworks.

All you need to do is create a new class in model.py that implements the DetectionModel interface, as sketched below. You can take the MMDetection wrapper as a reference.
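
A minimal skeleton for such a wrapper might look like the following; the import path and method names are assumptions based on the description above, so check the MMDetection wrapper in model.py for the exact interface:

from sahi.model import DetectionModel  # assumed import path

class MyFrameworkDetectionModel(DetectionModel):
    def load_model(self):
        # load the framework-specific model from self.model_path / self.config_path
        # and keep a reference to it for later inference calls
        raise NotImplementedError

    def perform_inference(self, image):
        # run the underlying model on a numpy image and store the raw output
        # so it can be converted into sahi object predictions afterwards
        raise NotImplementedError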

Contributors

Comments
  • improve json loading

    Fixes the following bug:

    Traceback (most recent call last):
      File "C:\ProgramData\Anaconda3\Scripts\labelme2coco-script.py", line 33, in <module>
        sys.exit(load_entry_point('labelme2coco==0.2.1', 'console_scripts', 'labelme2coco')())
      File "C:\ProgramData\Anaconda3\lib\site-packages\labelme2coco-0.2.1-py3.9.egg\labelme2coco\cli.py", line 8, in app
      File "C:\ProgramData\Anaconda3\lib\site-packages\fire\core.py", line 141, in Fire
        component_trace = _Fire(component, args, parsed_flag_args, context, name)
      File "C:\ProgramData\Anaconda3\lib\site-packages\fire\core.py", line 466, in _Fire
        component, remaining_args = _CallAndUpdateTrace(
      File "C:\ProgramData\Anaconda3\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
        component = fn(*varargs, **kwargs)
      File "C:\ProgramData\Anaconda3\lib\site-packages\labelme2coco-0.2.1-py3.9.egg\labelme2coco\__init__.py", line 32, in convert
      File "C:\ProgramData\Anaconda3\lib\site-packages\labelme2coco-0.2.1-py3.9.egg\labelme2coco\labelme2coco.py", line 41, in get_coco_from_labelme_folder
      File "C:\ProgramData\Anaconda3\lib\site-packages\sahi\utils\file.py", line 66, in load_json
        data = json.load(json_file)
      File "C:\ProgramData\Anaconda3\lib\json\__init__.py", line 293, in load
        return loads(fp.read(),
    UnicodeDecodeError: 'gbk' codec can't decode byte 0x90 in position 82: illegal multibyte sequence
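
    One way to fix this (a minimal sketch; the merged change may differ) is to open the file with an explicit UTF-8 encoding inside sahi.utils.file.load_json instead of relying on the platform default codec:

    import json

    def load_json(load_path, encoding="utf-8"):
        # read with an explicit encoding so parsing does not fall back to the
        # platform default codec (e.g. gbk on a Chinese Windows locale)
        with open(load_path, encoding=encoding) as json_file:
            return json.load(json_file)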
    
    enhancement 
    opened by qingfengtommy 20
  • add support for video prediction

    This PR brings full support for sliced/standard inference on video input for all MMDetection/Detectron2/YOLOv5 models :fire:

    • Just give a video file as the source:
    sahi predict --model_path yolov5s.pt --model_type yolov5 --source video.mp4 --export_visual
    

    You can also view the video rendering during inference with --view_video:

    sahi predict --model_path yolov5s.pt --model_type yolov5 --source video.mp4 --export_visual --view_video
    
    • To skip forward 100 frames, press D in the opened window
    • To go back 100 frames, press A
    • To skip forward 20 frames, press G
    • To go back 20 frames, press F
    • To exit, press Esc

    Note: If --view_video is slow, you can add the --frame_skip_interval=20 argument to skip 20 frames at a time.

    enhancement 
    opened by madenburak 15
  • add YOLOX model support

    I added the YOLOX model to the SAHI library in its simplest form. #535

    To be fixed:

    • [X] Resize module written.
    • [ ] Test code to be written.
    • [ ] Notebook file to be created.

    I deliberately did not write the resize code. Can we create a common resize module for SAHI? Could you review the example in the YOLOX repo and get back to me? YOLOX-RESIZE

    Also, what if we asked the user for the nms_thre value? Right now I am setting it manually. The same applies to the yolov7 model.

    enhancement 
    opened by kadirnar 13
  • add automatic slice size calculation

    If I do not want to provide slicing parameters, or I do not know what values to give, this change automatically calculates the slicing parameters according to the resolution of the image (see the illustrative sketch below).
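
    The exact heuristic lives in the PR; a purely illustrative sketch of the idea (not the PR's actual logic, and the function name is hypothetical) could look like this:

    def auto_slice_params(image_height, image_width):
        # illustrative heuristic: use larger slices for higher-resolution images
        resolution = image_height * image_width
        if resolution >= 8_000_000:
            slice_size = 1024
        elif resolution >= 2_000_000:
            slice_size = 512
        else:
            slice_size = 256
        # slice_height, slice_width, overlap_height_ratio, overlap_width_ratio
        return slice_size, slice_size, 0.2, 0.2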

    enhancement 
    opened by mcvarer 7
  • Added support for exporting predictions in COCO format

    Changelog:

    • A COCO prediction object of class CocoPrediction can be added to a COCO image object of class sahi.utils.coco.CocoImage using the method <CocoImage_Instance>.add_prediction
    • The __repr__ method of class sahi.utils.coco.CocoImage was updated to reflect predictions
    • An additional property prediction_array was added to class sahi.utils.coco.Coco to export COCO predictions in a JSON-serializable format. The logic for the property is defined in the new function sahi.utils.coco.create_coco_prediction_array
    • Added test cases for the above in test_cocoutils.py
    • Updated coco.md with the necessary documentation

    The basic flow is as follows:

    from sahi.utils.coco import Coco, CocoImage, CocoAnnotation, CocoPrediction
    from sahi.utils.file import save_json
    
    coco_obj = Coco()
    
    # add n images to coco_obj
    for _ in range(n):
        image = CocoImage(**kwargs)
        
        # add n annotations to the image
        for _ in range(n):
            image.add_annotation(CocoAnnotation(**kwargs))
        
        # add n predictions to the image
        for _ in range(n):
            image.add_prediction(CocoPrediction(**kwargs))
        
        # add image to coco object
        coco_obj.add_image(image)
    
    # export ground truth annotations
    coco_gt = coco_obj.json
    save_json(coco_gt, "ground_truth.json")
    
    # export predictions
    coco_predictions = coco_obj.prediction_array
    save_json(coco_predictions, "predictions.json")
    

    Why is this useful?

    The user can use the exported files to calculate standard COCO metrics with the official COCO API (pycocotools) with relative ease. Here is an example of doing that:

    from pycocotools.cocoeval import COCOeval
    from pycocotools.coco import COCO
    
    # load json files
    coco_ground_truth = COCO(annotation_file="ground_truth.json")
    coco_predictions = coco_ground_truth.loadRes("predictions.json")
    
    coco_evaluator = COCOeval(coco_ground_truth, coco_predictions, "bbox")
    coco_evaluator.evaluate()
    coco_evaluator.accumulate()
    coco_evaluator.summarize()
    

    This can be further expanded to define a class that can be conveniently used to evaluate predictions, something like this:

    class DatasetEvaluator:
        # user can inherit and overwrite the load_model, get_annotation and get_prediction methods
        def __init__(self, **kwargs):
            self.images = []       # list of image metadata
            self.model = None      # model placeholder
            self.dataset = Coco()  # coco object

        def load_model(self, **kwargs):
            # logic for loading the model goes here
            # set self.model to the loaded model
            pass

        def get_annotation(self, **kwargs):
            # logic for annotation goes here
            # must return a sahi.utils.coco.CocoAnnotation instance
            pass

        def get_prediction(self, **kwargs):
            # logic for prediction goes here
            # must return a sahi.utils.coco.CocoPrediction instance
            pass

        def run_inference(self, **kwargs):
            # loop over images, create a CocoImage, add annotations and predictions to it,
            # then add those CocoImage instances to self.dataset
            pass

        def evaluate(self, **kwargs):
            # pycocotools code to generate metrics goes here
            pass
    documentation enhancement 
    opened by PushpakBhoge 7
  • Downgrade Pillow


    Issue

    When running the sahi colab notebooks, an error occurs when attempting to run inference using a pretrained model:

    ImportError                               Traceback (most recent call last)
    [<ipython-input-5-6c3be4aaa524>](https://localhost:8080/#) in <module>
    ----> 1 detection_model = AutoDetectionModel.from_pretrained(
          2     model_type='detectron2',
          3     model_path=model_path,
          4     config_path=model_path,
          5     confidence_threshold=0.5,
    
    14 frames
    [/usr/local/lib/python3.8/dist-packages/PIL/ImageFont.py](https://localhost:8080/#) in <module>
         35 from . import Image
         36 from ._deprecate import deprecate
    ---> 37 from ._util import is_directory, is_path
         38 
         39 
    
    ImportError: cannot import name 'is_directory' from 'PIL._util' (/usr/local/lib/python3.8/dist-packages/PIL/_util.py)
    

    Solution

    A solution to this problem (as per this Stack Overflow post) is to downgrade Pillow and freeze it at version 6.2.2. Using this version of Pillow allows users to successfully execute the entire notebook demonstrating the functionality of sahi.
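
    For example, the downgrade can be applied with:

    pip install Pillow==6.2.2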

    Risks

    Freezing Pillow at an earlier version may result in unexpected behaviour and lead to vulnerabilities in the future if more up-to-date versions of Pillow are not supported.

    opened by hlydecker 5
  • Changed Mask class to store RLE encoded masks to save RAM.


    Changed Mask class to store RLE encoded masks to save RAM usage on large images. (Work in progress)

    When doing inference on large images (5300 × 3600 pixels) with around 100 instances, the sahi library uses around 14 GB of RAM. I introduced internal conversion to RLE encoded masks in the Mask class to save RAM and got it down to around 2 GB for our use case (see plots).

    [RAM usage comparison plot]

    Since pycocotools is a bit tricky to install on Windows, the code is designed to use RLE encoding only when pycocotools is available.
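
    A minimal sketch of the underlying idea using pycocotools' RLE utilities (illustrative only, not the PR's exact Mask implementation):

    import numpy as np
    from pycocotools import mask as mask_util

    # a large boolean segmentation mask (mostly empty)
    bool_mask = np.zeros((5300, 3600), dtype=bool)
    bool_mask[1000:1500, 800:1200] = True

    # encode to RLE: stores run lengths instead of one value per pixel
    rle = mask_util.encode(np.asfortranarray(bool_mask.astype(np.uint8)))

    # decode back to a dense mask only when it is actually needed
    restored = mask_util.decode(rle).astype(bool)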

    Inference time does not seem to increase with these changes, but I only ran a single test, so take the results with a grain of salt.

    [inference time comparison plot]

    enhancement 
    opened by ChristofferEdlund 5
  • Fix bug that Unable to print for prediction time when verbose=2

    Executing the code below raises an error with the current code:

    result = get_sliced_prediction(
        "demo_data/small-vehicles1.jpeg",
        detection_model,
        slice_height = 256,
        slice_width = 256,
        overlap_height_ratio = 0.2,
        overlap_width_ratio = 0.2,
        verbose=2
    )
    
    opened by youngjae-avikus 5
  • add detectron2 support

    This PR aims to add the detectron2 library to SAHI.

    Changes made:

    • model.py: added the Detectron2Model(DetectionModel) class.
    • utils/detectron2.py: added the test cfg code and .pkl paths.
    • tests/test_detectron2.py: created test_detectron2.py to test the model.

    enhancement 
    opened by kadirnar 5
  • add support to instantiate a `DetectionModel` from layer


    This PR adds support to instantiate a DetectionModel out of a YOLOv5 model trained on Layer. It includes:

    • A new from_layer() method in AutoDetectionModel to load models from Layer.
    • A new set_model() method in DetectionModel to create a DetectionModel out of an already loaded model.
    • Tests for from_layer() and set_model().

    Example

    You can pass the path of a YOLOv5 model trained on Layer to instantiate a DetectionModel. Here we fetch the yolov5s pretrained model from this Layer project:

    from sahi import AutoDetectionModel
    detection_model = AutoDetectionModel.from_layer("layer/yolov5/models/yolov5s")
    

    Tests

    To run the tests related to from_layer() functionality you can:

    pytest tests/test_layer.py
    

    enhancement
    opened by mecevit 3
  • add `get_coco_with_clipped_bboxes` utility to coco class


    Limit overflowing coco bounding boxes to image dimensions

    Usage:

    coco = Coco.from_coco_dict_or_path(coco_path, clip_bboxes_to_img_dims=True)
    

    or,

    coco = coco.get_coco_with_clipped_bboxes()
    
    opened by ssahinnkadir 3
  • add Tensorflow Hub detector support

    The tensorflow and tensorflow_hub packages need to be installed for the tests to pass.

    • package_testing.yml
    • ci.yml
    - name: Install tensorflow and tensorflow_hub
      run: |
        pip install tensorflow
        pip install tensorflow_hub

    Is this format suitable? @fcakyon

    enhancement workflows 
    opened by kadirnar 19
Releases (0.11.9)
  • 0.11.9(Dec 24, 2022)

    What's Changed

    • update yolov5 version in ci by @fcakyon in https://github.com/obss/sahi/pull/797
    • fix torch dependency in HuggingfaceDetectionModel by @fcakyon in https://github.com/obss/sahi/pull/798
    • update installation, support latest transformers version by @fcakyon in https://github.com/obss/sahi/pull/799

    Full Changelog: https://github.com/obss/sahi/compare/0.11.8...0.11.9

    Source code(tar.gz)
    Source code(zip)
  • 0.11.8(Dec 23, 2022)

    What's Changed

    • update coco to yolov5 export by @fcakyon in https://github.com/obss/sahi/pull/794

    Full Changelog: https://github.com/obss/sahi/compare/0.11.7...0.11.8

    Source code(tar.gz)
    Source code(zip)
  • 0.11.7(Dec 20, 2022)

    What's Changed

    • make default verbose false for min version check by @fcakyon in https://github.com/obss/sahi/pull/765
    • download mmdet yolox model from hfhub in tests by @fcakyon in https://github.com/obss/sahi/pull/766
    • update torchvision demo by @fcakyon in https://github.com/obss/sahi/pull/776
    • The coco.md file has been updated. by @kadirnar in https://github.com/obss/sahi/pull/777
    • Added indent parameter to save_json parameter. by @kadirnar in https://github.com/obss/sahi/pull/783
    • fix slicing tests by @fcakyon in https://github.com/obss/sahi/pull/787
    • fix json.decoder.JSONDecodeError in coco to yolov5 conversion by @kadirnar in https://github.com/obss/sahi/pull/786

    Full Changelog: https://github.com/obss/sahi/compare/0.11.6...0.11.7

    Source code(tar.gz)
    Source code(zip)
  • 0.11.6(Dec 1, 2022)

    What's Changed

    • fix detectron2 device by @fcakyon in https://github.com/obss/sahi/pull/763
    • update dependency versions in ci by @fcakyon in https://github.com/obss/sahi/pull/760
    • remove python3.6 in ci by @fcakyon in https://github.com/obss/sahi/pull/762

    Full Changelog: https://github.com/obss/sahi/compare/0.11.5...0.11.6

    Source code(tar.gz)
    Source code(zip)
  • 0.11.5(Nov 27, 2022)

    What's Changed

    • fixes a bug that prevents cuda device selection by @fcakyon in https://github.com/obss/sahi/pull/756

    Full Changelog: https://github.com/obss/sahi/compare/0.11.4...0.11.5

    Source code(tar.gz)
    Source code(zip)
  • 0.11.4(Nov 15, 2022)

    What's Changed

    • improve bbox data structure by @fcakyon in https://github.com/obss/sahi/pull/730
    • make some classes importable from highest level by @fcakyon in https://github.com/obss/sahi/pull/731
    • add min dependency version check by @fcakyon in https://github.com/obss/sahi/pull/734
    • handle numpy bbox in object annotation by @fcakyon in https://github.com/obss/sahi/pull/735
    • add 2 slicing utils by @fcakyon in https://github.com/obss/sahi/pull/736
    • fix predict script by @fcakyon in https://github.com/obss/sahi/pull/737
    • update torch torchvision versions in ci by @fcakyon in https://github.com/obss/sahi/pull/729
    • update ci workflow name by @fcakyon in https://github.com/obss/sahi/pull/732

    Full Changelog: https://github.com/obss/sahi/compare/0.11.3...0.11.4

    Source code(tar.gz)
    Source code(zip)
  • 0.11.3(Nov 14, 2022)

    What's Changed

    • update model dependency and device management by @fcakyon in https://github.com/obss/sahi/pull/725
    • implement indexing support for slice image result by @fcakyon in https://github.com/obss/sahi/pull/726
    • update version by @fcakyon in https://github.com/obss/sahi/pull/727

    Full Changelog: https://github.com/obss/sahi/compare/0.11.2...0.11.3

    Source code(tar.gz)
    Source code(zip)
  • 0.11.2(Nov 14, 2022)

    What's Changed

    • support uppercase letter image/video extensions by @fcakyon in https://github.com/obss/sahi/pull/713
    • add cited paper list and competition winners by @fcakyon in https://github.com/obss/sahi/pull/705
    • fix a typo in readme by @fcakyon in https://github.com/obss/sahi/pull/706
    • update version by @fcakyon in https://github.com/obss/sahi/pull/707
    • make ci trigger more efficient by @fcakyon in https://github.com/obss/sahi/pull/709
    • update mmcv mmdet in ci by @fcakyon in https://github.com/obss/sahi/pull/714
    • update contributing and contributors sections in readme by @fcakyon in https://github.com/obss/sahi/pull/715
    • update predict docs by @fcakyon in https://github.com/obss/sahi/pull/723

    Full Changelog: https://github.com/obss/sahi/compare/0.11.1...0.11.2

    Source code(tar.gz)
    Source code(zip)
  • 0.11.1(Nov 1, 2022)

    What's Changed

    • fix coco segm eval by @fcakyon in https://github.com/obss/sahi/pull/701
    • add float bbox and segm point support, remove mot utils by @fcakyon in https://github.com/obss/sahi/pull/702

    Full Changelog: https://github.com/obss/sahi/compare/0.11.0...0.11.1

    Source code(tar.gz)
    Source code(zip)
  • 0.11.0(Oct 29, 2022)

    What's Changed

    • refactor models structure by @fcakyon in https://github.com/obss/sahi/pull/694

    Full Changelog: https://github.com/obss/sahi/compare/0.10.8...0.11.0

    Source code(tar.gz)
    Source code(zip)
  • 0.10.8(Oct 26, 2022)

    What's Changed

    • remove layer support by @fcakyon in https://github.com/obss/sahi/pull/692
    • Fixed a bug in greedynmm func by @youngjae-avikus in https://github.com/obss/sahi/pull/691
    • flake8 dependency fix for python3.7. by @devrimcavusoglu in https://github.com/obss/sahi/pull/676
    • fix links in readme by @fcakyon in https://github.com/obss/sahi/pull/689

    Full Changelog: https://github.com/obss/sahi/compare/0.10.7...0.10.8

    Source code(tar.gz)
    Source code(zip)
  • 0.10.7(Sep 27, 2022)

    What's Changed

    • update pybboxes version by @fcakyon in https://github.com/obss/sahi/pull/598
    • add support for yolov5==6.1.9 by @fcakyon in https://github.com/obss/sahi/pull/601
    • refactor demo notebooks by utilizing newly introduced AutoDetectionMo… by @ishworii in https://github.com/obss/sahi/pull/516
    • add norfair>=2.0.0 support by @fcakyon in https://github.com/obss/sahi/pull/595
    • Changed Mask class to store RLE encoded masks to save RAM. by @ChristofferEdlund in https://github.com/obss/sahi/pull/599
    • update workflow versions by @fcakyon in https://github.com/obss/sahi/pull/603

    New Contributors

    • @ishworii made their first contribution in https://github.com/obss/sahi/pull/516
    • @ChristofferEdlund made their first contribution in https://github.com/obss/sahi/pull/599

    Full Changelog: https://github.com/obss/sahi/compare/0.10.6...0.10.7

    Source code(tar.gz)
    Source code(zip)
  • 0.10.6(Sep 24, 2022)

    What's Changed

    • support yolov5>=6.1.9 by @fcakyon in https://github.com/obss/sahi/pull/592
    • update installation in readme by @fcakyon in https://github.com/obss/sahi/pull/584

    Full Changelog: https://github.com/obss/sahi/compare/0.10.5...0.10.6

    Source code(tar.gz)
    Source code(zip)
  • 0.10.5(Sep 4, 2022)

    What's Changed

    bugfix

    • fix auto slice warning by @fcakyon in https://github.com/obss/sahi/pull/574
    • fix a slice_image error by @fcakyon in https://github.com/obss/sahi/pull/575

    other

    • fix layer tests by @fcakyon in https://github.com/obss/sahi/pull/577
    • update ci package versions by @fcakyon in https://github.com/obss/sahi/pull/576

    Full Changelog: https://github.com/obss/sahi/compare/0.10.4...0.10.5

    Source code(tar.gz)
    Source code(zip)
  • 0.10.4(Aug 12, 2022)

    What's Changed

    • hotfix for occasional segmentation fault by @fcakyon in https://github.com/obss/sahi/pull/546
    • refactor requirement checking by @fcakyon in https://github.com/obss/sahi/pull/549
    • Code formatting and checks are moved to a single module. by @devrimcavusoglu in https://github.com/obss/sahi/pull/548
    • fix a typo in type hinting by @ymerkli in https://github.com/obss/sahi/pull/553
    • fix an incorrect url in comments by @aynursusuz in https://github.com/obss/sahi/pull/554
    • update version by @fcakyon in https://github.com/obss/sahi/pull/555
    • fix linting script by @devrimcavusoglu in https://github.com/obss/sahi/pull/558
    • fix an incorrect url in comments by @aynursusuz in https://github.com/obss/sahi/pull/559

    New Contributors

    • @ymerkli made their first contribution in https://github.com/obss/sahi/pull/553
    • @aynursusuz made their first contribution in https://github.com/obss/sahi/pull/554

    Full Changelog: https://github.com/obss/sahi/compare/0.10.3...0.10.4

    Source code(tar.gz)
    Source code(zip)
  • 0.10.3(Aug 2, 2022)

    What's Changed

    • fix coco2fiftyone script by @fcakyon in https://github.com/obss/sahi/pull/540
    • fix segmentation fault in same cases by @fcakyon in https://github.com/obss/sahi/pull/541

    Full Changelog: https://github.com/obss/sahi/compare/0.10.2...0.10.3

    Source code(tar.gz)
    Source code(zip)
  • 0.10.2(Jul 28, 2022)

    What's Changed

    • add automatic slice size calculation by @mcvarer in https://github.com/obss/sahi/pull/512
    • Fix bug that Unable to print for prediction time when verbose=2 by @youngjae-avikus in https://github.com/obss/sahi/pull/521
    • pybboxes allow oob, strict=False fix. by @devrimcavusoglu in https://github.com/obss/sahi/pull/528
    • update predict verbose by @fcakyon in https://github.com/obss/sahi/pull/514
    • update pybboxes version by @fcakyon in https://github.com/obss/sahi/pull/531

    New Contributors

    • @youngjae-avikus made their first contribution in https://github.com/obss/sahi/pull/521

    Full Changelog: https://github.com/obss/sahi/compare/0.10.1...0.10.2

    Source code(tar.gz)
    Source code(zip)
  • 0.10.1(Jun 25, 2022)

    What's Changed

    • add python 3.10 support by @fcakyon in https://github.com/obss/sahi/pull/503
    • add file_name to export_visuals by @mcvarer in https://github.com/obss/sahi/pull/507
    • refactor automodel to lazyload models by @fcakyon in https://github.com/obss/sahi/pull/509
    • update automodel loading method to from_pretrained by @fcakyon in https://github.com/obss/sahi/pull/510

    New Contributors

    • @mcvarer made their first contribution in https://github.com/obss/sahi/pull/507

    Full Changelog: https://github.com/obss/sahi/compare/0.10.0...0.10.1

    Source code(tar.gz)
    Source code(zip)
  • 0.10.0(Jun 21, 2022)

    New Features

    - Layer.ai integration

    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction
    
    detection_model = AutoDetectionModel.from_layer("layer/yolov5/models/yolov5s")
    
    result = get_sliced_prediction(
        "image.jpeg",
        detection_model,
        slice_height = 512,
        slice_width = 512,
        overlap_height_ratio = 0.2,
        overlap_width_ratio = 0.2
    )
    

    - HuggingFace Transformers object detectors

    from sahi.model import HuggingfaceDetectionModel
    from sahi.predict import get_sliced_prediction
    
    detection_model = HuggingfaceDetectionModel(
        model_path="facebook/detr-resnet-50",
        image_size=640,
        confidence_threshold=0.5
    )
    
    result = get_sliced_prediction(
        "image.jpeg",
        detection_model,
        slice_height = 512,
        slice_width = 512,
        overlap_height_ratio = 0.2,
        overlap_width_ratio = 0.2
    )
    

    - TorchVision object detectors

    import torchvision
    from sahi.model import TorchVisionDetectionModel
    from sahi.predict import get_sliced_prediction
    
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    
    detection_model = TorchVisionDetectionModel(
        model=model,
        image_size=640,
        confidence_threshold=0.5
    )
    
    result = get_sliced_prediction(
        "image.jpeg",
        detection_model,
        slice_height = 512,
        slice_width = 512,
        overlap_height_ratio = 0.2,
        overlap_width_ratio = 0.2
    )
    

    - Support for exporting predictions in COCO format

    from sahi.utils.coco import Coco, CocoImage, CocoAnnotation, CocoPrediction
    from sahi.utils.file import save_json
    from pycocotools.cocoeval import COCOeval
    from pycocotools.coco import COCO
    
    coco_obj = Coco()
    
    # add n images to coco_obj
    for _ in range(n):
        image = CocoImage(**kwargs)
        
        # add n annotations to the image
        for _ in range(n):
            image.add_annotation(CocoAnnotation(**kwargs))
        
        # add n predictions to the image
        for _ in range(n):
            image.add_prediction(CocoPrediction(**kwargs))
        
        # add image to coco object
        coco_obj.add_image(image)
    
    # export ground truth annotations
    coco_gt = coco_obj.json
    save_json(coco_gt, "ground_truth.json")
    
    # export predictions 
    coco_predictions = coco_obj.prediction_array
    save_json(coco_predictions, "predictions.json")
    
    coco_ground_truth = COCO(annotation_file="ground_truth.json")
    coco_predictions = coco_ground_truth.loadRes("predictions.json")
    coco_evaluator = COCOeval(coco_ground_truth, coco_predictions, "bbox")
    coco_evaluator.evaluate()
    coco_evaluator.accumulate()
    coco_evaluator.summarize()
    

    What's Changed

    • refactor torch utils by @fcakyon in https://github.com/obss/sahi/pull/468
    • add automodel structure for unified model loading by @fcakyon in https://github.com/obss/sahi/pull/469
    • add support to instantiate a DetectionModel from layer by @mecevit in https://github.com/obss/sahi/pull/462
    • refactor automodel by @fcakyon in https://github.com/obss/sahi/pull/470
    • update layer versions in workflows by @fcakyon in https://github.com/obss/sahi/pull/471
    • update version to 0.10.0 by @fcakyon in https://github.com/obss/sahi/pull/474
    • Added support for exporting predictions in COCO format by @PushpakBhoge in https://github.com/obss/sahi/pull/465
    • update contributors in readme by @fcakyon in https://github.com/obss/sahi/pull/477
    • update device priority for Detectron2DetectionModel by @fcakyon in https://github.com/obss/sahi/pull/479
    • fix pickle export for video by @fcakyon in https://github.com/obss/sahi/pull/481
    • update continuous integration by @fcakyon in https://github.com/obss/sahi/pull/483
    • refactor import and torch utils by @fcakyon in https://github.com/obss/sahi/pull/484
    • make detectionmodel classes more explicit in automodel by @fcakyon in https://github.com/obss/sahi/pull/485
    • utilize check_requirements in several modules by @fcakyon in https://github.com/obss/sahi/pull/487
    • update package versions in workflows by @fcakyon in https://github.com/obss/sahi/pull/488
    • add support for huggingface transformers object detectors by @devrimcavusoglu in https://github.com/obss/sahi/pull/475
    • add torchvision detector support by @fcakyon in https://github.com/obss/sahi/pull/486
    • remove legacy image_size parameter by @kadirnar in https://github.com/obss/sahi/pull/494
    • AutoDetectionModel can be imported from sahi by @fcakyon in https://github.com/obss/sahi/pull/498
    • add python3.6 support by @fcakyon in https://github.com/obss/sahi/pull/489
    • refactor exception handling by @kadirnar in https://github.com/obss/sahi/pull/499
    • improve requirements and import handling by @fcakyon in https://github.com/obss/sahi/pull/502

    New Contributors

    • @mecevit made their first contribution in https://github.com/obss/sahi/pull/462
    • @PushpakBhoge made their first contribution in https://github.com/obss/sahi/pull/465

    Full Changelog: https://github.com/obss/sahi/compare/0.9.4...0.10.0

    Source code(tar.gz)
    Source code(zip)
  • 0.9.4(May 28, 2022)

    What's Changed

    • reduce ram usage by adding buffer based merging by @weypro in https://github.com/obss/sahi/pull/445
    • improve json loading by @qingfengtommy in https://github.com/obss/sahi/pull/453

    New Contributors

    • @weypro made their first contribution in https://github.com/obss/sahi/pull/445
    • @qingfengtommy made their first contribution in https://github.com/obss/sahi/pull/453

    Full Changelog: https://github.com/obss/sahi/compare/0.9.3...0.9.4

    Source code(tar.gz)
    Source code(zip)
  • 0.9.3(May 8, 2022)

    What's Changed

    • add support for video prediction by @madenburak in https://github.com/obss/sahi/pull/442


    • export prediction visuals by default by @fcakyon in ##
    • add detection_model input to predict function by @ssahinnkadir in https://github.com/obss/sahi/pull/443
    • refactor postprocess call by @fcakyon in https://github.com/obss/sahi/pull/458
    • update yolov5, mmdet, norfair versions in ci by @fcakyon in https://github.com/obss/sahi/pull/459

    New Contributors

    • @madenburak made their first contribution in https://github.com/obss/sahi/pull/442

    Full Changelog: https://github.com/obss/sahi/compare/0.9.2...0.9.3

    Source code(tar.gz)
    Source code(zip)
  • 0.9.2(Apr 9, 2022)

    What's Changed

    • fix fiftyone utils by @fcakyon in https://github.com/obss/sahi/pull/423
    • update paper doi badge by @fcakyon in https://github.com/obss/sahi/pull/424
    • update env setup in readme by @kadirnar in https://github.com/obss/sahi/pull/408
    • update contributing section in readme by @fcakyon in https://github.com/obss/sahi/pull/434
    • update cli docs by @fcakyon in https://github.com/obss/sahi/pull/437
    • update package versions in ci by @fcakyon in https://github.com/obss/sahi/pull/439
    • update version by @fcakyon in https://github.com/obss/sahi/pull/440

    Full Changelog: https://github.com/obss/sahi/compare/0.9.1...0.9.2

    Source code(tar.gz)
    Source code(zip)
    test.mp4(686.92 KB)
  • 0.9.1(Mar 17, 2022)

    What's Changed

    • add the list of competitions sahi made us win by @fcakyon in https://github.com/obss/sahi/pull/385
    • add citation to paper by @fcakyon in https://github.com/obss/sahi/pull/387
    • add arxiv url for the SAHI paper by @fcakyon in https://github.com/obss/sahi/pull/388
    • handle invalid mask prediction by @fcakyon in https://github.com/obss/sahi/pull/390
    • improve code quality by @fcakyon in https://github.com/obss/sahi/pull/398
    • improve nms postprocess by @tureckova in https://github.com/obss/sahi/pull/405

    Full Changelog: https://github.com/obss/sahi/compare/0.9.0...0.9.1

    Source code(tar.gz)
    Source code(zip)
  • 0.9.0(Feb 12, 2022)

    detectron2

    What's Changed

    • add detectron2 support by @kadirnar in https://github.com/obss/sahi/pull/322
    • update detectron notebook by @fcakyon in https://github.com/obss/sahi/pull/355
    • refactor readme by @fcakyon in https://github.com/obss/sahi/pull/316
    • refactor slice_coco script and cli by @fcakyon in https://github.com/obss/sahi/pull/359
    • update analysis gif in readme by @fcakyon in https://github.com/obss/sahi/pull/362
    • update slice_coco export naming by @fcakyon in https://github.com/obss/sahi/pull/361
    • fix broken links in readme by @fcakyon in https://github.com/obss/sahi/pull/365
    • add kaggle notebook into readme by @rkinas in https://github.com/obss/sahi/pull/366
    • handle when class name contains invalid char by @fcakyon in https://github.com/obss/sahi/pull/369
    • handle out of image bbox predictions by @fcakyon in https://github.com/obss/sahi/pull/373
    • handle annotation dict without segmentation by @tureckova in https://github.com/obss/sahi/pull/374
    • fix unused coco util by @fcakyon @oulcan in https://github.com/obss/sahi/pull/375
    • fix coco util tests by @fcakyon in https://github.com/obss/sahi/pull/376
    • update torch, torchvision, mmdet, mmcv in tests by @fcakyon in https://github.com/obss/sahi/pull/379
    • handle nms-postprocess in edge cases by @fcakyon in https://github.com/obss/sahi/pull/370

    New Contributors

    • @kadirnar made their first contribution in https://github.com/obss/sahi/pull/322
    • @tureckova made their first contribution in https://github.com/obss/sahi/pull/374

    Full Changelog: https://github.com/obss/sahi/compare/0.8.22...0.9.0

    Source code(tar.gz)
    Source code(zip)
  • 0.8.22(Jan 13, 2022)

    What's Changed

    • fix LSNMS postprocess, handle numpy array index in ObjectPredictionList by @fcakyon in https://github.com/obss/sahi/pull/350
    • fix unused coco merge util by @fcakyon in https://github.com/obss/sahi/pull/353

    Full Changelog: https://github.com/obss/sahi/compare/0.8.20...0.8.22

    Source code(tar.gz)
    Source code(zip)
  • 0.8.20(Jan 11, 2022)

    What's Changed

    • fix typo in print by @fcakyon in https://github.com/obss/sahi/pull/339
    • Update error analysis by @fcakyon in https://github.com/obss/sahi/pull/340
    • fix warning messages in coco analysis by @fcakyon in https://github.com/obss/sahi/pull/341
    • add auto postprocess type to predict by @fcakyon in https://github.com/obss/sahi/pull/342
    • refactor fiftyone utils by @fcakyon in https://github.com/obss/sahi/pull/345

    Full Changelog: https://github.com/obss/sahi/compare/0.8.19...0.8.20

    Source code(tar.gz)
    Source code(zip)
  • 0.8.19(Jan 6, 2022)

  • 0.8.18(Jan 2, 2022)

    What's Changed

    • refactor postprocessing and coco eval for 100x speed up by @fcakyon in https://github.com/obss/sahi/pull/320
    • refactor image_size and model_confidence for faster inference by @fcakyon in https://github.com/obss/sahi/pull/329
    • remove deprecated coco util by @fcakyon in https://github.com/obss/sahi/pull/323
    • fix LSNMSPostprocess by @fcakyon in https://github.com/obss/sahi/pull/330
    • fix rmtree in tests by @fcakyon in https://github.com/obss/sahi/pull/326

    Full Changelog: https://github.com/obss/sahi/compare/0.8.16...0.8.18

    Source code(tar.gz)
    Source code(zip)
  • 0.8.16(Dec 26, 2021)

    What's Changed

    • refactorize model classes, handle invalid polygons, minor improvements by @fcakyon in https://github.com/obss/sahi/pull/311
    • refactor tests and coco utils by @fcakyon in https://github.com/obss/sahi/pull/313
    • fix nms postprocess by @fcakyon in https://github.com/obss/sahi/pull/314
    • update predict verbose by @fcakyon in https://github.com/obss/sahi/pull/317
    • print exports dirs after process finishes by @fcakyon in https://github.com/obss/sahi/pull/318

    Full Changelog: https://github.com/obss/sahi/compare/0.8.15...0.8.16

    Source code(tar.gz)
    Source code(zip)
  • 0.8.15(Dec 15, 2021)

    What's Changed

    • update default params to match coco eval and error analysis by @fcakyon in https://github.com/obss/sahi/pull/306
    • utilize max_detections in coco_error_analysis by @fcakyon in https://github.com/obss/sahi/pull/307
    • reformat coco_error_analysis with black by @fcakyon in https://github.com/obss/sahi/pull/308

    Full Changelog: https://github.com/obss/sahi/compare/0.8.14...0.8.15

    Source code(tar.gz)
    Source code(zip)
Owner

Open Business Software Solutions (Open Source for Open Business)