Computational Pathology Toolbox developed by TIA Centre, University of Warwick.

Overview

TIA Toolbox


Computational Pathology Toolbox developed at the TIA Centre

Getting Started

All Users

This package is for those interested in digital pathology, including graduate students, medical staff, members of the TIA Centre and of PathLAKE, and anyone, anywhere, who may find it useful. We will continue to improve this package, taking account of developments in pathology, microscopy, computing and related disciplines. Please send comments and criticisms to [email protected].

tiatoolbox is a multipurpose name that we use for 1) a certain computer program, 2) a Python package of related programs, created by us at the TIA Centre to help people get started in Digital Pathology, 3) this repository, 4) a certain virtual environment.

Developers

If you want to contribute to this repository, please first look at our Wiki and at our web page for contributors. See also the Prepare for development section of this document.

Links, if needed

The bash shell is available on all commonly encountered platforms. Commands in this README are in bash. Windows users can use the command prompt to install conda and python packages.

conda is a management system for software packages and virtual environments. To get conda, download Anaconda, which includes hundreds of the most useful Python packages and uses 2GB of disk space. Alternatively, Miniconda uses 400MB, and packages can be added as needed.

GitHub is powered by the version control system git, which has many users and uses. On GitHub, git is used to track versions of code and other documents.

Examples Taster

  1. Click here for Jupyter notebooks, hosted on the web, with demos of tiatoolbox. All necessary resources to run the notebooks are provided remotely, so you don't need to have Python installed on your computer.
  2. Click on a filename with suffix .ipynb and the notebook will open in your browser.
  3. Click on one of the two blue checkboxes in your browser window, labelled either Open in Colab or Open in Kaggle: Colab and Kaggle are websites providing free-of-charge platforms for running Jupyter notebooks.
  4. Operate the notebook in your browser, editing, inserting or deleting cells as desired.
  5. Changes you make to the notebook will last no longer than your Colab or Kaggle session.

Install Python package

If you wish to use our programs, perhaps without developing them further, run the command pip install tiatoolbox or pip install --ignore-installed --upgrade tiatoolbox to upgrade from an existing installation. Detailed installation instructions can be found in the documentation.

To understand better how the programs work, study the jupyter notebooks referred to under the heading Examples Taster.
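
For a quick first taste of the Python API, here is a minimal sketch that opens a whole slide image and reads a region. The file path is a placeholder and the available resolutions depend on your slide, so treat this as illustrative rather than a canonical recipe.

    from tiatoolbox.wsicore.wsireader import WSIReader

    # Open a whole slide image (placeholder path).
    wsi = WSIReader.open("sample_slide.svs")
    print(wsi.info.mpp, wsi.info.slide_dimensions)  # basic slide metadata

    # Read a 1000 x 1000 pixel region at 0.5 microns per pixel.
    # The location is given in the baseline (level 0) reference frame.
    region = wsi.read_rect(
        location=(1000, 1000),
        size=(1000, 1000),
        resolution=0.5,
        units="mpp",
    )
    print(region.shape)  # e.g. (1000, 1000, 3) numpy array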

Command Line

tiatoolbox supports various features through the command line. For more information, please try tiatoolbox --help.

Prepare for development

Prepare a computer as a convenient platform for further development of the Python package tiatoolbox and related programs as follows.

  1. Install prerequisite software.
  2. Open a terminal window.
    $ cd <future-home-of-tiatoolbox-directory>
  3. Download a complete copy of the tiatoolbox repository.
    $ git clone https://github.com/TissueImageAnalytics/tiatoolbox.git
  4. Change directory to tiatoolbox.
    $ cd tiatoolbox
  5. Create a virtual environment for TIAToolbox using
    $ conda env create -f requirements.dev.conda.yml # for linux/mac only.
    $ conda activate tiatoolbox-dev

or

    $ conda create -n tiatoolbox-dev python=3.8 # select version of your choice
    $ conda activate tiatoolbox-dev
    $ pip install -r requirements_dev.txt
  6. To use the packages installed in the environment, run the command:
    $ conda activate tiatoolbox-dev
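
A quick way to confirm that the environment is working is a minimal import check from Python (nothing more than a sanity check):

    # Minimal sanity check for the freshly created environment.
    import tiatoolbox
    print(tiatoolbox.__version__)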

License

The source code of TIA Toolbox (tiatoolbox), as hosted on GitHub, is released under the GNU General Public License (Version 3).

The full text of the licence is included in LICENSE.md.

Auxiliary Files

Auxiliary files, such as pre-trained model weights downloaded from the TIA Centre webpage (https://warwick.ac.uk/tia/), are provided under the Creative Commons Attribution-NonCommercial-ShareAlike Version 4 (CC BY-NC-SA 4.0) license.

Dual License

If you would like to use any of the source code or auxiliary files (e.g. pre-trained model weights) under a different license agreement, please contact the Tissue Image Analytics (TIA) Centre at the University of Warwick ([email protected]).

Comments
  • DOC: update example_stainnorm.ipynb

    Introductory markdown cell explaining why normalising stain can be a good idea. Copied several cells from example_wsiread.ipynb. It's a pity that there isn't (or I can't find) any mechanism for sharing cells that are identical in example_wsiread.ipynb and example_stainnorm.ipynb. This makes maintenance more difficult. One of the common cells is an initial cell in bash, banishing error messages and warning messages that previously occurred even when the user used the jupyter notebook correctly.

    I changed "normalize" to "normalise", and similarly for related words like "normalisation", which is the spelling adopted previously in the stainnorm module. I changed an erroneous "objective power x16" to the correct "objective power x20" in one markdown cell. In general, I made the markdown cells somewhat shorter, clearer, more informative and better written.

    Having read a bit about Unit Tests, it doesn't seem to me that there are any appropriate tests, but I would like to see what the robot says about code coverage. Remaining tasks (I need to research these) are:

    1. set up pre-commit hooks
    2. (try to) get Kaggle versions of the notebooks to work as expected.
    documentation 
    opened by DavidBAEpstein 51
  • EG: Example notebook for nuclei instance segmentation

    This PR is dedicated to the HoVer-Net implementation in the TIAToolbox. I have included examples of using HoVer-Net on an image tile and a WSI. I tried to make it clear how a user can load and use the output of the prediction method. I have also shown the contour visualization functionality of the TIAToolbox.

    opened by mostafajahanifar 42
  • [skip travis] EG: Add new example notebook for semantic segmentation functionality

    A new example notebook is added based on the semantic segmentation functionality. In this notebook I tried to show two things: 1) how easy it is to use pretrained segmentation models, and 2) how a user can use their own segmentation model in the process.

    I tried to explain all the concepts as simply as possible and in a coherent way.

    TODO:

    • [x] Need to be checked on a local machine.
    • [x] Need to select a small breast cancer sample for WSI prediction part.
    documentation 
    opened by mostafajahanifar 35
  • [skip ci] DOC: DFBR Jupyter Notebook

    Notebook to show how to perform WSI registration using the DFBR method, followed by the BSpline approach.

    Readthedocs link: https://tia-toolbox.readthedocs.io/en/dfbr_notebook/_notebooks/jnb/10-wsi_registration.html

    documentation 
    opened by ruqayya 23
  • Add recipe for conda

    • TIA Toolbox version: N.A.
    • Python version: N.A.
    • Operating System: N.A.

    Description

    It would be great if this was available on conda in addition to pip.

    What I Did

    N.A.

    dev tools 
    opened by sarthakpati 20
  • EXAMPLE: Add slide info notebook

    Just as the title says. It consists of loading a WSI and sample code for extracting patches using scipy or the functionality provided by tiatoolbox, together with a more descriptive explanation.

    opened by vqdang 20
  • DEV: Add Pre-Commit Hook To Check Requirements Consistency

    This adds a ~test~ pre-commit hook which checks that requirements files are consistent. It currently checks several criteria:

    For main/dev pairs (e.g. requirements.txt and requirements_dev.txt):

    1. Any key present in main MUST also be in dev.
    2. The version in main MUST match the version in dev.
    3. The constraint in main MUST match the constraint in dev.

    For all requirements files (pip and conda), if the key is in AT LEAST TWO files:

    1. The version MUST match across files with the requirement.
    2. The constraint MUST match across files with the requirement.

    It also prints out helpful errors such as:

    AssertionError: shapely has different constraints: None (requirements.conda.yml) vs >=1.8.1 (requirements.dev.conda.yml).

    AssertionError: imagecodecs has inconsistent constraints: >=2021.11.20 (requirements.txt), >=2021.11.20 (requirements_dev.txt), >2021.11.20 (requirements.conda.yml), >2021.11.20 (requirements.dev.conda.yml).
    

    I have also updated the constraints for the files based on these checks in this PR. I would like to turn this into a pre-commit hook (in addition to this or instead of?), but this is more of a proof of concept and feedback PR.

    There is also a special case for setup.py. It is loaded differently but otherwise treated like a pip requirements file.

    question dev tools dependencies 
    opened by John-P 19
  • [skip travis] EG: add example notebook on advanced model techniques

    Adding a new example notebook on a semantic segmentation (more generally, model prediction) task where advanced topics are addressed. In particular, in this notebook we are trying to demonstrate how a user can use Tiatoolbox to solve problems in the following scenarios:

    • [x] 1. Instead of the pretrained models embedded in Tiatoolbox's repository, the user wants to use their own deep learning model (in PyTorch) in the Tiatoolbox prediction workflow.

    • [x] 2. The user's input data is of an exotic form which the Tiatoolbox data stream functionality does not support by default.

    documentation 
    opened by mostafajahanifar 19
  • ENH: Update the patch prediction example notebook based on new API

    As the API has changed in the recently merged CNNPatchPredictor class, I have updated the related example notebook to cover the changes and added some explanation about the new merge_predictions and overlay_patch_prediction functions.

    opened by mostafajahanifar 17
  • DOC: Add examples of read_rect and read_bounds in wsi_reader notebook

    Added examples for read_rect and read_bounds. There's a bug we spotted in the read_region method, so we didn't add it. The method fails on every "level" argument.

    documentation 
    opened by Srijay-lab 16
  • Image Resolution wrongly read in TIFFWSIReader

    • TIA Toolbox version: 1.2.1
    • Python version: 3.8
    • Operating System:

    Description

    According to the TIFF specification, the tags XResolution and YResolution are of type rational, meaning that we should compute the fraction. In the code:

    https://github.com/TissueImageAnalytics/tiatoolbox/blob/b3ace851ac61cbeee1f382b33925d8a9b0a1be55/tiatoolbox/wsicore/wsireader.py#L3245-L3248

    we are only passing the first element. Instead we should pass res_x.value[0]/res_x.value[1] and res_y.value[0]/res_y.value[1].
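
    As a rough illustration of the intended computation (not the toolbox's own code, and the file path is hypothetical), the fraction can be taken directly from the rational tag values returned by tifffile:

        import tifffile

        # Open a (hypothetical) TIFF/OME-TIFF file and inspect its first page.
        with tifffile.TiffFile("sample.ome.tiff") as tiff:
            page = tiff.pages[0]
            # XResolution/YResolution are RATIONAL tags: (numerator, denominator).
            x_num, x_den = page.tags["XResolution"].value
            y_num, y_den = page.tags["YResolution"].value
            # Pixels per ResolutionUnit, i.e. the fraction, not just the numerator.
            x_res = x_num / x_den
            y_res = y_num / y_den
            # If ResolutionUnit is centimetre, microns per pixel = 10000 / resolution.
            print(x_res, y_res)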

    bug 
    opened by rogertrullo 15
  • How to implement custom HoVer-Net Model in TIA ToolBox to segment nuclei?

    • TIA Toolbox version: 1.3.0
    • Python version: 3.9.12
    • Operating System: jupyter notebook on linux hpc cluster

    Description

    I used the 08-nucleus-instance-segmentation.ipynb notebook to segment nuclei. My dataset looks slightly different from the pannuke dataset, so I used this tutorial to augment the pannuke dataset so that it looks like mine and trained a new HoVer-Net model. The output I got is a .pt file.

    Now I want to use my custom HoVer-Net model in the TIA ToolBox to segment nuclei.

    How do I do that? Does anybody know, or have a link to an explanation/tutorial?
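
    A rough sketch of one way to approach this (not an official answer: the model name and file paths are placeholders, and the constructor arguments should be checked against the documentation). The segmentation engines accept a pretrained_weights path which overrides the downloaded weights of a built-in architecture:

        from tiatoolbox.models.engine.nucleus_instance_segmentor import NucleusInstanceSegmentor

        # Use a built-in HoVer-Net architecture but load custom weights (.pt file).
        segmentor = NucleusInstanceSegmentor(
            pretrained_model="hovernet_fast-pannuke",    # architecture matching the training setup
            pretrained_weights="my_custom_hovernet.pt",  # path to the custom weights
        )

        # Run on a whole slide image, mirroring the predict call in the notebook.
        output = segmentor.predict(
            ["my_slide.svs"],
            save_dir="custom_hovernet_output/",
            mode="wsi",
            on_gpu=False,  # set True if a GPU is available
            crash_on_exception=True,
        )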

    opened by WilliWespe 0
  • :construction_worker: DEV: Add Notebook URL Replacement Hook

    This PR adds a pre-commit hook which will replace URLs in notebook files to ensure they point to the current GitHub rev (branch name). Patterns which are replaced are:

    • pip install tiatoolbox (e.g. pip install tiatoolbox -> pip install https://github.com/TissueImageAnalytics/[email protected])
    dev tools 
    opened by John-P 2
  • 🚨  ENH: Expand Tile Server API

    This draft PR adds to the tileserver a variety of endpoints required by the visualization interface for changing slide, changing overlays, and controlling various aspects of how rendered annotations are displayed.

    Provided routes are:

    "/tileserver/setup": setup user id upon a new connection
    "/tileserver/change_color_prop/<prop>": change the property used to colour annotations by
    "/tileserver/change_slide/<layer>/<layer_path>":  change the slide as specified in the path
    "/tileserver/change_cmap/<cmap>": change colourmap
    "/tileserver/load_annotations/<file_path>/<float:model_mpp>": load some annotations, adding to existing if present
    "/tileserver/change_overlay/<overlay_path>": change or add an overlay. If provided a path to a file containing annotations, current annotation overlay layer is replaced. Image-based overlays are added as new layers.
    "/tileserver/commit/<save_path>": save the current overlay database
    "/tileserver/update_renderer/<prop>/<val>": generic updater for renderer props that don't need specific endpoint to handle them
    "/tileserver/change_secondary_cmap/<type>/<prop>/<cmap>": specify a colormap override for one specific type
    "/tileserver/get_props": get all unique properties that appear on annotations in the store
    "/tileserver/reset": reset the server
    
    opened by measty 3
  • Get ValueError: Unsupported axes `YX` when using OME-TIFF for nuclear segmentation

    • TIA Toolbox version: 1.3.0
    • Python version: 3.7
    • Operating System: Cent OS

    Description

    When I try to perform nuclear segmentation using an OME-TIFF file, I get the following error:

    ValueError: Unsupported axes YX.

    What I Did

    wsi_output = objInstSegmentor.predict(
        ['TestSet_ROI_6291.ome.tiff'], 
        ioconfig=objIOConfig, 
        masks=None, 
        save_dir=f'WSITest/', 
        mode="wsi", 
        on_gpu=ON_GPU, 
        crash_on_exception=True, 
    )
    
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    /local/55164614/ipykernel_10371/3581917802.py in <module>
         54     mode="wsi",
         55     on_gpu=ON_GPU,
    ---> 56     crash_on_exception=True,
         57 )
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/models/engine/semantic_segmentor.py in predict(self, imgs, masks, mode, on_gpu, ioconfig, patch_input_shape, patch_output_shape, stride_shape, resolution, units, save_dir, crash_on_exception)
       1326         for wsi_idx, img_path in enumerate(imgs):
       1327             self._predict_wsi_handle_exception(
    -> 1328                 imgs, wsi_idx, img_path, mode, ioconfig, save_dir, crash_on_exception
       1329             )
       1330 
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/models/engine/semantic_segmentor.py in _predict_wsi_handle_exception(self, imgs, wsi_idx, img_path, mode, ioconfig, save_dir, crash_on_exception)
       1177             wsi_save_path = save_dir.joinpath(f"{wsi_idx}")
       1178             if crash_on_exception:
    -> 1179                 raise err
       1180             logging.error("Crashed on %s", wsi_save_path)
       1181 
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/models/engine/semantic_segmentor.py in _predict_wsi_handle_exception(self, imgs, wsi_idx, img_path, mode, ioconfig, save_dir, crash_on_exception)
       1153         try:
       1154             wsi_save_path = save_dir.joinpath(f"{wsi_idx}")
    -> 1155             self._predict_one_wsi(wsi_idx, ioconfig, str(wsi_save_path), mode)
       1156 
       1157             # Do not use dict with file name as key, because it can be
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/models/engine/nucleus_instance_segmentor.py in _predict_one_wsi(self, wsi_idx, ioconfig, save_path, mode)
        632         mask_path = None if self.masks is None else self.masks[wsi_idx]
        633         wsi_reader, mask_reader = self.get_reader(
    --> 634             wsi_path, mask_path, mode, self.auto_generate_mask
        635         )
        636 
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/models/engine/semantic_segmentor.py in get_reader(img_path, mask_path, mode, auto_get_mask)
        670         """Define how to get reader for mask and source image."""
        671         img_path = pathlib.Path(img_path)
    --> 672         reader = WSIReader.open(img_path)
        673 
        674         mask_reader = None
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/wsicore/wsireader.py in open(input_img, mpp, power)
        226 
        227         if suffixes[-2:] in ([".ome", ".tiff"],):
    --> 228             return TIFFWSIReader(input_path, mpp=mpp, power=power)
        229 
        230         if last_suffix in (".tif", ".tiff") and is_tiled_tiff(input_path):
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/wsicore/wsireader.py in __init__(self, input_img, mpp, power, series, cache_size)
       3055                 return np.prod(self._canonical_shape(page.shape)[:2])
       3056 
    -> 3057             series_areas = [page_area(s.pages[0]) for s in all_series]  # skipcq
       3058             self.series_n = np.argmax(series_areas)
       3059         self._tiff_series = self.tiff.series[self.series_n]
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/wsicore/wsireader.py in <listcomp>(.0)
       3055                 return np.prod(self._canonical_shape(page.shape)[:2])
       3056 
    -> 3057             series_areas = [page_area(s.pages[0]) for s in all_series]  # skipcq
       3058             self.series_n = np.argmax(series_areas)
       3059         self._tiff_series = self.tiff.series[self.series_n]
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/wsicore/wsireader.py in page_area(page)
       3053             def page_area(page: tifffile.TiffPage) -> float:
       3054                 """Calculate the area of a page."""
    -> 3055                 return np.prod(self._canonical_shape(page.shape)[:2])
       3056 
       3057             series_areas = [page_area(s.pages[0]) for s in all_series]  # skipcq
    
    ~/.conda/envs/single_cell/lib/python3.7/site-packages/tiatoolbox/wsicore/wsireader.py in _canonical_shape(self, shape)
       3088         if self._axes == "SYX":
       3089             return np.roll(shape, -1)
    -> 3090         raise ValueError(f"Unsupported axes `{self._axes}`.")
       3091 
       3092     def _parse_svs_metadata(self) -> dict:
    
    ValueError: Unsupported axes `YX`.
    
    opened by Calebium 2
Releases(v1.3.1)
  • v1.3.1(Dec 20, 2022)

    Major Updates and Feature Improvements

    • Adds NuClick architecture #449
    • Adds Annotation Store Reader #476
    • Adds DFBR method for registering pair of images #510

    Changes to API

    • Adds a sample SVS loading function tiatoolbox.data.small_svs() to the data module #517

    Bug Fixes and Other Changes

    • Simplifies example notebook for image reading for better readability
    • Restricts Shapely version to <2.0.0 for compatibility

    Development related changes

    • Adds GitHub workflow for automatic generation of docker image
    • Updates dependencies
    • Updates bump2version config
    • Enables flake8 E800 checks for commented codes.
    • Fixes several errors generated by DeepSource.
    • Prevent test dumping file to root
    • Removes duplicate functions to generate parameterized test scenarios

    Note: Python 3.7 support will be removed after this release. We plan to add support for Python 3.11 in the next release.

  • v1.3.0(Oct 21, 2022)

    Major Updates and Feature Improvements

    • Adds an AnnotationTileGenerator and AnnotationRenderer which allows serving of tiles rendered directly from an annotation store.
    • Adds DFBR registration model and Jupyter notebook example
      • Adds DICE metric
    • Adds SCCNN architecture. [read the docs]
    • Adds MapDe architecture. [read the docs]
    • Adds support for reading MPP metadata from NGFF v0.4
    • Adds enhancements to tiatoolbox.annotation.storage that are useful when using an AnnotationStore for visualization purposes.

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Fixes colorbar_params #410
    • Fixes Jupyter notebooks for better read the docs rendering
      • Fixes typos, metadata and links
    • Fixes nucleus_segmentor_engine for boundary artefacts
    • Fixes the colorbar cropping in tests
    • Adds citation in README.md and CITATION.cff to Nature Communications Medicine paper
    • Fixes a bug #452 raised by @rogertrullo where only the numerator of the TIFF resolution tags was being read.
    • Fixes HoVer-Net+ post-processing to be inline with original work.
    • Fixes a bug where an exception would be raised if the OME XML is missing objective power.

    Development related changes

    • Uses Furo theme for readthedocs
    • Replaces nbgallery and nbsphinx with myst-nb for jupyter notebook rendering
    • Uses myst for markdown parsing
    • Uses requirements.txt to define dependencies for requirements consistency
    • Adds notebook AST pre-commit hook
    • Adds check to validate python examples in the code
    • Adds check to resolve imports
    • Fixes an error in a docstring which triggered the failing test.
    • Adds pre-commit hooks to format markdown and notebook markdown
    • Adds pip install workflow to resolve dependencies when requirements file is updated
    • Improves TIAToolbox import using LazyLoader
  • v1.2.1(Jul 7, 2022)

    Major Updates and Feature Improvements

    • None

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Fixes issues with dependencies
      • Adds flask to dependencies.
    • Fixes missing file in the python package
    • Clarifies help string for show-wsi option

    Development related changes

    • Removes Travis CI
      • GitHub Actions will be used instead.
    • Adds pre-commit hooks to check requirements consistency.
    • Adds GitHub Action to resolve conda environment checks on Windows and Ubuntu.
  • v1.2.0(Jul 5, 2022)

    1.2.0 (2022-07-05)

    Major Updates and Feature Improvements

    • Adds support for Python 3.10
    • Adds short description for IDARS algorithm #383
    • Adds support for NGFF v0.4 OME-ZARR.
    • Adds CLI for launching tile server.

    Changes to API

    • Renames stainnorm_target() function to stain_norm_target().
    • Removes get_wsireader
    • Replaces the custom PlattScaler in tools/scale.py with the regular Scikit-Learn LogisticRegression.

    Bug Fixes and Other Changes

    • Fixes bugs in UNET architecture.
      • The number of channels in the BatchNorm argument in the decoding path now matches the input channels.
      • Padding of 0 creates feature maps in the decoder with the same size as in the encoder.
    • Fixes linter issues and typos
    • Fixes incorrect output with overlap in predictor.merge_predictions() and return_raw=True
      • Thanks to @paulhacosta for raising #356, Fixed by #358.
    • Fixes errors with JP2 read. Checks input path exists.
    • Fixes errors with torch upgrade to 1.12.

    Development related changes

    • Adds pre-commit hooks for consistency across the repo.
    • Sets up GitHub Actions Workflow.
      • Travis CI will be removed in future release.
  • v1.1.0(May 7, 2022)

    • Updates AUTHORS.md

    Major Updates and Feature Improvements

    • Adds DICOM support.
    • Updates license to more permissive BSD 3-clause.
    • Adds MicroNet model.
    • Improves support for tiff files.
      • Adds a check for tiles in a TIFF file when opening.
      • Uses OpenSlide to read a TIFF if it has tiles instead of OpenCV (VirtualWSIReader).
      • Adds a fallback to tifffile if it is tiled but openslide cannot read it (e.g. jp2k or jpegxl tiles).
    • Adds support for multi-channel images (HxWxC).
    • Fixes performance issues in semantic_segmentor.py.
      • Performance gain measurement: 21.67s (new) vs 45.564 (old) using a 4k x 4k WSI.
      • External Contribution: @ByteHexler.
    • Adds benchmark for Annotations Store.

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Enhances the error messages to be more informative.
    • Fixes Flake8 Errors, typos.
      • Fixes patch predictor models after fixing a typo.
    • Bug fixes in Graph functions.
    • Adds documentation for docker support.
    • General tidying up of docstrings.
    • Adds metrics to readthedocs/docstrings for pretrained models.

    Development related changes

    • Adds pydicom and wsidicom as dependency.
    • Updates dependencies.
    • Fixes Travis detection and makes improvements to run tests faster on Travis.
    • Adds Dependabot to automatically update dependencies.
    • Improves CLI definitions to make it easier to integrate new functions.
    • Fixes compile options for test_annotation_stores.py
  • v1.0.1(Jan 31, 2022)

    Major Updates and Feature Improvements

    • Updates dependencies for conda recipe #262
      • External Contribution : @sarthakpati

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Adds User Warning For Missing SQLite Functions
    • Fixes Pixman version check errors
    • Fixes empty query in instance segmentor

    Development related changes

    • Fixes flake8 linting issues and typos
    • Conditional pytest.skipif to skip GPU tests on travis while running them locally or elsewhere
  • v1.0.0(Dec 23, 2021)

    Major Updates and Feature Improvements

    • Adds nucleus instance segmentation base class
    • Adds multi-task segmentor HoVerNet+ model
    • Adds IDaRS pipeline
    • Adds SlideGraph pipeline
    • Adds PCam patch classification models
    • Adds support for stain augmentation feature
    • Adds classes and functions under tiatoolbox.tools.graph to enable construction of graphs in a format which can be used with PyG (PyTorch Geometric).
    • Adds classes which act as a mutable mapping (dictionary-like) structure and enable efficient management of annotations (#135); see the illustrative sketch after this list.
    • Adds example notebook for adding advanced models
    • Adds classes which can generate zoomify tiles from a WSIReader object.
    • Adds WSI viewer using Zoomify/WSIReader API (#212)
    • Adds README to example page for clarity
    • Adds support to override or specify mpp and power
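
    A minimal, illustrative sketch of the annotation store mentioned above (names follow the tiatoolbox.annotation.storage module; check the documentation for exact signatures):

        from shapely.geometry import Polygon
        from tiatoolbox.annotation.storage import Annotation, SQLiteStore

        # An in-memory store by default; passing a file path gives a persistent database.
        store = SQLiteStore()

        # Annotations pair a Shapely geometry with a properties dictionary.
        cell = Annotation(
            Polygon([(0, 0), (0, 10), (10, 10), (10, 0)]),
            properties={"type": "epithelial"},
        )
        key = store.append(cell)

        # Spatial query over a bounding box (left, top, right, bottom).
        results = store.query((0, 0, 100, 100))
        print(len(results), "annotation(s) found")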

    Changes to API

    • Replaces models.controller API with models.engine
    • Replaces CNNPatchPredictor with PatchPredictor

    Bug Fixes and Other Changes

    • Fixes filter_coordinates reading wrong resolutions for patch extraction
    • For PatchPredictor
      • ioconfig will supersede everything
      • if ioconfig is not provided
        • If model is pretrained (defined in pretrained_model.yaml )
          • Use the yaml ioconfig
          • Any other input patch reading arguments will overwrite the yaml ioconfig (at the same keyword).
        • If the model is not defined, all input patch reading arguments must be provided, otherwise an exception will be thrown.
    • Improves performance of mask based patch extraction

    Development related changes

    • Improve tests performance for Travis runs
    • Adds feature detection mechanism to detect the platform and installed packages etc.
    • On demand imports for some libraries for performance
    • Improves performance of mask based patch extraction
  • v0.8.0(Oct 28, 2021)

    Major Updates and Feature Improvements

    • Adds SemanticSegmentor which is Predictor equivalent for semantic segmentation.
    • Add TIFFWSIReader class to support OMETiff reading.
    • Adds FeatureExtractor API to controller.
    • Adds WSI Serialization Dataset which supports changing parallel workers on the fly. This reduces the time spent creating a new worker for every WSI/tile (costly).
    • Adds IOState data class to contain IO information for loading input to model and assembling model output back to WSI/Tile.
    • Minor updates for get_coordinates to pave the way for getting patch IO for segmentation.
    • Migrates old code to new variable names (patch extraction, patch wsi model).
    • Change in API from pretrained_weight to pretrained_weights.
    • Adds cli for semantic segmentation.
    • Update python notebooks to add read_rect and read_bounds examples with mpp read.

    Changes to API

    • Adds WSIReader.open. get_wsireader will be deprecated in the next release. Please use WSIReader.open instead.
    • CLI is now POSIX compatible
      • Replaces underscores in variable names with hyphens
    • Models API updated to use pretrained_weights instead of pretrained_weight.
    • Move string_to_tuple to tiatoolbox/utils/misc.py

    Bug Fixes and Other Changes

    • Fixes README git clone instructions.
    • Fixes stain normalisation due to changes in sklearn.
    • Fixes a test in tests/test_slide_info
    • Fixes readthedocs documentation issues

    Development related changes

    • Adds dependencies for tifffile, imagecodecs, zarr.
    • Adds more stringent pre-commit checks
    • Moved local test files into tiatoolbox/data.
    • Fixed Manifest.ini and added tiatoolbox/data. This means that this directory will be downloaded with the package.
    • Using pkg_resources to properly load bundled resources (e.g. target_image.png) in tiatoolbox.data.
    • Removed duplicate code in conftest.py for downloading remote files. This is now in tiatoolbox.data._fetch_remote_file.
    • Fixes errors raised by new flake8 rules.
      • Remove leading underscores from fixtures.
    • Rename some remote sample files to make more sense.
    • Moves all cli commands/options from cli.py to cli_commands to make it clean and easier to add new commands
    • Removes redundant tests
    • Updates to new GitHub organisation name in the repo
      • Fixes related links
  • v0.7.0(Sep 16, 2021)

    Major and Feature Improvements

    • Drops support for python 3.6
    • Update minimum requirement to python 3.7
    • Adds support for python 3.9
    • Adds models base to the repository. Currently, PyTorch models are supported. New custom models can be added. The tiatoolbox also supports using custom weights to pre-existing built-in models.
      • Adds classification package and CNNPatchPredictor which takes predefined model architecture and pre-trained weights as input. The pre-trained weights for classification using kather100k data set is automatically downloaded if no weights are provided as input.
    • Adds mask-based patch extraction functionality to extract patches based on the regions that are highlighted in the input_mask. If 'auto' option is selected, a tissue mask is automatically generated for the input_image using tiatoolbox TissueMasker functionality.
    • Adds visualisation module to overlay the results of an algorithm.

    Changes to API

    • Command line interface for stain normalisation can be called using the keyword stain-norm instead of stainnorm
    • Replaces FixedWindowPatchExtractor with SlidingWindowPatchExtractor .
    • get_patchextractor takes the slidingwindow as an argument.
    • Deprecates VariableWindowPatchExtractor

    Bug Fixes and Other Changes

    • Significantly improved python notebook documentation for clarity, consistency and ease of use for non-experts.
    • Adds detailed installation instructions for Windows, Linux and Mac

    Development related changes

    • Moves flake8 above pytest in the travis.yml script stage.
    • Adds set -e at the start of the script stage in travis.yml to cause it to exit on error and (hopefully) not run later parts of the stage.
    • Readthedocs related changes
      • Uses requirements.txt in .readthedocs.yml
      • Uses apt-get installation for openjpeg and openslide
      • Removes conda build on readthedocs build
    • Adds extra checks to pre-commit, e.g., import sorting, spellcheck etc. Detailed list can be found on this commit.
  • v0.6.0(May 11, 2021)

    Major and Feature Improvements

    • Add TissueMasker class to allow tissue masking using Otsu and Morphological processing.
    • Add helper/convenience method to WSIReader(s) to produce a mask. Add reader object to allow reading a mask conveniently as if it were a WSI i.e., use same location and resolution to read tissue area and mask area.
    • Add PointsPatchExtractor returns patches that can be used by classification models. Takes csv, json or pd.DataFrame and returns patches corresponding to each pixel location.
    • Add feature FixedWindowPatchExtractor to run sliding window deep learning algorithms.
    • Add example notebooks for patch extraction and tissue masking.
    • Update readme with improved instructions to use the toolbox. Make the README file somewhat more comprehensible to beginners, particularly those with not much background or experience.

    Changes to API

    • tiatoolbox.dataloader replaced by tiatoolbox.wsicore

    Bug Fixes and Other Changes

    • Minor bug fixes

    Development-related changes

    • Improve unit test coverage.
    • Move test data to tiatoolbox server.
  • v0.5.2(Mar 16, 2021)

    Bug Fixes and Other Changes

    • Fix URL for downloading test JP2 image in test config (conftest.py) and notebooks.
    • Update readme with new logo.
  • v0.5.1(Dec 31, 2020)

  • v0.5.0(Dec 30, 2020)

    Major and Feature Improvements

    • Adds get_wsireader() to return appropriate WSIReader.
    • Adds new functions to allow reading of regions using WSIReader at different resolutions given in units of:
      • microns per-pixel (mpp)
      • objective lens power (power)
      • pixels-per baseline (baseline)
      • resolution level (level)
    • Adds the functions read_bounds and read_rect for reading regions.
      • read_bounds takes a tuple (left, top, right, bottom) of coordinates in baseline (level 0) reference frame and returns a region bounded by those.
      • read_rect takes one coordinate in baseline reference frame and an output size in pixels.
    • Adds VirtualWSIReader as a subclass of WSIReader which can be used to read visual fields (tiles).
      • VirtualWSIReader accepts ndarray or image path as input.
    • Adds MPP fall back to standard TIFF resolution tags with warning.
      • If OpenSlide cannot determine microns per pixel (mpp) from the metadata, the TIFF resolution units (TIFF tags: ResolutionUnit, XResolution and YResolution) are checked to calculate MPP. Additionally, adds a function to estimate missing objective power if MPP is known or derived from TIFF resolution tags.
    • Estimates missing objective power from MPP with warning.
    • Adds example notebooks for stain normalisation and WSI reader.
    • Adds caching to the slide info property. This is done by checking if a private self._m_info exists and returning it if so; otherwise self._info is called to create the info for the first time (or to force regeneration) and the result is assigned to self._m_info. This could in future be made much simpler with the functools.cached_property decorator in Python 3.8+ (see the sketch after this list).
    • Adds pre processing step to stain normalisation where stain matrix encodes colour information from tissue region only.
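
    For reference, a tiny sketch of the simpler caching pattern mentioned above, using only the standard library (illustrative; not the toolbox's actual code):

        from functools import cached_property

        class SlideInfoExample:
            @cached_property
            def info(self):
                # Computed once on first access, then cached on the instance.
                return self._compute_info()

            def _compute_info(self):
                return {"mpp": 0.5}  # placeholder for an expensive metadata parse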

    Changes to API

    • read_region refactored to be backwards compatible with openslide arguments.
    • slide_info changed to info
    • Updates WSIReader which only takes one input
    • WSIReader input_path variable changed to input_img
    • Adds tile_read_size, tile_objective_value and output_dir to WSIReader.save_tiles()
    • Adds tile_read_size as a tuple
    • transforms.imresize takes additional arguments output_size and interpolation method 'optimise' which selects cv2.INTER_AREA for scale_factor<1 and cv2.INTER_CUBIC for scale_factor>1

    Bug Fixes and Other Changes

    • Refactors glymur code to use index slicing instead of deprecated read function.
    • Refactors thumbnail code to use read_bounds and be a member of the WSIReader base class.
    • Updates README.md to clarify installation instructions.
    • Fixes slide_info.py for changes in WSIReader API.
    • Fixes save_tiles.py for changes in WSIReader API.
    • Updates example_wsiread.ipynb to reflect the changes in WSIReader.
    • Adds Google Colab and Kaggle links to allow user to run notebooks directly on colab or kaggle.
    • Fixes a bug in taking directory input for stainnorm operation for command line interface.
    • Pins numpy<=1.19.3 to avoid compatibility issues with opencv.
    • Adds scikit-image or jupyterlab as a dependency.

    Development related changes

    • Moved test_wsireader_jp2_save_tiles to test_wsireader.py.
    • Change recipe in Makefile for coverage to use pytest-cov instead of coverage.
    • Runs travis only on PR.
    • Adds pre-commit for easy setup of client-side git hooks for black code formatting and flake8 linting.
    • Adds flake8-bugbear to pre-commit for catching potential deepsource errors.
    • Adds constants for test regions in test_wsireader.py.
    • Rearranges usage.rst for better readability.
    • Adds pre-commit, flake8, flake8-bugbear, black, pytest-cov and recommonmark as dependency.

    Co-authored-by: Shan E Ahmed Raza @shaneahmed, John Pocock @John-P, Simon Graham @simongraham, Dang Vu @vqdang, Mostafa Jahanifar @mostafajahanifar Srijay Deshpande @Srijay-lab, Saad Bashir @rajasaad

  • v0.4.0(Oct 25, 2020)

    Major and Feature Improvements

    • Adds OpenSlideWSIReader to read Openslide image formats
    • Adds support to read Omnyx jp2 images using OmnyxJP2WSIReader.
    • New feature added to perform stain normalisation using the Ruifrok, Reinhard, Vahadane and Macenko methods, and using custom stain matrices.
    • Adds example notebook to read whole slide images via tiatoolbox.
    • Adds WSIMeta class to save metadata for whole slide images. WSIMeta casts properties to Python types. Properties from OpenSlide are returned as strings; raw values can always be accessed via slide.raw. Adds data validation, e.g., checking that level_count matches the length of level_dimensions and level_downsamples. Adds type hints to WSIMeta.
    • Adds exceptions FileNotSupported and MethodNotSupported

    Changes to API

    • Restructures WSIReader as parent class to allow support to read whole slide images in other formats.
    • Adds slide_info as a property of WSIReader
    • Updates slide_info type to WSIMeta from dict
    • Deprecates support for multiprocessing from within the toolbox. The toolbox is focused on processing single whole slide and standard images. External libraries can be used to run multiprocessing on multiple files.

    Bug Fixes and Other Changes

    • Adds scikit-learn, glymur as a dependency
    • Adds licence information
    • Removes pathos as a dependency
    • Updates openslide-python requirement to 1.1.2

    Co-authored-by: Shan E Ahmed @shaneahmed, John Pocock @John-P, Simon Graham @simongraham, Dang Vu @vqdang, Srijay Deshpande @Srijay-lab, Saad Bashir @rajasaad

  • v0.3.0(Jul 18, 2020)

    Major and Feature Improvements

    • Adds feature read_region to read a small region from whole slide images
    • Adds feature save_tiles to save image tiles from whole slide images
    • Adds feature imresize to resize images
    • Adds feature transforms.background_composite to avoid creation of black tiles from whole slide images.

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Adds pandas as dependency
  • v0.2.2(Jul 12, 2020)

    Major and Feature Improvements

    • None

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Fix command line interface for slide-info feature and travis pypi deployment
  • v0.2.1(Jul 10, 2020)

  • v0.2.0.0(Jul 10, 2020)

    Major and Feature Improvements

    • Adds feature slide_info to read whole slide images and display meta data information
    • Adds multiprocessing decorator TIAMultiProcess to allow running toolbox functions using multiprocessing.

    Changes to API

    • None

    Bug Fixes and Other Changes

    • Adds Sphinx Readthedocs support https://readthedocs.org/projects/tia-toolbox/ for stable and develop branches
    • Adds code coverage tools to test the pytest coverage of the package
    • Adds deepsource integration to highlight and fix bug risks, performance issues etc.
    • Adds README to allow users to setup the environment.
    • Adds conda and pip requirements instructions
    • Updates setup.cfg to use double quotes for bump2version patterns.
  • v0.1.1(Jun 19, 2020)
