Deep Learning segmentation suite designed for 2D microscopy image segmentation

Overview


This repository provides researchers with code to try different encoder-decoder configurations for the binary segmentation of 2D images in a video. It offers regular 2D U-Net variants and recurrent approaches that combine a ConvLSTM with the encoder-decoder.

Citation

If you find this code useful for your research, please cite the corresponding preprint:

Estibaliz Gómez-de-Mariscal, Hasini Jayatilaka, Özgün Çiçek, Thomas Brox, Denis Wirtz, Arrate Muñoz-Barrutia, Search for temporal cell segmentation robustness in phase-contrast microscopy videos, arXiv 2021 (arXiv:2112.08817).

@misc{gómezdemariscal2021search,
      title={Search for temporal cell segmentation robustness in phase-contrast microscopy videos}, 
      author={Estibaliz Gómez-de-Mariscal and Hasini Jayatilaka and Özgün Çiçek and Thomas Brox and Denis Wirtz and Arrate Muñoz-Barrutia},
      year={2021},
      eprint={2112.08817},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Quick guide

Installation

Clone this repository and install all the required libraries:

git clone https://github.com/esgomezm/microscopy-dl-suite-tf
pip3 install -r microscopy-dl-suite-tf/dl-suite/requirements.txt

Download or place your data in an accessible directory

Download the example data from Zenodo. Place the training, validation and test data in three independent folders. Each of them should contain an inputs and a labels folder. For 2D images, the images should be named raw_000.tif and instance_ids_000.tif for the input and ground truth, respectively. If the ground truth is given as videos, the inputs and labels should have the same name.
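For reference, the expected layout of one of these three folders looks like the following sketch (the second pair of file names only illustrates the numbering):

train/
├── inputs/
│   ├── raw_000.tif
│   └── raw_001.tif
└── labels/
    ├── instance_ids_000.tif
    └── instance_ids_001.tif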

Create a configuration .json file with all the information for the model architecture and training.

Check out some examples of configuration files. You will need to update the paths to the training, validation and test datasets. All the details for this file are given here.
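As an orientation, a minimal configuration sketch is shown below. The field names and example values are taken from the parameter table in the Technicalities section; the exact set of required fields may differ, so start from the provided example files:

{
    "cnn_name": "mobilenet_mobileunet_lstm",
    "OUTPUTPATH": "externaldata_cce_weighted_001",
    "TRAINPATH": "/data/train/stack2im",
    "VALPATH": "/data/val/stack2im",
    "TESTPATH": "/data/test/stack2im",
    "model_n_filters": 32,
    "model_pools": 3,
    "model_kernel_size": [3, 3],
    "model_lr": 0.01,
    "model_time_windows": 5,
    "model_lossfunction": "sparse_cce",
    "train_max_epochs": 1000,
    "datagen_batch_size": 5,
    "datagen_dim_size": [512, 512]
}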

Run model training

Run the file train.py, indicating the path to the configuration JSON that contains all the information. This script will also test the model with the images provided in the "TESTPATH" field of the configuration file.

python microscopy-dl-suite-tf/dl-suite/train.py 'microscopy-dl-suite-tf/examples/config/config_mobilenet_lstm_5.json' 

Run model testing

If you only want to run the test step, you can do so with test.py:

python microscopy-dl-suite-tf/dl-suite/test.py 'microscopy-dl-suite-tf/examples/config/config_mobilenet_lstm_5.json' 

Cell tracking from the instance segmentations

Videos with instance segmentations can be easily tracked with TrackMate, which supports cell splitting, merging, and gap filling, making it suitable for this task.

The cells in our 2D videos exit and enter the focal plane, so we fill part of the gaps caused by these irregularities. We apply a Gaussian filter along the time axis of the segmented output images and merge the filtered result with the output masks of the model as follows: all binary masks are preserved, and the positive values of the filtered image are included as additional masks. Objects smaller than 100 pixels are discarded.

This processing is contained in the file tracking.py, in the section called Process the binary images and create instance segmentations. TrackMate outputs a new video with the track information given as uniquely labelled cells. This information can then be merged with the original segmentation (without the temporal interpolation) using the code section Merge tracks and segmentations.
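A minimal sketch of the gap-filling step described above (not the exact code in tracking.py; the sigma value is an assumed placeholder, while the 100-pixel threshold comes from the description):

import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.morphology import remove_small_objects

def fill_temporal_gaps(masks, sigma=1.0, min_size=100):
    # masks: binary segmentation video, shape (time, height, width).
    # sigma is a placeholder; tune it to the frame rate of your videos.
    filtered = gaussian_filter1d(masks.astype(np.float32), sigma=sigma, axis=0)
    # Keep every original binary mask and add the positive filtered values.
    merged = np.logical_or(masks > 0, filtered > 0)
    # Discard objects smaller than min_size pixels, frame by frame.
    return np.stack([remove_small_objects(frame, min_size=min_size)
                     for frame in merged])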

Technicalities

Available model architectures

  • 'mobilenet_mobileunet_lstm': A pretrained MobileNet encoder with skip connections to the decoder of a MobileUNet, and a ConvLSTM layer at the end that makes the entire architecture recurrent (see the sketch after this list).
  • 'mobilenet_mobileunet': A pretrained MobileNet encoder with skip connections to the decoder of a MobileUNet (2D).
  • 'unet_lstm': 2D U-Net with ConvLSTM units in the contracting path.
  • 'categorical_unet_transpose': 2D U-Net for different labels ({0}, {1}, ...) with transpose convolutions instead of upsampling.
  • 'categorical_unet_fc_dil': 2D U-Net for different labels ({0}, {1}, ...) with fully connected dilated convolutions.
  • 'categorical_unet_fc': 2D U-Net for different labels ({0}, {1}, ...) with fully connected convolutions.
  • 'categorical_unet': 2D U-Net for different labels ({0}, {1}, ...).
  • 'unet' or "None": 2D U-Net with a single output.
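As referenced in the first item above, here is a minimal Keras sketch of the recurrent pattern: a per-frame 2D feature extractor wrapped in TimeDistributed, followed by a ConvLSTM2D layer that aggregates the time window. The layer sizes are placeholders, and this is not the repository's exact architecture:

from tensorflow.keras import layers, models

def toy_recurrent_segmenter(time_window=5, height=512, width=512):
    # Input video clip: model_time_windows frames with one channel each.
    inputs = layers.Input(shape=(time_window, height, width, 1))
    # Stand-in per-frame 2D feature extractor (a real model would use the
    # MobileNet/MobileUNet encoder-decoder here), applied frame by frame.
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, padding="same", activation="relu"))(inputs)
    # ConvLSTM2D aggregates the features across the time window; this is
    # the layer that makes the architecture recurrent.
    x = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=False)(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)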

Programmed loss functions

When the output of the network has just one channel, it is a foreground prediction, and the binary losses apply ("binary_cce", "weighted_bce", "weighted_bce_dice"; see model_lossfunction below).

When the output of the network has two channels, background and foreground are predicted, and the categorical losses apply:

  • (Weighted) categorical cross-entropy: Keras' classical categorical cross-entropy.
  • Sparse categorical cross-entropy: the same as the categorical cross-entropy, but it allows the user to provide the labelled ground truth as a single channel with as many labels as classes, rather than in a one-hot encoding fashion (see the short example below).
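The practical difference, illustrated with the standard Keras losses (not the repository's weighted variants):

import numpy as np
import tensorflow as tf

# Toy two-class prediction for two pixels.
probs = np.array([[0.9, 0.1], [0.2, 0.8]], dtype=np.float32)

# categorical_crossentropy expects one-hot encoded ground truth ...
onehot = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)
cce = tf.keras.losses.CategoricalCrossentropy()(onehot, probs)

# ... while sparse_categorical_crossentropy takes integer labels directly.
labels = np.array([0, 1])
scce = tf.keras.losses.SparseCategoricalCrossentropy()(labels, probs)

print(float(cce), float(scce))  # both print the same loss value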

Prepare the data

  • If you want to create a set of ground truth data with the format specified in the Cell Tracking Challenge, you can use the script prepare_videos_ctc.py.
  • If you want to create 2D images from the videos, you can use the script prepare_data.py.
  • In the folder additional_scripts you will find ImageJ macros and Python code for further data processing, for example to generate borders around the segmented cells.

Parameter configuration in the configuration.json

Each entry below gives the argument, a description, and an example value.

Model parameters

  • cnn_name: Model architecture. The available options are listed here. Example: "mobilenet_mobileunet_lstm_tips"
  • OUTPUTPATH: Directory where the trained model, logs and results are stored. Example: "externaldata_cce_weighted_001"
  • TRAINPATH: Directory with the reference annotations used to train the network. It should contain two folders (inputs and labels). The images should be named raw_000.tif and instance_ids_000.tif for the input and ground truth, respectively. Example: "/data/train/stack2im"
  • VALPATH: Directory with the reference annotations used to validate the network, with the same structure and naming as TRAINPATH. If you run different configurations or instances of a network, it is recommended to always keep the same folder here. Example: "/data/val/stack2im"
  • TESTPATH: Directory with the reference annotations used to test the network, with the same structure and naming as TRAINPATH. Example: "/data/test/stack2im"
  • model_n_filters: Number of filters in the first level of the U-Net. Example: 32
  • model_pools: Depth of the U-Net. Example: 3
  • model_kernel_size: Size of the convolution kernels inside the U-Net. It is 2D, as the network is designed for 2D data segmentation. Example: [3, 3]
  • model_lr: Model learning rate. Example: 0.01
  • model_mobile_alpha: Width multiplier of the MobileNetV2 used as a pretrained encoder. The values are limited by the TensorFlow model zoo to 0.35, 0.5 and 1. Example: 0.35
  • model_time_windows: Length, in frames, of the input video when training recurrent networks (ConvLSTM layers). Example: 5
  • model_dilation_rate: Dilation rate for dilated convolutions. If set to 1, it behaves like a normal convolution. Example: 1
  • model_dropout: Dropout ratio. It increases with the depth of the encoder-decoder. Example: 0.2
  • model_activation: Same as in the Keras & TensorFlow libraries; "relu" and "elu" are the most common ones. Example: "relu"
  • model_last_activation: Same as in the Keras & TensorFlow libraries; "sigmoid" and "tanh" are the most common ones. Example: "sigmoid"
  • model_padding: Same as in the Keras & TensorFlow libraries; "same" is strongly recommended. Example: "same"
  • model_kernel_initializer: Weight initialization method, with the same name as in the Keras & TensorFlow libraries. Example: "glorot_uniform"
  • model_lossfunction: For categorical U-Nets: "sparse_cce", "multiple_output", "categorical_cce" or "weighted_bce_dice". For binary U-Nets: "binary_cce", "weighted_bce_dice" or "weighted_bce". Example: "sparse_cce"
  • model_metrics: Accuracy metric to compute during model training. Example: "accuracy"
  • model_category_weights: Weights for multioutput networks (tips prediction).

Training

  • train_max_epochs: Number of training epochs. Example: 1000
  • train_pretrained_weights: Pretrained weights, expected to be inside the checkpoints folder created in OUTPUTPATH. If you want to run a new experiment, we suggest changing the network name cnn_name, which keeps track of the weights used for pretraining. Example: None or lstm_unet00004.hdf5
  • callbacks_save_freq: Saving frequency. Use a fairly large value to store the network only every 100 epochs, for example. If the frequency is smaller than the number of inputs processed in each epoch, a set of trained weights is stored on every epoch, which significantly increases the size of the checkpoints folder. Example: 50, 2000, ...
  • callbacks_patience: Number of processed inputs over which improvement is evaluated before reducing the learning rate. Example: 100
  • callbacks_tb_update_freq: TensorBoard update frequency. Example: 10
  • datagen_patch_batch: Number of patches to crop from each image entering the generator. Example: 1
  • datagen_batch_size: Number of images composing the batch on each training iteration. Final batch size = datagen_batch_size * datagen_patch_batch, and total number of iterations = floor(total images / (datagen_batch_size * datagen_patch_batch)); for instance, with datagen_batch_size = 5 and datagen_patch_batch = 1, each batch contains 5 patches. Example: 5
  • datagen_dim_size: 2D size of the data patches that enter the network. Example: [512, 512]
  • datagen_sampling_pdf: Sampling probability density function used to deal with data imbalance (few objects in the image). Example: 500000
  • datagen_type: If it contains "contours", the data generator creates a ground truth with the segmentation and the contours of those segmentations. By default, it generates a ground truth with two channels (background and foreground).

Inference

  • newinfer_normalization: Intensity normalization procedure. It calculates the mean, median or percentile of each input image before augmentation and cropping. Example: "MEAN", "MEDIAN", "PERCENTILE"
  • newinfer_uneven_illumination: Whether to correct the input images (before tiling) for uneven illumination. Example: "False"
  • newinfer_epoch_process_test: Epoch at which the trained network, named after cnn_name and stored at OUTPUTPATH/checkpoints, is loaded for testing. Example: 20
  • newinfer_padding: Halo, or padding: half the receptive field of a pixel. Example: [95, 95]
  • newinfer_data: Data to process. Example: "data/test/"
  • newinfer_output_folder_name: Name of the folder in which all the processed images will be stored. Example: "test_output"
  • PATH2VIDEOS: CSV file with the relation between the single 2D frames and the videos they come from. Example: "data/test/stack2im/videos2im_relation.csv"

Notes on code reused from other sources and on the CNN architecture definitions

U-Net for binary segmentation: the U-Net architecture for TensorFlow 2 is based on the example given in https://www.kaggle.com/advaitsave/tensorflow-2-nuclei-segmentation-unet.
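For orientation, here is a compact sketch of such a binary-segmentation U-Net in TensorFlow 2/Keras. It is a generic illustration rather than the repository's exact definition; the depth and filter count follow the model_pools = 3 and model_n_filters = 32 examples above:

from tensorflow.keras import layers, models

def conv_block(x, n_filters):
    # Two 3x3 convolutions, as in the classic U-Net.
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
    return x

def unet2d(input_shape=(512, 512, 1), n_filters=32, pools=3):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Contracting path: convolutions followed by max pooling.
    for level in range(pools):
        x = conv_block(x, n_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, n_filters * 2 ** pools)
    # Expanding path: transpose convolutions plus skip connections.
    for level in reversed(range(pools)):
        x = layers.Conv2DTranspose(n_filters * 2 ** level, 2, strides=2,
                                   padding="same")(x)
        x = layers.concatenate([x, skips[level]])
        x = conv_block(x, n_filters * 2 ** level)
    # Single-channel sigmoid output for foreground prediction.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)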
