Segmentation Training Pipeline

Overview

This package is a part of the Musket ML framework.

Reasons to use Segmentation Pipeline

Segmentation Pipeline was developed with a focus on fast, simply-declared experiments that can be easily stored, reproduced and compared to each other.

Segmentation Pipeline shares a lot of common parts with the Generic pipeline, but it makes defining the network architecture easier and adds a number of segmentation-specific features.

The pipeline provides the following features:

  • Allows describing experiments in a compact and expressive way
  • Provides a way to store and compare experiments in order to methodically find the best deep learning solution
  • Makes it easy to share experiments and their results when working in a team
  • Keeps experiment configurations separated from model definitions
  • Makes it easy to configure the network architecture
  • Provides great flexibility and extensibility via support of custom components
  • Common blocks like architectures, callbacks, model metrics, prediction visualizers and others are written once and become part of a common library

Installation

pip install segmentation_pipeline

Note: this package requires python 3.6

This package is a part of the Musket ML framework; it is recommended to install the whole collection of framework packages at once using the instructions here.

Launching

Launching experiments

The fit.py script is designed to launch experiment training.

In order to run an experiment or a number of experiments, a typical command line may look like this:

musket fit --project "path/to/project" --name "experiment_name" --num_gpus=1 --gpus_per_net=1 --num_workers=1 --cache "path/to/cache/folder"

--project points to the root of the project.

--name is the name of the project sub-folder containing the experiment yaml file.

--num_gpus sets the number of GPUs to use during the experiment launch.

--gpus_per_net is the maximum number of GPUs to use per single experiment.

--num_workers sets the number of workers to use.

--cache points to a cache folder for storing temporary data.

Other parameters can be found in the fit script reference

Launching tasks

The task.py script is designed to launch custom tasks for an experiment.

Tasks must be defined in the project python scope and marked by an annotation like this:

from musket_core import tasks, model

@tasks.task
def measure2(m: model.ConnectedModel):
    result = ...  # compute whatever your task needs from the trained model here
    return result

Working directory must point to the musket_core root folder.

In order to run an experiment or a number of experiments, a typical command line may look like this:

python -m musket_core.task --project "path/to/project" --name "experiment_name" --task "task_name" --num_gpus=1 --gpus_per_net=1 --num_workers=1 --cache "path/to/cache/folder"

--project points to the root of the project.

--name is the name of the project sub-folder containing the experiment yaml file.

--task is the name of the task function.

--num_gpus sets the number of GPUs to use during the experiment launch.

--gpus_per_net is the maximum number of GPUs to use per single experiment.

--num_workers sets the number of workers to use.

--cache points to a cache folder for storing temporary data.

Other parameters can be found in the task script reference

Launching project analysis

The analize.py script is designed to launch project-scope analysis.

Note that only experiments whose training has already finished will be covered.

musket analize --inputFolder "path/to/project"

--inputFolder points to a folder to search for finished experiments in, typically the project root.

Other parameters can be found in the analyze script reference

Usage guide

Training a model

Let's start with an absolutely minimalistic example. Say you have two folders: one contains jpeg images, the other contains png files with segmentation masks for those images, and you need to train a neural network that will do segmentation for you. In this extremely simple setup, all that you need is the following 5 lines of python code:

from segmentation_pipeline.impl.datasets import SimplePNGMaskDataSet
from segmentation_pipeline import  segmentation
ds=SimplePNGMaskDataSet("./pics/train","./pics/train_mask")
cfg = segmentation.parse("config.yaml")
cfg.fit(ds)

Looks simple, but the code refers to a config.yaml file, and that is where everything actually happens.

backbone: mobilenetv2 #let's select classifier backbone for our network 
architecture: DeepLabV3 #let's select segmentation architecture that we would like to use
augmentation:
 Fliplr: 0.5 #let's define some minimal augmentations on images
 Flipud: 0.5 
classes: 1 #we have just one class (mask or no mask)
activation: sigmoid #one class means that our last layer should use sigmoid activation
encoder_weights: pascal_voc #we would like to start from network pretrained on pascal_voc dataset
shape: [320, 320, 3] #This is our desired input image and mask size, everything will be resized to fit.
testSplit: 0.4
optimizer: Adam #Adam optimizer is a good default choice
batch: 16 #Our batch size will be 16
metrics: #We would like to track some metrics
  - binary_accuracy 
  - iou
primary_metric: val_binary_accuracy #and the most interesting metric is val_binary_accuracy
callbacks: #Let's configure some minimal callbacks
  EarlyStopping:
    patience: 15
    monitor: val_binary_accuracy
    verbose: 1
  ReduceLROnPlateau:
    patience: 4
    factor: 0.5
    monitor: val_binary_accuracy
    mode: auto
    cooldown: 5
    verbose: 1
loss: binary_crossentropy #We use simple binary_crossentropy loss
stages:
  - epochs: 100 #Let's go for 100 epochs

So, as you see, we have decomposed our task into two parts: code that actually trains the model, and an experiment configuration that determines the model and how it should be trained from a set of predefined building blocks.

Moreover, the whole fitting and prediction process can be launched with the built-in script; the only python code that is really required is the dataset definition, which lets the system know which data to load.

What does this code actually do behind the scenes?

  • it splits your data into 5 folds, and trains one model per fold;
  • it takes care of model checkpointing, generates example image/mask/segmentation triples, collects training metrics. All this data will be stored in the folders just near your config.yaml;
  • all your folds are initialized from a fixed default seed, so different experiments will use exactly the same train/validation splits

Also, datasets can be specified directly in your config file in a more generic way; see the examples ds_1, ds_2, ds_3 in the "segmentation_training_pipeline/examples/people" folder. In this case you can just call cfg.fit() without providing the dataset programmatically.

Let's look at what's going on in more detail.

General train properties

Let's take our standard example and check the following set of instructions:

testSplit: 0.4
optimizer: Adam #Adam optimizer is a good default choice
batch: 16 #Our batch size will be 16
metrics: #We would like to track some metrics
  - binary_accuracy 
  - iou
primary_metric: val_binary_accuracy #and the most interesting metric is val_binary_accuracy
loss: binary_crossentropy #We use simple binary_crossentropy loss

testSplit splits the train set into two parts, using one part for training and leaving the other untouched for later testing. The split is shuffled.

optimizer sets the optimizer.

batch sets the training batch size.

metrics sets the metrics to track during the training process. Metric calculation results will be printed in the console and written to the metrics folder of the experiment.

primary_metric sets the primary metric to track during the training process. Besides tracking, this metric is also used by default for metric-related decisions, for example, deciding which epoch's results are better.

loss sets the loss function. If your network has multiple outputs, you may also pass a list of loss functions (one per output).
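
For instance, a sketch of a configuration for a hypothetical network with two outputs might look like this (one loss per output, in the order the outputs are defined):

loss:
  - binary_crossentropy
  - dice_loss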

The framework supports composing a loss as a weighted sum of predefined loss functions. For example, the following construction

loss: binary_crossentropy+0.1*dice_loss

will result in a loss function composed of the binary_crossentropy and dice_loss functions.
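
Conceptually, such a composed loss behaves like the following Python sketch; this is not the framework's internal code, and the dice implementation shown is just one common variant, used here for illustration:

import keras.backend as K
from keras.losses import binary_crossentropy

def dice_loss(y_true, y_pred):
    # a common smoothed dice loss; the framework's registered version may differ
    intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
    union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3])
    return 1 - (2 * intersection + 1) / (union + 1)

def combined_loss(y_true, y_pred):
    # the weighted sum declared above as binary_crossentropy+0.1*dice_loss
    return binary_crossentropy(y_true, y_pred) + 0.1 * dice_loss(y_true, y_pred)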

There are many more properties to check in the Reference of root properties.

Defining architecture

Let's take a look at the following part of our example:

backbone: mobilenetv2 #let's select classifier backbone for our network 
architecture: DeepLabV3 #let's select segmentation architecture that we would like to use
classes: 1 #we have just one class (mask or no mask)
activation: sigmoid #one class means that our last layer should use sigmoid activation
encoder_weights: pascal_voc #we would like to start from network pretrained on pascal_voc dataset
shape: [320, 320, 3] #This is our desired input image and mask size, everything will be resized to fit.

The following three properties are required:

backbone This property configures the encoder (classification backbone) that should be used, for example mobilenetv2 in the sample above.

architecture This property configures the decoder architecture that should be used. UNet, Linknet, PSP, FPN, DeepLabV3 and more are supported.

classes sets the number of classes that should be used.

The following ones are optional, but commonly used:

activation sets the activation function that should be used in the last layer.

shape sets the desired shape of the input picture and mask, in the form height, width, number of channels. Input will be resized to fit.

encoder_weights configures the initial weights of the encoder.

Image and Mask Augmentations

The framework uses the awesome imgaug library for augmentation, so you only need to configure your augmentation process in a declarative way, like in the following example:

augmentation:  
  Fliplr: 0.5
  Flipud: 0.5
  Affine:
    scale: [0.8, 1.5] #random scalings
    translate_percent:
      x: [-0.2,0.2] #random shifts
      y: [-0.2,0.2]
    rotate: [-16, 16] #random rotations on -16,16 degrees
    shear: [-16, 16] #random shears on -16,16 degrees

The augmentation property defines an imgaug transformation sequence. Each object is mapped onto an imgaug transformer by name, and its parameters are mapped too.

In this example, the Fliplr and Flipud keys are automatically mapped onto the corresponding flip augmenters, and their 0.5 value is mapped onto the first parameter p of the augmenter. Named parameters are also mapped; for example, the scale key of Affine is mapped onto the scale parameter of the Affine augmenter.

One interesting augmentation option when doing a background removal task is replacing backgrounds with random images. We support this with the BackgroundReplacer augmenter:

augmentation:
  BackgroundReplacer:
    path: ./bg #path to folder with backgrounds
    rate: 0.5 #fraction of original backgrounds to preserve

Freezing and Unfreezing encoder

Freezing the encoder is often used with transfer learning. If you want to start with a frozen encoder, just add

freeze_encoder: true
stages:
  - epochs: 10 #Let's go for 10 epochs with frozen encoder
  
  - epochs: 100 #Now let's go for 100 epochs with trainable encoder
    unfreeze_encoder: true  

in your experiment configuration; then, in some stage configuration, just add

unfreeze_encoder: true

to the stage settings.

Both freeze_encoder and unfreeze_encoder can be put in the root section or inside a stage.

Note: This option is not supported for DeeplabV3 architecture.

Custom datasets

Training data and masks are not necessarily stored in files, so sometimes you need to declare your own dataset class. For example, the following code was used in the Airbus ship detection challenge to decode segmentation masks from rle-encoded strings stored in a csv file:

from segmentation_pipeline.impl.datasets import PredictionItem
from segmentation_pipeline.impl import datasets, rle
import os
import imageio
import pandas as pd

class SegmentationRLE(datasets.DataSet):
    #groups rle-encoded masks by image id and decodes them on access

    def __init__(self, path, imgPath):
        self.data = pd.read_csv(path)
        self.values = self.data.values
        self.imgPath = imgPath
        self.ship_groups = self.data.groupby('ImageId')
        self.masks = self.ship_groups['ImageId']
        self.ids = list(self.ship_groups.groups.keys())

    def __len__(self):
        return len(self.masks)

    def __getitem__(self, item):
        pixels = self.ship_groups.get_group(self.ids[item])["EncodedPixels"]
        return PredictionItem(self.ids[item],
                              imageio.imread(os.path.join(self.imgPath, self.ids[item])),
                              rle.masks_as_image(pixels) > 0.5)

def getTrain() -> datasets.DataSet:
    return SegmentationRLE("train.csv", "images/")

Now, if this python code sits in a python file located in the modules folder of the project, and that file is referenced by the imports instruction, the following YAML can refer to it:

dataset:
  getTrain: []

dataset sets the main training dataset.

datasets sets up a list of available datasets that can be referred to by other entities.

Multistage training

Sometimes you need to split your training into several stages. You can easily do it by adding several stage entries in your experiment configuration file.

The stages instruction allows setting up stages of the training process, where for each stage it is possible to set specific training options like the number of epochs, learning rate, loss, callbacks, etc. The full list of stage properties can be found here.

stages:
  - epochs: 100 #Let's go for 100 epochs
  - epochs: 100 #Let's go for 100 epochs
  - epochs: 100 #Let's go for 100 epochs

Each stage can also override individual training options, for example:

stages:
  - epochs: 6 #Train for 6 epochs
    negatives: none #do not include negative examples in your training set 
    validation_negatives: real #validation should contain all negative examples    

  - lr: 0.0001 #let's use different starting learning rate
    epochs: 6
    negatives: real
    validation_negatives: real

  - loss: lovasz_loss #let's override loss function
    lr: 0.00001
    epochs: 6
    initial_weights: ./fpn-resnext2/weights/best-0.1.weights #let's load weights from this file    

Balancing your data

One common case is the situation when part of your images does not contain any objects of interest, like in the Airbus ship detection challenge. Moreover, your data may be too heavily imbalanced, so you may want to rebalance it. Alternatively, you may want to inject some additional images that do not contain objects of interest to decrease the number of false positives produced by the trained model.

These scenarios are supported by the negatives and validation_negatives settings of the training stage configuration. These settings accept the following values:

  • none - exclude negative examples from the data
  • real - include all negative examples
  • an integer number (1, 2, or any other), specifying how many negative examples should be included per positive example

For example:

stages:
  - epochs: 6 #Train for 6 epochs
    negatives: none #do not include negative examples in your training set 
    validation_negatives: real #validation should contain all negative examples    

  - lr: 0.0001 #let's use different starting learning rate
    epochs: 6
    negatives: real
    validation_negatives: real

  - loss: lovasz_loss #let's override loss function
    lr: 0.00001
    epochs: 6
    initial_weights: ./fpn-resnext2/weights/best-0.1.weights #let's load weights from this file    

If you are using this setting, your dataset class must implement an isPositive method which returns True for indices that contain positive examples:

    def isPositive(self, item):
        pixels = self.ship_groups.get_group(self.ids[item])["EncodedPixels"]
        for mask in pixels:
            if isinstance(mask, str):
                return True
        return False

Advanced learning rates

Dynamic learning rates

As described in Cyclical Learning Rates for Training Neural Networks, CLR policies can provide quicker convergence for some neural network tasks and architectures.

We support them by adopting Brad Kenstler's CLR callback for Keras.

If you want to use them, just add CyclicLR in your experiment configuration file as shown below:

callbacks:
  EarlyStopping:
    patience: 40
    monitor: val_binary_accuracy
    verbose: 1
  CyclicLR:
     base_lr: 0.0001
     max_lr: 0.01
     mode: triangular2
     step_size: 300

There are also ReduceLROnPlateau and LRVariator options to modify the learning rate on the fly.

LR Finder

Estimating an optimal learning rate for your model is important; we support this by using a slightly changed version of Pavel Surmenok's Keras LR Finder:

from segmentation_pipeline.impl.datasets import SimplePNGMaskDataSet
from segmentation_pipeline import segmentation
import matplotlib.pyplot as plt

cfg = segmentation.parse("people-1.yaml")
ds = SimplePNGMaskDataSet("./train", "./train_mask")
finder = cfg.lr_find(ds, start_lr=0.00001, end_lr=1, epochs=5)
finder.plot_loss(n_skip_beginning=20, n_skip_end=5)
plt.show()
finder.plot_loss_change(sma=20, n_skip_beginning=20, n_skip_end=5, y_lim=(-0.01, 0.01))
plt.show()

will result in a couple of helpful plots: loss versus learning rate, and loss change versus learning rate.

Training on crops

Your images can be too large to train the model on. In this case you probably want to train the model on crops. All that you need to do is to specify the number of splits per axis. For example, the following lines in the config

shape: [768, 768, 3]
crops: 3

will lead to splitting each image/mask into 9 cells (3 horizontal splits and 3 vertical splits) and training the model on these cells. Augmentations will be run separately on each cell. The crops property sets the number of cells along a single dimension.

At prediction time, your images will be split into the same cells, prediction will be executed on each cell, and then the results will be assembled into a single final mask, so the whole cropping process is invisible from a consumer perspective.
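
The splitting/reassembly idea can be illustrated with the following plain numpy sketch; this is only an illustration of the cell grid, not the framework's actual implementation:

import numpy as np

def split_into_cells(image, crops=3):
    # split an HxWxC image into crops*crops equal cells (H and W assumed divisible by crops)
    h, w = image.shape[0] // crops, image.shape[1] // crops
    return [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(crops) for c in range(crops)]

def assemble_cells(cells, crops=3):
    # stitch per-cell predictions back into a single mask
    rows = [np.concatenate(cells[r * crops:(r + 1) * crops], axis=1) for r in range(crops)]
    return np.concatenate(rows, axis=0)

image = np.zeros((768, 768, 3))
cells = split_into_cells(image, crops=3)                      # 9 cells of shape (256, 256, 3)
mask = assemble_cells([c[..., :1] for c in cells], crops=3)   # back to (768, 768, 1)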

Using trained model

Okay, our model is trained, and now we need to actually do image segmentation. Let's say we need to run image segmentation on images in a directory and store the results in a csv file:

from segmentation_pipeline import  segmentation
from segmentation_pipeline.impl.rle import rle_encode
from skimage.morphology import remove_small_objects, remove_small_holes
import pandas as pd

#this is our callback which is called for every image
def onPredict(file_name, img, data):
    threshold = 0.25
    predictions = data["pred"]
    imgs = data["images"]
    post_img = remove_small_holes(remove_small_objects(img.arr > threshold))
    rle = rle_encode(post_img)
    predictions.append(rle)
    imgs.append(file_name[:file_name.index(".")])
    pass

cfg= segmentation.parse("config.yaml")

predictions = []
images = []
#Now let's use best model from fold 0 to do image segmentation on images from images_to_segment
cfg.predict_in_directory("./images_to_segment", 0, 0, onPredict, {"pred": predictions, "images": images})

#Let's store results in csv
df = pd.DataFrame.from_dict({'image': images, 'rle_mask': predictions})
df.to_csv('baseline_submission.csv', index=False)

Ensembling predictions

And what if you want to ensemble models from several folds? Just pass a list of fold numbers to predict_in_directory like in the following example:

cfg.predict_in_directory("./images_to_segment", [0,1,2,3,4], onPredict, {"pred": predictions, "images": images})

Another supported option is to ensemble results from extra test-time augmentation (flips) by adding the keyword argument ttflips=True.
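
For example, a hedged sketch based on the call above, with ttflips passed as a keyword argument:

cfg.predict_in_directory("./images_to_segment", [0, 1, 2, 3, 4], onPredict,
                         {"pred": predictions, "images": images}, ttflips=True)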

Custom evaluation code

Sometimes you need to run custom evaluation code. In such cases you may use the evaluateAll method, which provides an iterator over batches containing original images, training masks and predicted masks:

import numpy as np
from skimage.morphology import binary_opening, disk
from segmentation_pipeline.impl import rle

total = np.zeros(20)  # accumulated score per threshold
for batch in cfg.evaluateAll(ds, 2):
    for i in range(len(batch.predicted_maps_aug)):
        masks = ds.get_masks(batch.data[i])
        for d in range(1, 20):
            # threshold the predicted mask and clean it up with a binary opening
            cur_seg = binary_opening(batch.predicted_maps_aug[i].arr > d / 20, np.expand_dims(disk(2), -1))
            cm = rle.masks_as_images(rle.multi_rle_encode(cur_seg))
            pr = f2(masks, cm)  # f2 is your own competition metric
            total[d] = total[d] + pr

Accessing model

You may get the trained keras model by calling cfg.load_model(fold, stage).
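
For example, a minimal hedged sketch of running the raw keras model on a single image (assuming the input shape matches the shape declared in config.yaml):

import numpy as np

model = cfg.load_model(0, 0)         # best weights of fold 0, stage 0
image = np.zeros((1, 320, 320, 3))   # a dummy batch with one 320x320 RGB image
mask = model.predict(image)          # sigmoid probabilities of shape (1, 320, 320, 1)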

Analyzing experiments results

Okay, we have done a lot of experiments, and now we need to compare the results and understand what works better. This repository contains a script which may be used to analyze a folder containing sub-folders with experiment configurations and results. The script gathers all configurations, diffs them by doing a structural diff, then for each configuration it averages the metrics over all folds and generates a csv file containing the metrics and the parameters that were actually changed in your experiments.

The script accepts the following arguments:

  • inputFolder - root folder to search for experiments configurations and results
  • output - file to store aggregated metrics
  • onlyMetric - if you specify this option, all other metrics will not be written to the report file
  • sortBy - metric that should be used to sort results

Example:

python analize.py --inputFolder ./experiments --output ./result.csv

What is supported?

At this moment the segmentation pipeline supports the UNet, Linknet, PSP, FPN and DeeplabV3 architectures.

The UNet, Linknet, PSP and FPN architectures work with a number of classification backbones, all of which support weights pretrained on ImageNet:

encoder_weights: imagenet

The DeeplabV3 architecture has its own set of supported backbones and supports weights pretrained on PASCAL VOC:

encoder_weights: pascal_voc

Each architecture also supports some specific options; the list of options is documented in the segmentation RAML library.

Supported augmentations are documented in the augmentation RAML library.

Callbacks are documented in the callbacks RAML library.

Custom architectures, callbacks, metrics

The segmentation pipeline uses the keras custom objects registry to find entities, so if you need to use a custom loss function, activation or metric, all that you need to do is register it in Keras:

keras.utils.get_custom_objects()["my_loss"]= my_loss
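
For example, a hypothetical class-weighted binary crossentropy could be defined and registered like this (a sketch; my_loss is not part of the framework):

import keras
import keras.backend as K

def my_loss(y_true, y_pred):
    # hypothetical example: binary crossentropy weighted 3:1 towards the positive class
    weights = y_true * 3.0 + (1.0 - y_true)
    return K.mean(weights * K.binary_crossentropy(y_true, y_pred))

keras.utils.get_custom_objects()["my_loss"] = my_loss

After registration, the function can be referenced by name in the experiment configuration, e.g. loss: my_loss.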

If you want to inject a new architecture, you should register it in the segmentation.custom.models dictionary.

For example:

segmentation.custom.models['MyUnet']=MyUnet 

where MyUnet is a function that accepts architecture parameters as arguments and returns an instance of a keras model.
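
A minimal hedged sketch of such a function is shown below; which parameters the framework actually passes (shape, classes, activation, etc.) is an assumption based on the configuration properties described above:

import keras
from keras import layers
from segmentation_pipeline import segmentation

def MyUnet(shape=(320, 320, 3), classes=1, activation="sigmoid", **kwargs):
    # a toy stand-in for a real U-Net, just to show the expected contract:
    # take architecture parameters, return a keras model
    inputs = keras.Input(shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(classes, 1, activation=activation)(x)
    return keras.Model(inputs, outputs)

segmentation.custom.models['MyUnet'] = MyUnet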

Examples

Training a background removal task (PicsArt hackathon) in Google Colab

FAQ

How to continue training after a crash?

If you would like to continue training after a crash, call the setAllowResume method before calling fit:

cfg= segmentation.parse("./people-1.yaml")
cfg.setAllowResume(True)
ds=SimplePNGMaskDataSet("./pics/train","./pics/train_mask")
cfg.fit(ds)

My notebooks constantly run out of memory, what can I do to reduce memory usage?

One way to reduce memory usage is to lower the augmentation queue limit, which is 50 by default, like in the following example:

segmentation_pipeline.impl.datasets.AUGMENTER_QUEUE_LIMIT = 3

How can I run a separate set of augmenters on the initial image/mask when replacing backgrounds with BackgroundReplacer?

  BackgroundReplacer:
    rate: 0.5
    path: ./bg
    augmenters: #these augmenters will run on the original image before the background is replaced
      Affine:
        scale: [0.8, 1.5]
        translate_percent:
              x: [-0.2,0.2]
              y: [-0.2,0.2]
        rotate: [-16, 16]
        shear: [-16, 16]
    erosion: [0,5]   

How can I visualize images that are used for training (after augmentations)?

You should set showDataExamples to True, like in the following sample:

cfg= segmentation.parse("./no_erosion_aug_on_masks/people-1.yaml")
cfg.showDataExamples=True

This will generate samples of the training images and store them in the examples folder at the end of each epoch.

What can I do if I have some extra training data that should not be included in validation, but should be used during training?

extra_data=NotzeroSimplePNGMaskDataSet("./phaces/all","./phaces/masks") #My dataset that should be added to training
segmentation.extra_train["people"] = extra_data

and in the config file:

extra_train_data: people

How to get basic statistics across my folds/stages?

This code sample will return primary metric stats over the folds/stages:

cfg= segmentation.parse("./no_erosion_aug_on_masks/people-1.yaml")
metrics = cfg.info()

What if I have some callbacks that are configured globally, but I need some extra callbacks for my last training stage?

There are two possible ways to configure callbacks at the stage level:

  • override all global callbacks with the callbacks setting.
  • add your own custom callbacks with the extra_callbacks setting.

In the following sample, the CyclicLR callback is only appended to the second stage of training:

loss: binary_crossentropy
stages:
  - epochs: 20
    negatives: real
  - epochs: 200
    extra_callbacks:
      CyclicLR:
        base_lr: 0.000001
        max_lr: 0.0001
        mode: triangular
        step_size: 800
    negatives: real

What if I would like to build a really large ensemble of models?

One option is to store the predictions for each file and model in numpy arrays, and then sum these predictions, like in the following sample:

cfg.predict_to_directory("./pics/test","./pics/arr1", [0, 1, 4, 2], 1, ttflips=True,binaryArray=True)
cfg.predict_to_directory("./pics/test", "./pics/arr", [0, 1, 4, 2], 2, ttflips=True, binaryArray=True)
segmentation.ansemblePredictions("./pics/test",["./pics/arr/","./pics/arr1/"],onPredict,d)

How to train on multiple gpus?

cfg.gpus=4 #or another number matching the count of gpus that you have
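
For example, a minimal sketch combining this with the training flow from the first example (ds and config.yaml as defined there):

from segmentation_pipeline.impl.datasets import SimplePNGMaskDataSet
from segmentation_pipeline import segmentation

ds = SimplePNGMaskDataSet("./pics/train", "./pics/train_mask")
cfg = segmentation.parse("config.yaml")
cfg.gpus = 4  # spread training over 4 GPUs
cfg.fit(ds)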